Picking through the haystack -- the role of AI in cyber security [Q&A]


Over the past year or so, the idea of using artificial intelligence as an aid to cyber security has gained a lot of support.

But what role do AI and machine learning play, and what will the future of security look like when they're in widespread use? We spoke to Gene Stevens, co-founder and CTO of network security company ProtectWise, to find out.

BN: What's the background to ProtectWise?

GS: ProtectWise is about five years old, and we've been working on bringing enterprise security to large-scale, Fortune 2000 businesses. We've found a way of shifting the core analytic security functionality to the cloud via an on-demand service.

We've come to the conclusion that the number one challenge in cyber security is one of human resources rather than technology. We've used our cloud functionality as a way of changing how people use and set up their security solutions. We're mostly focused on networks, shipping traffic to the cloud for analysis. We also store data packets for an unlimited time, which lets us go back in time when there's a shift in the threat landscape and use the intelligence to discover the previously unknown.

BN: How does machine learning and AI help?

GS: AI is one of a number of different approaches used to identify and support the investigation of attacks. It allows what we call a 'hierarchy of expert systems' approach to detection. This detection in depth uses many different expert systems in concert with one another and correlates their output to imitate what a human would do when determining which threats are most important.

Machine learning is only one of these expert systems. Combining data from many different systems parallels what humans do when they get together in a group. If they all have different experiences, backgrounds and perspectives on the situation, but begin to agree on a course of action, then you can be pretty sure that you're onto something. Similarly, if the team wildly disagrees, then you know you may need more time or more data. Automating the early stages of this helps make better use of humans' time by scoring intelligence before it reaches them.
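To make the 'hierarchy of expert systems' idea concrete, here is a minimal sketch of consensus scoring across independent detectors. The detector names, scores and thresholds are illustrative assumptions for this article, not ProtectWise's actual implementation.

```python
# Minimal consensus-scoring sketch: several independent detectors each emit a
# threat score in [0, 1]; agreement around a high mean escalates the alert,
# while wide disagreement routes it to a human for more investigation.
from statistics import mean, pstdev

def consensus_score(detector_scores: dict) -> dict:
    scores = list(detector_scores.values())
    avg, spread = mean(scores), pstdev(scores)
    if spread > 0.25:
        verdict = "detectors disagree -- needs more data or a human"
    elif avg > 0.7:
        verdict = "likely threat -- escalate to analyst"
    else:
        verdict = "likely benign -- log only"
    return {"score": round(avg, 2), "spread": round(spread, 2), "verdict": verdict}

# Hypothetical detector outputs for one network event.
alert = {"ids_signature": 0.9, "anomaly_model": 0.8, "threat_intel": 0.75}
print(consensus_score(alert))
# {'score': 0.82, 'spread': 0.06, 'verdict': 'likely threat -- escalate to analyst'}
```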

BN: So, you're improving the quality of alerts that reach a human operator?

GS: Absolutely, that is the main takeaway from the use of AI. From a service perspective you have many different detection systems reporting independently. You might have 10,000 alarms a day on a large network; AI and ML can provide a highly condensed version that imitates what humans would do if they had the time and resources.

Of course, machine learning is only as good as the data that goes into the system. You need a massive bank of data behind it, and intelligence is still largely a human research effort. It's the result of researchers looking at what current actor groups are working on, what techniques they're using and so on, and developing a sense of how they work. This is coupled with a sense of how internal threats work, as well as patterns and information shared within industries, across working groups, etc.

AI isn't about replacing human responders, it's about serving them better so they can do the things they're really good at. Humans are super creative; they excel at high-level understanding and context, working out which things matter and which don't. But they can't read thousands of lines of alarms quickly and easily in the way a machine can. Machines are good at scanning large amounts of data and spotting patterns, and they're really easy to scale. Machine learning is about doing the things humans would ideally do but that are too massive or repetitive. It allows you to pick through the haystack at scale and spot things that would be too tedious for humans to identify.

BN: One of the things we hear a lot at the moment is that security teams are overwhelmed by the volume of alerts. Is machine learning the answer?

GS: That's a huge element. No matter how big the security team, it's impossible for large organizations to stay on top of all the alerts they get. Automating that can draw on different classes of detection; it could be traditional behavioral analysis -- spotting activity outside the norm. If you can tie all these forms of detection together, it's easier to see when something really needs investigating.
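That 'activity outside the norm' idea can be as simple as a statistical baseline per host. The sketch below flags a host whose latest daily traffic volume deviates sharply from its own history; the field names, figures and three-sigma threshold are illustrative assumptions, not a real product's logic.

```python
# Toy behavioral baseline: flag hosts whose latest traffic volume is more
# than z_threshold standard deviations away from their historical mean.
from statistics import mean, stdev

def flag_outliers(daily_bytes: dict, z_threshold: float = 3.0) -> list:
    flagged = []
    for host, history in daily_bytes.items():
        baseline, latest = history[:-1], history[-1]
        if len(baseline) < 2:
            continue  # not enough history to establish a norm
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(latest - mu) / sigma > z_threshold:
            flagged.append(host)
    return flagged

traffic = {
    "db-server": [510, 495, 530, 505, 4900],         # sudden, exfiltration-sized spike
    "web-frontend": [1200, 1180, 1250, 1210, 1230],  # normal variation
}
print(flag_outliers(traffic))  # ['db-server']
```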

Machine learning has unsupervised elements, such as classification and grouping of like items, but it also has human-driven elements, where an analyst has flagged something to the system. This semi-supervised approach helps make clearer sense of the intelligence landscape, helping weed out noise and false positives.
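That semi-supervised workflow maps directly onto standard tooling. As a hedged illustration (the features and labels below are toy values, not real alert data), scikit-learn's LabelSpreading can propagate a handful of analyst verdicts to the unlabeled alerts around them:

```python
# Semi-supervised sketch: two analyst-labeled alerts (0 = benign, 1 = threat)
# propagate their labels to similar unlabeled alerts (marked -1).
import numpy as np
from sklearn.semi_supervised import LabelSpreading

# Each row is a hypothetical alert feature vector, e.g. [frequency, entropy].
X = np.array([[0.10, 0.20], [0.15, 0.25], [0.90, 0.80],
              [0.85, 0.90], [0.20, 0.10], [0.88, 0.85]])
y = np.array([0, -1, 1, -1, -1, -1])  # -1 = unlabeled

model = LabelSpreading(kernel="knn", n_neighbors=2)
model.fit(X, y)
print(model.transduction_)  # inferred labels for every alert, e.g. [0 0 1 1 0 1]
```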

BN: Is there a risk of an AI arms race developing where the bad guys also start using it to find weaknesses?

GS: I do believe that will start happening, but it will probably take some time to happen at any scale. One of the most misleading things in the market is the hype around AI transforming the way threat actors work. At the moment there's not that much AI on the offensive side. The most sophisticated attacks out there right now are deeply human; they work with strong organizational knowledge gained in traditional ways from existing employees, social engineering, rogue actors, etc. This is coupled with a knowledge of the correct communication patterns, so attacks are more likely to succeed. AI will play a role in automating the attack as well as the defense, but most of the major risks will come from a non-AI approach, and there won't quite be the arms-race doomsday that many people are afraid of.

BN: Is that partly because many of the weaknesses are on the inside?

GS: Absolutely. Why create a sophisticated mechanism to break in through the third-floor window when the front door is wide open?

Also, a big part of the ML approach is attack method discovery -- analyzing headers, for example, to spot what may be malicious activity without needing to look at the payload. This includes identifying encrypted traffic and looking for patterns, certificate names, issuers and so on that can tie traffic back to malware groups and threat actors, again without having to inspect the payload.
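As a hedged illustration of that payload-free approach, the sketch below matches TLS certificate metadata against indicator patterns. The patterns themselves are invented for this example and are not real threat intelligence.

```python
# Match certificate metadata (subject CN, issuer organization) against
# made-up indicator patterns -- no payload inspection required.
import re

INDICATORS = [
    (re.compile(r"^[a-z0-9]{16}\.(xyz|top)$"), "DGA-style certificate subject"),
    (re.compile(r"Default Company Ltd", re.I), "commodity malware cert boilerplate"),
]

def match_certificate(subject_cn: str, issuer_org: str) -> list:
    hits = []
    for pattern, description in INDICATORS:
        if pattern.search(subject_cn) or pattern.search(issuer_org):
            hits.append(description)
    return hits

print(match_certificate("k3jd92ls0qpax7mn.xyz", "Default Company Ltd"))
# ['DGA-style certificate subject', 'commodity malware cert boilerplate']
```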

BN: Will we see AI become part of the security mainstream in the near future?

GS: Within the next couple of years, yes. Most large organizations, certainly the ones that we work with, are very focused on finding ways to use AI to protect their own architecture. I believe that most of it will be pretty underwhelming, though. There will be a lot of false starts, a lot of time and resources spent, a lot of expensive learning. There will be a lot of excitement, but I am optimistic that over the long arc of this it's going to pay off.


