Why AI isn't the only answer to cybersecurity [Q&A]
Read about any new cybersecurity product today and the chances are that it will be keen to stress its use of AI in some form.
But are we expecting too much from AI and are companies adopting it just because it's on trend? We spoke to Nadav Arbel, co-founder and CEO of managed SOC platform CYREBRO, to find out more about AI's role and why the human factor is still important.
BN: Is the security industry placing too much faith in AI?
NA: Many things from the virtual world are beginning to make significant impacts in the physical world. Artificial intelligence just made it possible for Virgin Galactic to kick off the commercialization of spaceflight, but a human is still at the helm. AI is an enhancement to the human factor and guides complex decisions. People are influenced by the colorful imaginations of science fiction writers and have many misconceptions about AI, but at the end of the day it’s a powerful resource that surfaces valuable information to act upon. Human intelligence is fundamental and the ultimate companion for an AI; AI makes it possible for us to operate much more effectively. However, there's a distinction to make between AI and machine learning.
Machine learning uses man-made algorithms to analyze big data and provide insights dramatically faster than we can, which informs human intuition. The problem is that the amount of information that needs to be ingested, analyzed, reviewed, digested, and acted upon is tremendous. A team of 50 people wouldn't get through a quarter of it. The sheer quantity of that heavy lifting pushes everybody to say, 'okay, I need this done automatically'.
We, as people, have the ability to think out of the box, considering things that a seasoned security analyst would expect from somebody designing defenses against a variety of offenses. Machine learning saves time and empowers humans to focus on what's important, add value, and automate tasks where it makes sense. AI comes in where the machine starts to reach conclusions that we couldn't and starts offering directions or highlighting actions we didn't see. The intelligence behind the design starts coming from the machine itself and not the coder behind it.
Not every organization has the wherewithal to roll their own security system and a well-piloted AI can help. Intelligent systems are necessary to notice what's significant to guard against cyberattacks from within a massive amount of events. The technology is headed in this direction, but today, machine learning is ultimately a bicycle for the brain (it can do in a split second what would normally take a security analyst a month), but it requires expert guidance to steer and navigate hazards.
A human-designed algorithm loses relevancy once the attack vector changes, because whoever designed it didn't consider every possibility. It'll adapt to an extent, but it's not actually fully cognitive machine intelligence until it learns how to adapt by itself and do things that are out of the box. It would be a misrepresentation to position AI as a complete substitute for people: Virgin Galactic still has a crew for that reason.
However, AI helps crews to navigate safely, which is essential for space tourism without the size and resources of NASA. Cybersecurity on Planet Earth isn't much different, except that it still mostly resides within ML territory outside of a few specific areas. The confluence of human intelligence and AI makes it possible for experts to serve organizations that are less sophisticated in protecting their systems and networks against malefactors.
BN: Won't the bad guys adopt the technology too? Is there a risk of getting into an AI versus AI arms race?
NA: Philosophically speaking, yes, a hundred percent. That's been true of every technology and every war: the force with the slightly more advanced technology has had the upper hand. The battle between attacker and defender is exactly where the human factor comes into play, and AI helps defenders to focus and make decisions that optimize their time and skills.
What we're seeing today is basic technology that's designed for very specific attacks. Very sophisticated technology is being used in only 0.1 percent of attacks. There are millions of attacks every day, so you'll see advanced techniques, but nine million other attacks are happening that are just super rudimentary, garden-variety ransomware attacks and viruses. The latter are the mass of the attacks, and they're also the mass of the damage. If you're a nuclear reactor, then somebody's going to do massive harm, but if you're an average SMB, then you're a lot more susceptible to those garden-variety attacks that we call drive-bys.
Those machines aren't cutting edge and those attacks aren't either. They're just the common things that have been learned over the past few years. However, at the forefront of attacks and premium APTs (advanced persistent threats), it'll be a battle of wits between the defenders' advanced technology and the attackers'. AI can free the defender to be more cognizant of what's going on and respond in kind.
BN: Why is the human element still essential?
NA: The human element still makes all the difference. Technology spreads very fast today, and the assumption is that if I have something and it's out there, others will adopt it quickly. Even if you don't release it, once you start using it, people will learn, investigate, and reverse engineer the technology. I'm not talking about that premium cutting-edge 1 percent; I'm talking about the 99 percent of technology used for offense and defense. That's where the combined force of human experts and ML comes into play, and that foresight will guide downstream organizations toward better security outcomes. The reverse is also true, with ML serving as an indispensable platform to get security work done. The way we work has changed, because our intelligence and creativity are being augmented by it.
If an incident is a false positive 99.9 percent of the time, statistically speaking, computers will start not paying attention to it because of the sheer mass. That's why it's not really cognitive intelligence as long as humans are feeding the machines. The way you write the code, the way you design the code, the way you build your approach to attacks, or your understanding of networking and IT has a lot of influence on the tool. All of that's purely the human factor, but even in the day-to-day heavy lifting, most of the tools don't do everything that's required. They'll alert you, but if you disregard those alerts, the tool might as well not be there. Any platform, including those that utilize ML, has to be designed with users and incident response in mind.
That's the idea behind CYREBRO, for instance. Technology is super important, but it also has to flow into a consolidated place. Reporting and answering questions that are asked by a human is still vital, and possessing a sound technical skill set will be the most important thing for the next 10 to 15 years. The machine will tell you a lot of things that you have to do, or that it reached a conclusion, and then when you follow it up you'll realize it's a false positive. When you stop listening to that security system, it becomes the boy who cried wolf.
There has to be a human who's there with the bigger picture understanding. It's very difficult for a machine to take a risk analysis into account and say, 'You know something malicious is happening, but I'm waiting for this huge deal to come in. I'll wait and it's worth the risk.' A machine's not going to make those critical risk analysis decisions. Most small businesses that use security technology are also hesitant to let it automatically respond fully to everything. It won't last and they'll shut it down after a year because it's going to shut down machines, erase things, and it'll wake them up in the middle of the night. It may be tolerable once or twice, but not after 10-20 incidents. Expert guidance is the determinative factor: human hands model clay into art, but an artist needs a medium to do what they do best. Security analysts interact with ML in a similar manner.
Layers of AI are beginning to make those determinations, but people are still very much involved.
BN: What questions do businesses need to ask before adopting a new security solution?
NA: Obviously, there are many questions to consider. There's the business side of 'do I need it?', and that's probably a question you need to ask when you buy anything. I don't know the percentages, but the amount of security technology being bought and not utilized is staggering. How many organizations have bought SIEMs and EDRs, haven't been using them, but are still paying three years in? It's an immense number, and they simply don't know how to use them, which is a problem that affects IT teams everywhere.
Security teams have this 'set it and forget it' approach, specifically within SMBs. They install new technologies and then find it challenging to actively operate them and constantly work to understand what they're saying. You need to notice when a tool isn't running on certain machines, or when it's flagging problems that aren't necessarily problems because it just doesn't understand the configuration. The more tools you buy, the bigger the ocean of information you need to sift through becomes.
The first question should be, 'can you handle another security system?' Because if you can't, there's no point in buying one. A security system is best served by a very large and talented team with a wide range of skilled analysts, or barring that, an AI platform that surfaces actionable events and security information. You should ask whether there’s a better alternative to it or a more reasonable way to mitigate your risk, because ironically, a lot of systems cost more to buy, implement, and operate than the potential damage that they're trying to protect against.
Focus on the basics before you buy the next toy, with the caveat that, obviously, you might have a new problem and need to buy a system for that. AI/ML platforms are the point of convergence to maximize and justify the other security investments you've made. It's a much smarter strategy than using your budget to build more silos where vital alerts could sit until a post-mortem analysis, and you'll make better use of your time by consolidating the view of your systems. ML is required to process big data, and AI will work to produce the appropriate view for incident response and remediation.
BN: Are there some simple things companies can do to better protect themselves?
NA: Protection has many layers to it, the first being preparation. If you're well prepared as a company, i.e. you do backups and you test your ability to restore from those backups, that'll solve 90 percent of the ransomware problem. There are a lot of fundamental things in security that companies don't do. You can buy any system that you want, but if you haven't replaced the password in five years, then you have a different problem. That's policy, and a system won't enforce it for you.
There's a reason why the fundamentals are what they are; they are always the heavy lift and make up 80 percent of security. For example, patch management: if every single one of your machines were fully patched and up-to-date, which is almost impossible to do, you'd significantly reduce your risk of a successful attack. The ROI of spending a little more time and money there would be much better than buying another security system.
We're creating our own forest, and it's becoming so convoluted that nobody can see anything inside it. And even if the person in charge of it is a security person, there's no way that he or she has such a wide range of skills. They're supposed to be a security expert, a network expert, and an expert in coding. However, there's no way that those [security] systems will cover every one of those areas. At some point, you lose the thread of what a system is telling you, meaning there will be more unresolved issues coming from the system than resolved ones. When something happens and an investigator comes in, they'll show you 20 places where, theoretically, you messed up because it was reported but you didn't do anything. ML and human experts can surface that knowledge for a better incident response than over-reliance on systems running in their own silos. AI is what's beginning to make recommendations for us to follow. Use what you already have, more intelligently, with guidance.
That's it in a nutshell: any technology should empower you to focus on your greatest priorities.