The artificial intelligence tug-of-war in the world of cybersecurity [Q&A]


It's a rare cybersecurity product these days that doesn't claim to have some form of AI capability. But exactly what benefits does AI deliver? And is there a risk of an arms race as threat actors also turn to the technology?

We spoke to Corey Nachreiner, CSO at WatchGuard Technologies, to find out more about the role of AI in cybersecurity.

BN: What role does AI play in cybersecurity? What are some key use cases?

CN: Artificial intelligence plays an increasingly important role in cybersecurity. A recent Pulse Survey shows that 68 percent of senior executives say they are using cybersecurity tools that incorporate AI technologies, and among those who are not yet using AI, 67 percent are willing to consider it. The survey also identifies the main areas of cybersecurity that benefit from AI: network security, identity and access management, behavioral analytics (accidental/malicious internal threat detection), automated response, and endpoint detection and response. For example, leveraging AI can help reduce zero-day malware by automating the discovery of threats without the need to wait for human-driven signatures. Additionally, security teams can rely on machine learning to review vast quantities of data in order to detect malicious behavior more quickly.
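
To make the signature-less idea concrete, here is a minimal sketch of ML-based malware triage, assuming numeric feature vectors (file entropy, import counts and so on) have already been extracted from labeled samples. The features and data are purely illustrative, not any vendor's actual pipeline:

```python
# Minimal sketch of signature-less malware triage, assuming feature vectors
# have been extracted from known-good and known-bad files. The feature set
# and sample values below are hypothetical.
from sklearn.ensemble import RandomForestClassifier

# Each row: [file_entropy, num_imports, num_sections, has_packer_flag]
X_train = [
    [4.1, 120, 5, 0],   # benign sample
    [7.8,   6, 2, 1],   # packed, few imports -- typical of malware
    [5.0,  90, 4, 0],   # benign sample
    [7.5,  10, 3, 1],   # malicious sample
]
y_train = [0, 1, 0, 1]  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# A never-before-seen file can now be scored without any signature:
unknown = [[7.6, 8, 2, 1]]
print(model.predict_proba(unknown))  # e.g. high malicious probability -> flag for analysts
```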

BN: What benefits do security teams get from using AI?

CN: AI offers security teams many benefits, including increased threat detection speed, predictive capabilities, error reduction, behavioral analytics and more. AI enables a system to process and interpret information more quickly and accurately and, in turn, use and adapt that knowledge. It has substantially improved information management processes and allowed companies to gain time -- a critical component of the threat detection and remediation process. Additionally, today's ML/AI is good at automating basic procedural security tasks. For instance, AI can process noisy security alerts, removing the obvious false positives, or events that may not be serious, and leaving only the important things that humans need to validate. Put another way, it helps separate the wheat from the chaff in the deluge of security alerts, so human analysts can focus on what's important.
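
As an illustration of that triage step, here is a hedged sketch in which a toy classifier scores incoming alerts and only high-scoring ones reach analysts. The alert features, training data and threshold are all hypothetical:

```python
# Illustrative sketch of ML-assisted alert triage: a classifier trained on
# past analyst-labeled alerts scores each new alert, obvious false positives
# are dropped, and only high-scoring alerts reach human analysts.
from sklearn.linear_model import LogisticRegression

# Toy training data: past alerts labeled by analysts (1 = real incident).
X = [[1, 0, 2], [9, 1, 40], [0, 0, 1], [8, 1, 35]]
y = [0, 1, 0, 1]
model = LogisticRegression().fit(X, y)

THRESHOLD = 0.8  # hypothetical tuning knob; real deployments calibrate this

def triage(alerts):
    """Split a noisy alert stream into 'needs a human' and 'auto-dismissed'."""
    escalate, dismiss = [], []
    for alert in alerts:
        score = model.predict_proba([alert["features"]])[0][1]
        (escalate if score >= THRESHOLD else dismiss).append(alert)
    return escalate, dismiss

stream = [{"id": 1, "features": [0, 0, 1]}, {"id": 2, "features": [9, 1, 38]}]
todo, noise = triage(stream)
print([a["id"] for a in todo])  # analysts review only these; the rest is logged
```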

BN: How are threat actors utilizing AI to step up attacks and evade detection?

CN: Threat actors are using AI in many ways. For example, attackers use it to automate target discovery and reconnaissance. When ML is applied to social networks, it can identify the most prolific users with the widest reach, and then automate learning what those individual users care about. This type of automated investigation of public profiles allows attackers to craft messages that are more likely to appeal to a given target. In short, AI can automate the research into human targets that was traditionally done manually, enabling hackers to quickly collect enough information about their targets to deliver very specific phishing messages.
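
As a rough illustration of that reconnaissance step (and one defenders can reuse to find their own most-exposed employees), here is a sketch over a hypothetical scraped follower graph, where simple centrality math surfaces the accounts with the widest reach:

```python
# Hedged sketch of the reconnaissance step described above, using a made-up
# follower graph. In-degree centrality approximates "reach": who do the most
# people follow? The data is illustrative only.
import networkx as nx

follows = [("alice", "ceo"), ("bob", "ceo"), ("carol", "ceo"),
           ("dave", "alice"), ("erin", "bob")]
G = nx.DiGraph(follows)  # edge (a, b) means "a follows b"

reach = nx.in_degree_centrality(G)
for user, score in sorted(reach.items(), key=lambda kv: -kv[1])[:3]:
    print(user, round(score, 2))  # 'ceo' tops the list -- the prime target
```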

In fact, recent research on this subject presented at Black Hat demonstrated that a typical, widespread phishing attempt will see about a five percent success rate. Layer on machine learning that uses knowledge about the targets to make the phishing attempts more accurate and believable, and hackers will see about a 30 percent success rate -- nearly as much as they see in a highly specific, targeted spear-phishing attempt.

BN: What does the AI tug-of-war between attackers and defenders look like?

CN: With AI/ML being used more and more by both the good guys and the bad guys, it's become a true cat-and-mouse game. As quickly as a defender finds a flaw, an attacker exploits it, and with ML this happens at machine speed. But there is work being done to address this. For example, DARPA's Cyber Grand Challenge, whose final was held at DEF CON 24, pitted machine against machine in order to develop automatic defense systems that can discover, prove, and correct software flaws in real time. This tug-of-war will likely continue as both attackers and defenders grow more sophisticated in leveraging the power of AI, and it will become a machine-to-machine battle in which humans contribute mainly by helping to form the best models.

BN: What can organizations do to stay one step ahead of attackers?

CN: Individual users can't do much to counter the AI/ML behind an attack; they can only try to avoid the attacks themselves, which still rely on social lures. For companies, the first place to start is security awareness training: teach employees how to recognize phishing and spear-phishing attempts, because understanding the problem is a big step toward addressing it. Additionally, employ threat intelligence that sinkholes bad links, so they get quarantined and don't cause harm even if they are clicked on. While this tug-of-war will likely go on indefinitely, we can continue to take steps to help the good side gain a little more muscle.
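
For the sinkholing idea, here is a minimal sketch of how a gateway might rewrite links, assuming a threat-intelligence feed supplies known-bad domains. The domains and quarantine URL are illustrative, not a real product's behavior:

```python
# Minimal sketch of link sinkholing at a mail or web gateway. Links whose
# domain appears in a threat-intel blocklist are rewritten to an internal
# quarantine page, so a click does no harm. All names here are hypothetical.
from urllib.parse import urlparse

BAD_DOMAINS = {"evil.example", "phish.example"}   # from a threat-intel feed
SINKHOLE = "https://quarantine.internal/blocked"  # hypothetical landing page

def sinkhole(url: str) -> str:
    """Return the original URL, or the sinkhole page if its domain is known-bad."""
    host = urlparse(url).hostname or ""
    if host in BAD_DOMAINS or any(host.endswith("." + d) for d in BAD_DOMAINS):
        return SINKHOLE
    return url

print(sinkhole("https://evil.example/login"))  # -> quarantine page
print(sinkhole("https://example.com/report"))  # -> unchanged
```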

BN: What will the future of AI look like in the cybersecurity landscape?

CN: I would point to the Cyber Grand Challenge for that future outlook. In that challenge, the attacks and defense were entirely machine to machine; humans were not involved in the step-by-step tactics. The human security experts only played a role in designing the models and strategies the AIs used to stay ahead once the fight started. In the future, I see machines able to adjust their defenses on the fly to fend off new attacks, but cybersecurity will involve a bigger data science component for defenders, who will help build AI/ML models so that their AIs stay one step ahead of the attackers'.


