AI for the good guys: Practical lessons for AI and cyber risk


Threat actors are early adopters. Cyber defense is brimming with uncertainties, but one dynamic you can be confident about is that threat actors will leverage everything available to exploit a target. In 2023, this means the rise of artificial intelligence-enabled attacks, from AI-generated social engineering scripts to powerful automation designed to find and exploit vulnerabilities and spread laterally through systems and networks.

Security teams therefore need to be prepared to meet the challenge of cloud-scale threats on both a technical and an organizational level. That means anticipating threats that extend beyond technical vulnerabilities, including, for example, social engineering and DDoS. This is part of the challenge of modern cyber security -- the attack surface comprises not just the entirety of the IT infrastructure, its endpoints, and all the data it uses and stores, but also its users. It is far too large to be managed effectively by manual means.

According to research from Rezilion, two-thirds of practitioners surveyed reported a backlog of more than 100,000 vulnerabilities. Though most of these won’t have a real impact, many will. According to a report from Vulcan Cyber, more than 75 percent of respondents were impacted by an IT security vulnerability. As 2017’s WannaCry ransomware attack revealed, threat actors have long had the ability to leverage automation to identify and exploit these weaknesses and spread through hundreds of thousands of systems -- and their capacity has only grown in the intervening years.

Research from CyberArk shows that AI-enabled threats are now all but a given among security professionals -- more than 90 percent of those surveyed expect such threats to affect their organization in 2023. The question becomes: What is the real risk organizations face from malicious AI, how do they triage it, and how can blue teams harness AI effectively for their own ends?

ChatGPT Can’t Hack You

ChatGPT has driven a great many recent headlines. It’s not undeserved -- it delivers extraordinary functionality across a broad range of tasks, some of which it completes with such facility that the result is nearly indistinguishable from the work of a competent human. But writing code is not one of those tasks.

ChatGPT can reproduce existing code and even deliver functional code, but it cannot produce code of notable sophistication, which requires a degree of creativity and imagination it does not possess. If you want an efficient way to stitch publicly available code blocks into a cohesive program in response to a particular prompt, generative AI will give you that much as a student programmer would (perhaps faster). However, it can’t match the talent of a skilled human programmer on either side of the equation -- security team or threat actor. This will change over time, especially as malicious actors develop clones of generative AI engines with the safeguards removed. We have also seen the potential for AI package hallucinations, where threat actors register malicious packages under the plausible-but-nonexistent names that generative AI tends to invent, so the AI itself steers unsuspecting users toward them (a simple check against the public package index, sketched below, can help flag such names). In short, generative AI that can deliver and execute sophisticated attack paths more reliably than humans is firmly on the horizon.
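As a modest defensive habit against that kind of package hallucination, teams can verify that an AI-suggested dependency actually resolves in the public index before installing it. The sketch below is an assumption-laden illustration, not a vendor tool: it queries PyPI’s public JSON endpoint, the package names are hypothetical, and existence alone is no guarantee of safety, since attackers can register hallucinated names -- treat it as one signal among several.

```python
import requests

def package_exists_on_pypi(name: str) -> bool:
    """Return True if the name resolves on PyPI's public JSON API."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

# Hypothetical names an AI assistant might suggest; check before `pip install`.
for suggested in ["requests", "definitely-not-a-real-helper-lib"]:
    status = "exists" if package_exists_on_pypi(suggested) else "not found -- possible hallucination"
    print(f"{suggested}: {status}")
```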

The World’s Largest Attack Surface

The most significant vulnerability for every organization around the world is its users. More than 50 percent of personal devices were exposed to a mobile phishing attack every quarter in 2022, according to Lookout. And even though ChatGPT can’t write code like a human, it can write emails better than some.

An accidental safeguard against phishing attacks -- an unplanned last line of defense -- is the grammar of the phishing note itself. Sent in volume by threat actors, often working outside their native language, phishing lures that might otherwise effectively spoof a domain or sender often arouse suspicion through ungrammatical or non-normative language. ChatGPT erases this advantageous imbalance. One of its most effective use cases is drafting logical, grammatical, plain-language messages. Such messages work within a standard framework of mostly inflexible rules and do not require the technical ingenuity that effective code does. A phishing lure simply needs to deliver a message with no direct technical function that meets normative standards of capitalization, punctuation, and word choice, all of which are abundantly represented in the model’s training data in a way that novel approaches to breaching secure systems are not.

It’s important to note that ChatGPT itself has some safeguards when it comes to this kind of activity. If you ask it directly to draft a phishing lure, it will refuse. Unfortunately, it is also trivially easy to circumvent this protection by reframing the prompt.

According to industry sources, more than 3 billion phishing emails were sent each day in 2021, a volume that has only grown in the intervening years. At this scale, even a marginal increase in the effectiveness of phishing lures can have a tremendous impact.

AI for the Good Guys

To take full advantage of the cloud technologies that are now competitive imperatives, organizations need to invest in mitigating cloud-scale threats, a prospect that only grows more daunting with each passing year. But IBM’s research shows that 93 percent of organizations surveyed are using or considering the use of AI to manage their cyber security, indicating that they do recognize the threat and are taking steps to bolster defenses.

When we talk about cloud-scale threats, they take the same form as cloud-scale advantages -- a tremendous volume of data. It’s advantageous when that data can be combed for profitable insights and patterns. It’s a disadvantage when it hides threat vectors or leaves data exposed.

Modern cyber security is essentially about managing the data that creates cyber risk. Organizations implement suites of scanners across multiple platforms (and attack surfaces). These are very effective at identifying vulnerabilities, but they produce masses of undifferentiated, duplicative signals and data points that overwhelm very human security teams. The result is an inability to scale to address growing cyber risk.
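To make the duplication problem concrete, here is a minimal sketch that collapses findings from multiple scanners into one record per CVE-and-asset pair. The field names and the already-normalized input format are assumptions for illustration, not any particular scanner’s export schema.

```python
# Hypothetical findings already normalized from several scanners; real scanner
# exports differ, so these field names are illustrative assumptions.
findings = [
    {"scanner": "scanner_a", "asset": "web-01", "cve": "CVE-2021-44228", "cvss": 10.0},
    {"scanner": "scanner_b", "asset": "web-01", "cve": "CVE-2021-44228", "cvss": 9.8},
    {"scanner": "scanner_a", "asset": "db-02", "cve": "CVE-2019-0708", "cvss": 9.8},
]

def deduplicate(findings):
    """Collapse duplicate reports of the same CVE on the same asset."""
    merged = {}
    for f in findings:
        key = (f["asset"], f["cve"])
        if key not in merged:
            merged[key] = {**f, "sources": {f["scanner"]}}
        else:
            merged[key]["sources"].add(f["scanner"])
            # Keep the highest severity score any scanner reported.
            merged[key]["cvss"] = max(merged[key]["cvss"], f["cvss"])
    return list(merged.values())

unique = deduplicate(findings)
print(f"{len(findings)} raw findings -> {len(unique)} unique issues")
```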

Compounding the issue, the technical severity of each vulnerability does not always correspond to the danger it represents to a specific organization -- a severe vulnerability in a non-essential system with no lateral access to business-critical operations does not pose the same problem as a technically less severe flaw that grants unauthorized admin access to accounts payable. This is where AI plays an essential role in defense: by augmenting the capacity of the security team to efficiently audit massive volumes of data so that it can deploy resources against the vulnerabilities that represent a genuine threat to the business itself. That means deduplicating data, reducing noise, introducing deeper context for vulnerability prioritization, and integrating remediation as a consistent part of the vulnerability management lifecycle.
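To show what business context can add, here is a minimal sketch of risk-based prioritization under assumed inputs: the asset-criticality weights, the placeholder CVE identifier, and the simple severity-times-criticality formula are all illustrative choices, not a standard or any vendor’s scoring model.

```python
# Minimal sketch: rank deduplicated issues by combining technical severity
# (CVSS) with an assumed business-criticality weight for each asset.
ASSET_CRITICALITY = {
    "accounts-payable": 1.0,  # business-critical system
    "web-01": 0.7,
    "test-lab-07": 0.2,       # non-essential, no lateral path assumed
}

def risk_score(issue):
    weight = ASSET_CRITICALITY.get(issue["asset"], 0.5)  # default for unknown assets
    return issue["cvss"] * weight

issues = [
    {"asset": "test-lab-07", "cve": "CVE-2021-44228", "cvss": 10.0},
    {"asset": "accounts-payable", "cve": "CVE-EXAMPLE-0001", "cvss": 7.5},  # placeholder ID
]

# The issue on the critical asset outranks the technically more severe one on a lab box.
for issue in sorted(issues, key=risk_score, reverse=True):
    print(issue["asset"], issue["cve"], round(risk_score(issue), 1))
```

The point of the sketch is the ordering, not the numbers: a middling flaw on a business-critical system rises above a maximal CVSS score on an isolated lab machine.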

Image Credit: Mopic / Shutterstock

Roy Horev is CTO and cofounder of Vulcan Cyber.

