Anticipating tomorrow's threats: AI, evolving vulnerabilities, and the 'new normal'


Modern cybersecurity leaders are expected to balance an almost comical number of responsibilities. Threat intelligence, vulnerability management, asset tracking, identity management, budgeting, third-party risk -- and that’s just what the company is willing to put in the job description.

To be a cybersecurity expert is to spend your entire career deepening your well of knowledge in one or a few domains. To be a cybersecurity leader, on the other hand, is to spend your career attempting to drink an ocean through a straw. What makes this moment in cybersecurity so interesting is that generative artificial intelligence (AI) has brought a fundamental change to both the ocean and the straw.

Security tools are integrating AI to improve enrichment, saving analysts substantial time by rendering disparate data sources into simple, natural-language explanations. Engineers and operators use the various coding copilots and their competitors to cut research time and rapidly generate boilerplate code. Managers are deputizing chat tools to create reports and dashboards. Meanwhile, attackers are leveraging AI tools to perform astonishing, high-profile attacks, carting away millions of dollars using technology that was the sole purview of mediocre sci-fi only a decade ago.

Unmasking the AI Menace: Social Engineering Redux

Picture this: An email pops into your inbox from your boss requesting that you remit payment on a massive, secret invoice. You obviously feel deeply uncertain and follow your latest phishing training by video-calling your manager. On that call, your manager and a crowd of other senior leaders tell you it’s a legitimate request and that a major partnership will collapse if it isn’t fulfilled immediately. You pay as fast as your fingers can fly before logging out, secure in the knowledge that you just saved the day. The next morning, you get a call from your company’s lawyer informing you that you just paid millions of dollars to a hacker.

Welcome to the current state of AI-fueled social engineering, where the lines between truth and deception blur effortlessly and following the old rules can result in new catastrophes.

This story isn’t as far-fetched as it may sound. Last year, a finance worker at a multinational firm in Hong Kong paid out more than $25 million under almost exactly those circumstances. While many of the fears around AI are dramatically overblown, this is one area where generative AI represents a sea change in attacker capacity.

While solving this problem is a complex task, one useful procedural shift is establishing passphrases or other verbal challenges that employees must use to verify identity during sensitive calls -- for example, requiring the caller to provide three points of identifying information. This sort of out-of-band control can be deployed quickly by almost any organization. Similar techniques are common in password reset processes and have proven tremendously effective.
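As a rough illustration of how lightweight such a control can be, here is a minimal sketch of a verbal-challenge helper. The wordlist, expiry window, and function names are all hypothetical and not drawn from any specific product or process; the key design point is that the phrase is delivered over a separate, trusted channel the attacker on the call cannot see.

```python
import secrets
import time

# Hypothetical internal wordlist; in practice it would be maintained and
# distributed out-of-band (for example, via the corporate password manager).
WORDLIST = ["harbor", "copper", "meadow", "lantern", "juniper", "granite"]

def issue_challenge(words: int = 3, ttl_seconds: int = 300) -> dict:
    """Generate a one-time verbal passphrase and an expiry timestamp."""
    phrase = " ".join(secrets.choice(WORDLIST) for _ in range(words))
    return {"phrase": phrase, "expires_at": time.time() + ttl_seconds}

def verify_response(challenge: dict, spoken_response: str) -> bool:
    """Accept the caller only if the phrase matches and has not expired."""
    if time.time() > challenge["expires_at"]:
        return False
    return spoken_response.strip().lower() == challenge["phrase"]

if __name__ == "__main__":
    challenge = issue_challenge()
    # The phrase would be sent over a trusted channel (e.g. internal chat),
    # never over the suspect call itself.
    print("Challenge:", challenge["phrase"])
    print("Verified:", verify_response(challenge, challenge["phrase"]))
```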

Now that we’ve talked about a terribly realistic nightmare scenario, let’s dispel a couple of myths.

Demystifying AI Hacks: Separating Fact from Fiction

Contrary to Hollywood portrayals, most threat actors aren’t employing super-advanced AI to breach systems and steal data with bespoke zero-day attacks and custom Command and Control (C2) protocols. It isn’t that such technology is impossible but that the necessary resources to build it are absurd and available only to nation-states and a tiny handful of corporate powers, a group I often collectively reference as “The Big Kids.” Put simply, if I have what I need to create a custom AI just for hacking, then I already have people capable of doing everything that AI can do and much, much more.

This isn’t to say that AI has no utility to attackers outside of social engineering. By integrating AI into existing hacking techniques, attackers lower the barrier to entry, making vulnerabilities once deemed obscure now fair game. The democratization of hacking tools forces us to look at once-impotent attackers as suddenly serious threats. Anything and everything an attacker can Google becomes a potential target, and now they have a helpful chatbot to explain, step by step, how to pursue it.

Interestingly, from the perspective of a security analyst or incident responder, this sudden wave of attacks against what were once mid-tier vulnerabilities is essentially a volume problem (as opposed to a sophistication problem). This is interesting because, as I alluded to earlier, AI is uniquely useful for solving volume problems. In other words, this is a case where the poison is its own antidote. AI-powered solutions can analyze vast amounts of network traffic in real time, identifying potential threats before they escalate and providing critical context to improve and speed investigations. Whether it’s using AI to create queries from natural language, develop dashboards or find insights in highly structured technical data, these tools reduce analyst fatigue, decrease mean time to resolution and improve quality standardization without additional work, each of which is a key component of a volume-based solution.
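To make the natural-language-to-query idea concrete, here is a minimal sketch assuming the OpenAI Python client and a Splunk-style query target. The model name, system prompt, and surrounding tooling are illustrative assumptions, not a description of any particular product mentioned above.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def nl_to_query(question: str) -> str:
    """Translate an analyst's plain-English question into a search query."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[
            {
                "role": "system",
                "content": (
                    "You translate analyst questions into Splunk SPL queries. "
                    "Return only the query, with no explanation."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(nl_to_query("Show failed logins from the last 24 hours grouped by source IP"))
```

The value is not the model call itself but the workflow around it: an analyst who cannot recall query syntax under pressure can still ask the question, and the generated query remains reviewable before it runs.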

The Myth of AI Data Poisoning: Think Horses, Not Zebras

Within the buzz surrounding AI, another myth persists: The notion that attackers specifically target AI training data for poisoning. Sensationalist headlines aside, this approach remains largely impractical for the average cybercriminal.

While nation-state actors may harbor interest in manipulating AI models, the lack of clear incentives will deter most attackers, as will the immense technical and logistical difficulty of performing such an attack. That isn’t to say that training data is free of risk. Anyone who has spent the last decade responding to ransomware can tell you that vulnerable data is a valuable target. The more important the data, the more likely it is to be targeted, and training data is often a key value proposition for multibillion-dollar companies. While we should be attentive to the possibility of poisoning, it is a significantly less pressing concern than theft or access denial.

Why is this distinction worthwhile? First, solving data poisoning is primarily an algorithmic challenge; the solution lies in developing AI tools that are more resistant to bad inputs. Solving traditional extortion tactics like theft and denial, on the other hand, is primarily a data protection challenge; the solution lies in implementing best practices, developing effective controls, and rapidly resolving issues as they emerge. Second, data poisoning frames training-data risk in terms of the model's outputs, whereas theft and denial frame it in terms of the data's storage. The former requires a team of PhDs to understand and solve; the latter requires a few mid-tier security architects and a good project manager.

Security Tomorrow: What We Can Do Today

With these myths debunked and current realities in mind, organizations must take practical, actionable next steps to secure their people, processes and technology. Bolstering employee defenses against social engineering is priority number one, and organizations should emphasize continuous security awareness training as a key component of that process.

Secondly, AI risk is fundamentally a form of third-party risk. The challenge of managing that risk is to understand the problem statements, technical theses, and visible and hidden risks, and then to incorporate all of that into planning and decision-making. Businesses must evaluate AI tools on rational grounds, determining product fit through dispassionate analysis rather than hype.

No risk is unacceptable so long as it is appropriately addressed. However, unaddressed risk is never acceptable.

The fusion of AI and sophisticated threat techniques presents both challenges and opportunities. Challenges in that it forces us to understand old problems in new ways and new problems in old ones, and opportunities in that we have access to a greater collection of information and guidance than ever before. Our responsibility is not to drink the ocean, but to know what’s swimming around in there.


Joseph Perry is Manager, Cyber Fusion Center at MorganFranklin Consulting.
