Agentic AI might take years to transform security, but cyber defenders must prepare now

For the past two years, the world has been swept up in a rising tide of GenAI hype. The technology has evolved from a data science curiosity to a pervasive part of our everyday lives. ChatGPT alone has over 300 million weekly users worldwide -- and people use Large Language Models (LLMs) every day to generate text, images, music and more.

Despite GenAI’s widespread success, building robust applications on top of trustworthy AI systems has proven difficult. This is most apparent in the gap between consumer-facing GenAI applications and B2B integrations of the technology. But with agentic AI, this is about to change.

Many hands make light work

Agentic approaches make use of LLM “agents” that have been prompted, fine-tuned, and given access to only the tools required to achieve a well-defined goal, rather than being tasked with a complete end-to-end mission.

Imagine you want to use an LLM to write code. A standard approach would be to prompt the LLM with an instruction to act as an expert coder and ask it to develop code for a described problem. Then you have to cross your fingers and hope the result works without too many inaccuracies.

By contrast, agentic AI systems break high-level objectives into well-defined sub-tasks, equipping individual agents with the ability to execute on each of these sub-goals. One agent may orchestrate, while another writes the code, another writes unit tests, and yet another debugs whatever errors surface. The end result is likely to be much more robust, accurate, and repeatable -- allowing agentic AI to take on more mission-critical tasks.
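To make that division of labor concrete, here is a minimal sketch in Python. Everything in it is illustrative: call_llm and run_tests are hypothetical placeholders standing in for a model API and a sandboxed test runner, not any particular vendor's interface.

```python
# A minimal sketch of the agentic pattern described above. Each "agent"
# is just an LLM call constrained to one narrow role; one orchestrating
# agent coordinates the loop. call_llm and run_tests are hypothetical.

def call_llm(role: str, prompt: str) -> str:
    """Placeholder for a real chat-completion call; returns model text."""
    raise NotImplementedError("Wire this to your model provider of choice.")

def run_tests(code: str, tests: str) -> str:
    """Placeholder for a sandboxed test runner; returns failure output,
    or an empty string when everything passes."""
    raise NotImplementedError("Execute the tests in an isolated sandbox.")

def orchestrator(task: str, max_rounds: int = 3) -> str:
    """One agent coordinates while the others each own a single sub-goal."""
    code = call_llm("expert programmer: write only code", task)
    for _ in range(max_rounds):
        tests = call_llm("test engineer: write unit tests for this code", code)
        failures = run_tests(code, tests)
        if not failures:          # every sub-goal satisfied
            return code
        code = call_llm(          # hand the failures to a debugging agent
            "debugger: revise the code so these tests pass",
            code + "\n\nFailures:\n" + failures,
        )
    return code
```

The point of the structure is that each agent's job is small enough to verify, so errors are caught and corrected inside the loop rather than handed back to the user.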

But what does it mean for the cybersecurity landscape -- where attackers are already using GenAI to supercharge attacks, and the slightest mistake could lead to a headline-grabbing data breach?

Will agentic AI cause a security shift?

For the next few months and beyond, attackers will continue to focus on optimizing their use of AI. This will likely mean sticking to more established GenAI approaches to research potential victims, carry out spear phishing attacks at scale, and save time on tedious tasks. Rote actions, from coding to answering straightforward security questions, will be offloaded to LLMs where possible.

As for agentic AI, the cutting-edge use cases will, for now, play out in the background. Threat actors are already beginning to experiment, using LLMs to deploy their own malicious AI agents and testing how far those agents can execute complete attacks without human intervention. But we are still a few years away from seeing agents of this kind reliably deployed and trusted to carry out real attacks in the wild.

While agentic AI capabilities would be hugely profitable for cybercriminals, cutting the time and cost of attacking at scale, autonomous agents of this sort are still too unreliable to depend on without human assistance.

However, it’s only a matter of time until threat actors create GenAI agents for every aspect of an attack -- from research and reconnaissance, to flagging and collecting sensitive data, to exfiltrating that data autonomously without the need for human guidance. Once this is achieved, the volume and variety of stealthy attacks reaching organizations will increase tenfold.

More importantly, without evidence of a malicious human on the other end, the industry will need to transform how it spots the signs of an attack. This would be a drastic shift, and it will require even more precise detection and response strategies.

Defenders must act on agentic AI, not react

While cybercriminals are likely years away from realizing the potential of agentic AI in their attacks, cyber defenders can’t afford to sit on their hands. Once again, the cybersecurity industry is caught up in an AI arms race and must act now to implement agentic AI approaches into their security strategies.

By using agentic AI to speed up threat analysis, shorten incident response times, and assist security teams in their decision making, organizations can employ their own army of AI agents and prepare to identify and remediate attacks more effectively. AI agents may even become capable of autonomous threat investigation in the future.

In the threat detection landscape, this will require organizations to implement high-quality predictive AI models that can distill the many sources of high-dimensional data into actionable signal -- whether detections or context -- in order to seed the investigative process.
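As a rough illustration of that filtering step, the sketch below scores raw, high-dimensional events with a predictive model and forwards only the high-confidence detections to seed an investigation. The names here (Event, score_event, triage) are hypothetical, and the scoring model itself is assumed rather than specified.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    source: str                 # e.g. network sensor, identity log, endpoint
    features: list[float]       # high-dimensional raw telemetry
    context: dict = field(default_factory=dict)  # hosts, accounts, entities

def score_event(event: Event) -> float:
    """Placeholder for a trained predictive model that maps raw
    telemetry to the likelihood that the event is malicious."""
    raise NotImplementedError("Plug in your detection model here.")

def triage(events: list[Event], threshold: float = 0.8) -> list[Event]:
    """Distill noisy data into actionable signal: keep only events the
    model scores above the threshold, highest-confidence first, so that
    investigations start from signal rather than raw volume."""
    scored = [(score_event(e), e) for e in events]
    actionable = [(s, e) for s, e in scored if s >= threshold]
    return [e for s, e in sorted(actionable, key=lambda se: se[0], reverse=True)]
```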

Informed agents

Effective application of GenAI in the security industry will require intentional, thoughtful engineering of robust and trustworthy solutions -- and that will increasingly mean agentic approaches.

To build a competent and effective roster of AI agents, organizations will need to lay the right foundations. Without access to high-quality security and AI knowledge, automating SOC operations may ultimately create additional work for teams rather than relieving the pressure.


Sohrob Kazerounian, Distinguished AI researcher at Vectra AI.
