Navigating the hidden dangers in agentic AI systems [Q&A]


According to Gartner 33 percent of enterprise applications are expected to incorporate agentic AI by 2028, but are their security teams equipped with the latest training and technology to protect this new attack surface?
We spoke with Ante Gojsalić, CTO and co-founder at SplxAI to uncover the hidden dangers in agentic AI systems and what enterprises can do to stay ahead of the malicious looking to exploit them.
BN: What security threats arise from multimodal AI systems? How do the potential risks compare with those in traditional AI systems?
AG: As the reliance on AI agents increases, many new risks emerge -- including the exposure of internal business information and sensitive data, as well as the potential for AI system manipulation which can result in harmful behavior.
Traditional AI systems mainly rely on text inputs and outputs, focusing on specific, clearly defined tasks. Because of this narrow scope, their attack surface tends to be limited. On the other hand, multimodal AI systems process data from a variety of sources and are built to function independently, carrying out intricate tasks with very little human intervention. For example, while traditional AI may handle repetitive tasks, agentic AI takes it further by automating intricate processes. And while this helps employees to concentrate on more strategic, high-impact work, this broader range of capabilities greatly increases the attack surface, leaving them more exposed to potential exploitation by cybercriminals.
BN: How is the rush to bring AI agents online riskier than previous rushes to adopt other technologies?
AG: We’ve seen headlines everywhere about the global AI race. Teams are eager to deploy their AI agents and get them into the market quickly. Unfortunately, market dominance within this space is causing safety measures to be sidelined and very much so an afterthought.
Without a clear understanding of how AI agents, tools, and data flows interact, it's impossible to perform accurate security testing or guarantee compliance, leaving blind spots in these frameworks that cybercriminals are getting better at exploiting via multi-chain prompt injection attacks. This type of attack allows malicious actors to target multiple AI agents working together as part of an interconnected system to bypass existing security measures. For example, one language model might simplify a complex query, while another is searching the internet for relevant information and the third is integrating data from internal knowledge bases. One compromised agent is like a domino effect and can influence the others.
BN: What challenges will CISOs run into as they navigate implementing agentic AI systems? What should they focus on, both short-term and long-term?
AG: Agentic systems have complex workflows and often interact with multiple tools, making transparency and security assessment challenging.
The main security challenges in agentic AI include limited visibility into agentic AI workflows, uncertainties about the risks posed by interconnected large language models (LLMs), compliance requirements, and the constraints of black box testing. Tackling these issues is essential to ensure the security and dependability of increasingly complex AI systems.
CISOs tend to take a holistic and strategic approach to managing risk. A key challenge they face is navigating the wide array of offerings from both traditional and next-generation security vendors. This challenge becomes even more pronounced when securing agentic AI systems, which require a combination of needs due to their complexity.
In the short term, CISOs should focus on securing agentic systems that are heading into production and prioritize tools like automated AI red teaming and AI firewalls.
Looking ahead, new compliance frameworks are expected to emerge, driving greater transparency in agentic AI systems. This will lead to broader adoptions of tools such as static code analysis, log monitoring and centralized platforms for managing AI vulnerabilities.
BN: What key pieces of advice would you offer to organizations looking to secure their agentic AI systems in today’s environment?
AG: Security and safety must be top priorities when deploying AI agents – and implementing these measures early in the development process is important. These crucial factors can’t be treated as an afterthought, even if you believe your competitors are gaining an advantage with their AI implementations.
Validating inputs and outputs with a particular focus on minimizing potential exploit pathways and hardening system prompts are non-negotiables. But security teams will need training and help to do this.
SplxAI’s Agentic Radar maps the dependencies in agentic AI workflows and components using static code analysis to expose missing security measures within them. The tool is designed to help security teams and AI engineers understand how their AI agents interact with tools, external components, and each other. Visibility into these workflows enables security teams to proactively secure their agentic AI systems before they can be exploited. With its detailed analysis of AI decision-making paths, Agentic Radar effectively maps out vulnerabilities in AI-powered workflows and aligns findings with security frameworks like OWASP Top 10 for LLM Applications. This ensures that AI applications meet established security standards, enhancing overall system security.
BN: What can you say about the future of agentic AI?
AG: As AI progresses, agentic AI will become more seamlessly integrated with existing systems, empowering advanced autonomous agents to address increasingly complex challenges. The future of agentic AI holds the promise of boosting productivity, while offering tailored user experiences, and enabling smarter decision-making across various fields, including healthcare and supply chain logistics.
To unlock this potential in a responsible way, it's crucial that agentic AI is developed with transparency and security at its core from the start.
Image Credit: Twoapril Studio/Dreamstime.com