Why agentic AI could make API threats a $100 billion problem
APIs are the glue that holds the modern enterprise together. As digital transformation projects get the boardroom green light in ever greater numbers, the infrastructure connecting software, data and experiences has expanded with them. Yet a potential storm is coming in 2025, as a new wave of agentic AI innovation takes hold in the enterprise. In fact, Gartner predicts that, by 2026, over 30 percent of the increase in demand for APIs will come from AI and tools that use Large Language Models (LLMs).
Unless organizations can mature their API security posture, next year could be the first time we see an LLM app security breach linked to APIs. And without improved API observability, it won’t be the last.
The problem with APIs
APIs have become an indispensable developer tool. Research reveals that they accounted for a staggering 71 percent of all web traffic in 2023, with enterprise sites handling an average of 1.5 billion API calls per year. Yet although they can improve the customer and employee experience, drive cost and other efficiencies, and support data-driven decision making, APIs also increase the attack surface.
Research shows that API-related security incidents surged 40 percent in 2022, followed by a further 9 percent year-on-year (YoY) rise in 2023. Financial Services (20 percent), Business (17 percent), and Travel (11 percent) accounted for the largest share of attacks last year. Threat actors know that, given the importance of APIs to digital operations, they can disrupt companies significantly by targeting them. They can also hijack accounts to impersonate legitimate users, and use insecure APIs as a pathway into enterprise data stores.
The good news for threat actors is that there are plenty of ways to achieve these goals, as a cursory look at the OWASP Top 10 will reveal. The most common vulnerabilities range from Server-Side Request Forgery (SSRF) flaws and broken authentication to security misconfiguration. A separate study records business logic abuse as the top API attack type in 2023, accounting for 27 percent of attacks. This typically occurs when malicious actors use automation to exploit the intended functionality of an API for their own ends, such as exfiltrating data or disrupting a mission-critical application.
Automated attacks (19 percent) take second place. Malicious bots have put API attacks within reach of a far wider range of cyber-criminals, lowering both the cost and the barriers to entry. Because APIs are designed to be consumed by automated software, bad bots can mimic legitimate behavior to do their worst.
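As a rough illustration of how defenders spot this kind of automated abuse, one common starting point is simply tracking per-client request volume in API gateway logs and flagging outliers. The sketch below is a minimal example under stated assumptions: parsed log entries with illustrative client_id and timestamp fields, and a threshold that would in practice be tuned per API.

```python
# Minimal sketch: flag clients whose API call rate looks automated.
# Assumes parsed gateway log entries with hypothetical "client_id" and
# "timestamp" (epoch seconds) fields; the threshold is illustrative.
from collections import defaultdict

REQUESTS_PER_MINUTE_THRESHOLD = 300

def flag_suspected_bots(log_entries):
    """Return client IDs whose per-minute request volume exceeds the threshold."""
    per_client_minute = defaultdict(int)
    for entry in log_entries:
        minute_bucket = int(entry["timestamp"]) // 60
        per_client_minute[(entry["client_id"], minute_bucket)] += 1
    return sorted({
        client
        for (client, _), count in per_client_minute.items()
        if count > REQUESTS_PER_MINUTE_THRESHOLD
    })

if __name__ == "__main__":
    # 500 calls from one client within the same second -> flagged.
    sample = [{"client_id": "c1", "timestamp": 1700000000} for _ in range(500)]
    print(flag_suspected_bots(sample))  # ['c1']
```

Real bot management relies on far richer signals, such as device fingerprints, behavioral patterns and IP reputation, but volume anomalies remain a useful first filter.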
What will AI agents do?
By connecting data, apps and functionality, APIs will play an outsized role in the evolution of AI. While the past two years have seen ChatGPT and similar "chatbot" or assistant-style applications dominate the tech landscape, agentic AI will change the game again. Following an initial user prompt, these LLM-based programs are able to function autonomously, reaching out to third-party tools and datasets to make their own decisions and accomplish the tasks set for them.
According to Gartner: “By 2028, 33 percent of enterprise software applications will include agentic AI, up from less than 1 percent in 2024, enabling 15 percent of day-to-day work decisions to be made autonomously.” However, this kind of connectivity will require plenty of APIs. So as the agentic wave grows in speed and size, there’s a risk that enterprise API attack surfaces will also increase.
The impact of API-related AI threats
These risks could take many forms, depending on what type of agentic AI systems are being targeted. Operational disruption and data theft are perhaps the most obvious. The former may have a significant impact on enterprises that plug AI into key business processes. It's no exaggeration to anticipate at least one major LLM application security breach in 2025 with APIs as the threat vector.
According to one study, the average annual global cost of API insecurity is already $35-87bn. As agentic AI begins to take hold, the figure could easily spiral to $100bn or more. Large enterprises will be most exposed, given their extensive use of APIs and the large volumes of sensitive data they hold. It’s estimated that API-related incidents already account for up to 18 percent of all cyber incidents in this group. However, even mid-range companies could suffer given that API incidents currently account for 8-12 percent of total cyber-related events.
The way forward
How should organizations respond? First, by improving visibility into their API environment. It's estimated that there are 29 so-called "shadow APIs" per enterprise account. You can't protect what you can't see, so continuous discovery, classification and inventorying of all APIs, endpoints, parameters and payloads are essential. Next, perform risk assessments to identify and protect sensitive and high-risk APIs. Robust monitoring across all endpoints will also help to flag suspicious behavior early on.
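As a minimal sketch of what discovery can look like in practice, the snippet below compares endpoints documented in an inventory (for example, paths pulled from OpenAPI specs) against endpoints actually observed in gateway or proxy logs; anything observed but undocumented is a candidate shadow API. The inputs and names here are illustrative assumptions, not a specific product's API.

```python
# Minimal sketch of shadow-API discovery: anything observed in traffic but
# absent from the documented inventory deserves review. Inputs are
# illustrative; in practice they would come from OpenAPI specs and
# gateway/proxy logs.

def find_shadow_endpoints(documented_paths, observed_paths):
    """Return observed endpoints missing from the documented inventory."""
    documented = {p.rstrip("/").lower() for p in documented_paths}
    observed = {p.rstrip("/").lower() for p in observed_paths}
    return sorted(observed - documented)

if __name__ == "__main__":
    documented = ["/v1/orders", "/v1/customers"]
    observed = ["/v1/orders", "/v1/customers", "/v1/internal/export"]
    # "/v1/internal/export" is undocumented -> a candidate shadow API
    print(find_shadow_endpoints(documented, observed))
```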
Finally, put in place layered defenses including web application firewalls (WAFs), API protection, distributed denial of service (DDoS) prevention, and protection from bad bot traffic (a toy sketch of one such layer follows below). That will help to provide the defense-in-depth organizations need to manage risk across an expansive attack surface. This won't just reduce the likelihood and impact of breaches. It could also help to future-proof the organization against evolving regulatory demands. Most importantly, it will allow organizations to harness the astonishing power of agentic AI without inviting untenable levels of business risk.
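To make the idea of a defensive layer concrete, here is a toy, in-process sliding-window rate limiter. In production these controls live in dedicated WAF, bot-management and DDoS services sitting in front of the application; the window size, limit and names below are purely illustrative assumptions.

```python
# Toy illustration of one defensive layer: a per-client sliding-window
# rate limit. Window size and request limit are illustrative assumptions.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100

_request_history = defaultdict(deque)  # client_id -> recent request timestamps

def allow_request(client_id):
    """Return True if the client is within its rate limit, False otherwise."""
    now = time.time()
    history = _request_history[client_id]
    # Evict timestamps that have fallen outside the sliding window.
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()
    if len(history) >= MAX_REQUESTS_PER_WINDOW:
        return False  # the caller would typically respond with HTTP 429
    history.append(now)
    return True
```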
Tim Ayling is VP EMEA at Imperva, a Thales Company.