GenAI adoption surges amid concerns about security

Although enterprises are adopting GenAI in a big way, only five percent of the 1,000 cybersecurity experts responding to a new survey have confidence in the security measures protecting their GenAI applications even as 90 percent are actively using or exploring its use.

The research from Lakera shows attack methods specific to GenAI, or prompt attacks, are easily used by anyone to manipulate the applications, gain unauthorized access, steal confidential data and take unauthorized actions.

Gandalf, an AI educational game created by Lakera, invites players to try to trick AI into revealing a password. It has attracted more than one million players including cybersecurity experts attempting to breach its defenses. Remarkably, 200,000 of these players have successfully completed seven levels of the game, demonstrating their ability to manipulate GenAI models into taking unintended actions.

"With just a few well-crafted words, even a novice can manipulate AI systems, leading to unintended actions and data breaches," says David Haber, co-founder and CEO at Lakera. "As businesses increasingly rely on GenAI to accelerate innovation and manage sensitive tasks, they unknowingly expose themselves to new vulnerabilities that traditional cybersecurity measures don't address. The combination of high adoption and low preparedness may not be that surprising in an emerging area, but the stakes have never been higher."

Among other findings of the research, LLM reliability and accuracy is seen as the number one barrier to adoption. 35 percent of respondents are fearful of LLM reliability and accuracy, while 34 percent are concerned with data privacy and security. The lack of skilled personnel accounts for 28 percent of the concerns.

Despite these concerns 45 percent of respondents are exploring GenAI use cases, 42 percent are actively using and implementing GenAI and just nine percent have no current plans to adopt LLMs. Only 22 percent of respondents have adopted AI-specific threat modeling to prepare for GenAI specific threats.

You can get the full report from the Lakera site.

Image credit: sdecoret/depositphotos.com

© 1998-2024 BetaNews, Inc. All Rights Reserved. Privacy Policy - Cookie Policy.