'Shadow AI' could lead to a wave of insider threats


Poor data controls and the advent of new generative AI tools based on Large Language Models (LLMs) will lead to a spike in insider data breaches over the coming year, says cybersecurity company Imperva.

As LLM-powered chatbots have become more powerful, many organizations have imposed outright bans or restricted what data can be shared with them. However, since an overwhelming majority of organizations (82 percent) have no insider risk management strategy in place, they remain blind to employees using generative AI to help them with their tasks.

"Forbidding employees from using generative AI is futile," says Terry Ray, SVP, data security GTM and field CTO at Imperva. "We’ve seen this with so many other technologies -- people are inevitably able to find their way around such restrictions and so prohibitions just create an endless game of whack-a-mole for security teams, without keeping the enterprise meaningfully safer."

Previous research from Imperva into the biggest data breaches of the last five years found that 24 percent were due to human error (defined as the accidental or malicious use of credentials for fraud, theft, ransom or data loss). Even so, businesses consistently fail to prioritize insider threats, with 33 percent saying they don't see them as a significant risk.

"People don't need to have malicious intent to cause a data breach," continues Ray. "Most of the time, they are just trying to be more efficient in doing their jobs. But if companies are blind to LLMs accessing their backend code or sensitive data stores, it's just a matter of time before it blows up in their faces."

Imperva says it's crucial for organizations to discover and maintain visibility over every data repository in their environment, so that sensitive information held in shadow databases isn't forgotten or abused. Once they have an inventory, the next step is to classify every data asset according to its type, sensitivity, and value to the organization. Businesses also need data monitoring and analytics capabilities that can detect threats such as anomalous behavior, data exfiltration, privilege escalation, or suspicious account creation.
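To make that inventory-classify-monitor sequence concrete, here is a minimal illustrative sketch in Python. It is not Imperva's tooling and the names (DataAsset, flag_anomalous_access, the sample inventory and access figures) are hypothetical; it simply shows the shape of the approach, with a crude statistical baseline standing in for real behavioral analytics.

```python
# Illustrative sketch only: inventory data assets, classify them by
# sensitivity, and flag access volumes that deviate from a baseline.
from dataclasses import dataclass
from statistics import mean, pstdev


@dataclass
class DataAsset:
    name: str
    kind: str          # e.g. "database", "object store", "file share"
    sensitivity: str   # e.g. "public", "internal", "confidential", "restricted"


# Steps 1-2: a hypothetical inventory with classifications attached.
inventory = [
    DataAsset("orders_db", "database", "confidential"),
    DataAsset("marketing_assets", "object store", "internal"),
    DataAsset("hr_records", "database", "restricted"),
]


def flag_anomalous_access(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Step 3 (simplified): flag today's access volume if it sits far
    outside the historical baseline for this asset."""
    baseline, spread = mean(history), pstdev(history)
    if spread == 0:
        return today > baseline
    return (today - baseline) / spread > z_threshold


# Example: an employee (or an unsanctioned LLM integration) suddenly
# reads far more records than usual from a restricted store.
daily_reads = [120, 95, 130, 110, 105, 125, 115]
print(flag_anomalous_access(daily_reads, today=5000))  # True -> investigate
```

In practice this kind of check would run per asset and per identity, weighted by the sensitivity classification, rather than on a single hard-coded series as shown here.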

Image Credit: LeoWolfert/Shutterstock

