Insider threats become more effective thanks to AI


Artificial intelligence is making insider threats more effective, according to a new report, which also shows that 53 percent of respondents have seen a measurable increase in insider incidents in the past year.
The Exabeam survey of over 1,000 cybersecurity professionals finds that 64 percent of respondents now view insiders, whether malicious or compromised, as a greater risk than external actors. Generative AI is a major driver of this shift, making attacks faster, stealthier, and more difficult to detect.
“Insiders aren’t just people anymore,” says Steve Wilson, chief AI and product officer at Exabeam. “They’re AI agents logging in with valid credentials, spoofing trusted voices, and making moves at machine speed. The question isn’t just who has access -- it’s whether you can spot when that access is being abused.”
The majority (54 percent) expect the growth in insider incidents to continue. Government organizations are bracing for the steepest rise (73 percent), followed by manufacturing (60 percent) and healthcare (53 percent), fueled by expanding access to sensitive systems and data.
AI has become a force multiplier for insider threats, enabling actors to operate with unprecedented efficiency and subtlety. Two of the top three current insider threat vectors are now AI-related, with AI-enhanced phishing and social engineering emerging as the most concerning tactics (27 percent). These attacks can adapt in real time, mimic legitimate communications, and exploit trust at a scale and speed human adversaries can’t match.
Unauthorized GenAI use adds to the challenge, creating a dual-risk scenario where the same tools meant to boost productivity can be repurposed for malicious activity. More than three-quarters of organizations (76 percent) report some level of unapproved usage, with those in technology (40 percent), government (38 percent), and financial services (32 percent) experiencing the highest rates. Regional variations are telling: in the Middle East, unauthorized GenAI is the top insider concern (31 percent), reflecting both rapid AI adoption and the governance gaps that can follow.
When it comes to addressing the problem, while 88 percent of organizations say they have insider threat programs, most lack the behavioral analytics needed to catch abnormal activity early. Just 44 percent use user and entity behavior analytics (UEBA), the foundational capability for insider threat detection. Meanwhile, 97 percent of organizations report using some form of AI in their insider threat tooling, yet governance and operational readiness lag behind.
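At its core, the UEBA capability the report highlights means baselining each user's normal activity and flagging statistically unusual deviations, such as a valid account suddenly moving far more data than its history suggests. A minimal illustrative sketch of that idea, using a simple z-score against each user's own baseline (the user names, event counts, and threshold here are invented for illustration, not taken from the report):

```python
from statistics import mean, stdev

def anomaly_score(baseline, observed):
    """Z-score of today's activity against a user's historical baseline.

    baseline: list of daily event counts from the user's normal history
    observed: today's event count
    Returns a z-score; higher magnitude means further from the user's norm.
    """
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        return 0.0 if observed == mu else float("inf")
    return (observed - mu) / sigma

# Hypothetical users: one with steady activity, and one whose account
# suddenly downloads far more files than usual (e.g. stolen credentials).
history = {
    "alice": [10, 12, 11, 9, 13, 10, 12],
    "bob":   [20, 22, 19, 21, 20, 23, 21],
}
today = {"alice": 11, "bob": 240}

THRESHOLD = 3.0  # flag anything more than 3 standard deviations out
for user, counts in history.items():
    score = anomaly_score(counts, today[user])
    flag = "ALERT" if abs(score) > THRESHOLD else "ok"
    print(f"{user}: z={score:.1f} [{flag}]")
```

Production UEBA systems model many signals at once (logins, data access, working hours, peer-group comparisons), but the principle is the same: the baseline is per-entity, so an AI agent or compromised insider operating with valid credentials still stands out when its behavior diverges from that entity's history.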
“AI has added a layer of speed and subtlety to insider activity that traditional defenses weren’t built to detect,” says Kevin Kirkwood, CISO at Exabeam. “Security teams are deploying AI to detect these evolving threats, but without strong governance or clear oversight, it’s a race they’re struggling to win. This paradigm shift requires a fundamentally new approach to insider threat defense.”
You can get the full report from the Exabeam site.
Image credit: artursz/depositphotos.com