AI use grows in the workplace but organizations struggle to secure the human element

A new report reveals that security leaders are facing increased pressure to manage behavioral cybersecurity risk as the workforce transforms to include AI.
The study from KnowBe4, which surveyed 700 cybersecurity leaders and 3,500 employees, finds that incidents relating to the human element surged by 90 percent in the past year. These incidents include social engineering attacks such as phishing and business email compromise (BEC), risky or malicious behavior, and human error.
It’s clear from the findings that humans are often the weak link in the chain, with 93 percent of surveyed leaders reporting incidents caused by cybercriminals exploiting employees. The report also records a 57 percent increase in email-related incidents, confirming that email remains the primary battleground, and 64 percent of organizations fell victim to external attacks that exploited employees through email.
Human error persists as a critical vulnerability too, with 90 percent of organizations experiencing incidents caused by employee mistakes. In addition, malicious insiders continue to pose a threat from within, accounting for incidents at 36 percent of organizations.
It’s no surprise, then, that 97 percent of cybersecurity leaders feel the need for increased budget allocations to bolster the security of the human element.
At the same time, while AI is driving productivity, it’s also a security concern. AI applications saw a 43 percent increase in security incidents over the past 12 months, the second-largest increase across all channels.
Although 98 percent of organizations have taken steps to address AI-related risks, cybersecurity leaders rank AI-powered threats as their top security risk, with 45 percent citing constantly evolving AI threats as their greatest challenge when tackling behavioral risk.
Meanwhile, 56 percent of employees are unhappy with their company's approach to AI tools, which risks driving them toward unsanctioned platforms and creating ‘shadow AI’ exposure.
"The productivity gains from AI are too great to ignore, so the future of work requires seamless collaboration between humans and AI," says Javvad Malik, lead CISO advisor at KnowBe4. "Employees and AI agents will need to work in harmony, supported by a security programme that proactively manages the risk of both. Human risk management must evolve to cover the AI layer before critical business activity migrates onto unmonitored, high-risk platforms."
The full report is available on the KnowBe4 site.
Do you believe humans are a bigger risk than AI? Let us know in the comments.
Image credit: Morganka/Dreamstime.com
