Humans write better phishing emails than AI
There's been a fair bit of hype recently surrounding the potential for ChatGPT and similar tools to be used for creating phishing campaigns, eliminating the typos and other errors that are the giveaways of a scam.
However, new research from Hoxhunt suggests that AI might not be quite so good at going phishing after all.
The company analyzed more than 53,000 email users in over 100 countries to compare the success rates of simulated phishing attacks created by human social engineers with those created by AI large language models. The study reveals that professional red teamers induced a 4.2 percent click rate, compared to a 2.9 percent click rate for ChatGPT, in the population sample of email users.
Humans remain clearly better at hoodwinking other humans, outperforming AI by 69 percent. The study also shows that users with more experience in a security awareness and behavior change program are less likely to fall for phishing attacks, whether the emails are human- or AI-generated, with failure rates dropping from over 14 percent among less trained users to between two and four percent among experienced users.
"Good security awareness, phishing, and behavior change training works," says Pyry Åvist, co-founder and CTO of Hoxhunt. "Having training in place that is dynamic enough to keep pace with the constantly-changing attack landscape will continue to protect against data breaches. Users who are actively engaged in training are less likely to click on a simulated phish regardless of its human or robotic origins."
The full report is available on the Hoxhunt site.