Over 80 percent of hackers believe the AI threat landscape is moving too fast to secure

A new report from Bugcrowd finds 82 percent of ethical hackers and researchers on the platform believe that the AI threat landscape is evolving too fast to adequately secure.

Based on responses from 1,300 users of the platform, the report also finds that 71 percent say AI adds value to hacking, compared to only 21 percent in 2023. In addition, hackers are increasingly using generative AI solutions, with 77 percent now reporting the adoption of such tools -- a 13 percent increase from 2023.

Despite this, the survey reveals that only 22 percent of hackers believe AI technologies outperform human hackers, and only 30 percent believe AI can replicate human creativity.

"There is no denying that AI remains a strong force within the hacking community, changing the very strategies hackers are using to find and report vulnerabilities," says Dave Gerry, CEO of Bugcrowd. "Bugcrowd is in a privileged position to work with a creative, forward-thinking community that thrives on the cutting edge of cybersecurity. Celebrating hackers is part of the core of what we do at Bugcrowd, and these insights can help businesses understand the unique value this community brings to fighting against today's AI-driven cyberattacks."

Among the report's other findings, 93 percent of hackers agree that companies using AI tools have created a new attack vector, 86 percent believe AI has fundamentally changed their approach to hacking, and 74 percent agree that AI has made hacking more accessible, opening the door to newcomers. However, 73 percent of hackers remain confident in their ability to uncover vulnerabilities in AI-powered apps.

Patrick Harr, CEO at SlashNext Email Security+, says, "This report reinforces what we have stated this past year -- AI is game-changing for business and organizations, however, it is also a productivity breakthrough for hackers to attack at scale at near zero cost. AI assisted attacks are now commonplace in BEC, phishing and social engineering. We anticipate that it will become more prevalent in Malware and Large Language Model (LLM) poisoning and model injection. We are at the dawn of next level, AI assisted attacks which will continue to accelerate due to the profit motives highlighted in this latest study by Bugcrowd."

The full Inside the Mind of a Hacker 2024 report is available from the Bugcrowd site.
