How AI is weaponized for cyberattacks

A new report from Abnormal Security highlights real-world examples of how AI is being used to carry out cyberattacks.

Generative AI allows scammers to craft unique email content for every message, making detection that relies on matching known malicious text strings far less effective.
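To see why, consider a toy signature filter. The sketch below is purely illustrative -- the blocklist phrases, function name and sample emails are all hypothetical, and real filters are more sophisticated -- but the failure mode is the same: a filter keyed to known strings misses a lure that has been uniquely reworded.

```python
# Minimal sketch of signature-based filtering (hypothetical phrases and emails).
KNOWN_BAD_PHRASES = [
    "your account has been suspended",   # assumed example of a known scam string
    "verify your payment details now",
]

def signature_filter(email_body: str) -> bool:
    """Return True if the email matches a known malicious string."""
    body = email_body.lower()
    return any(phrase in body for phrase in KNOWN_BAD_PHRASES)

# A reused template is caught...
print(signature_filter("URGENT: your account has been suspended."))  # True

# ...but an AI-paraphrased version of the same lure slips through.
print(signature_filter(
    "We have paused access to your profile; please confirm your billing information."
))  # False
```

Because a generative model can produce an effectively unlimited number of such paraphrases, every message can look "new" to a string-matching filter.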

Mike Britton, CISO at Abnormal Security, writes on the company's blog, "To illustrate how AI is being weaponized, we've collected real-world examples of likely AI-generated malicious emails our customers have received in the last year. These examples point to a startling conclusion: threat actors have clearly embraced the malicious use of AI. This also means that organizations must respond in kind -- by implementing AI-powered cybersecurity solutions to stop these attacks before they reach employee inboxes."

Analysis of the attacks using the Giant Language Model Test Room (GLTR, http://gltr.io/) indicates how likely it is that they were generated by AI. The report details attacks claiming to be from insurance companies, Netflix and cosmetics brands, all of which show signs of having been AI-generated.
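GLTR's core idea is to score each token of a text by how highly a language model would have ranked it as the next word; machine-generated text draws disproportionately from the model's top-ranked predictions. As a rough illustration only -- this is not GLTR itself, and the model choice, the top-10 threshold and the sample sentence are all assumptions -- a sketch of the token-rank idea using GPT-2 via Hugging Face transformers might look like this:

```python
# Sketch of GLTR-style token-rank analysis (illustrative, not the GLTR tool).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def top10_fraction(text: str) -> float:
    """Fraction of tokens ranking in the model's top-10 next-token predictions.

    Machine-generated text tends to score high here; human writing picks
    lower-ranked, less predictable words more often.
    """
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    in_top10 = 0
    for pos in range(1, ids.size(1)):
        scores = logits[0, pos - 1]            # predictions for the token at `pos`
        rank = (scores > scores[ids[0, pos]]).sum().item() + 1
        in_top10 += rank <= 10
    return in_top10 / (ids.size(1) - 1)

# Hypothetical phishing-style sentence for demonstration.
print(top10_fraction("Dear customer, your insurance policy requires immediate attention."))
```

A high top-10 fraction suggests text the model finds very predictable, which is one signal (among many) that it may have been machine-generated.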

Britton continues, "Despite the fact that generative AI has only been used widely for a year, it is obvious that the potential is there for widespread abuse. For security leaders, this is a wake up call to prioritize cybersecurity measures to safeguard against these threats before it is too late. The attacks shown here are well-executed, but they are only the beginning of what is possible."

The report also suggests that we're reaching a stage where only AI will be able to stop AI attacks, since it can detect malicious messages that would bypass conventional security solutions.

You can read more and get the full report on the Abnormal Security blog.

Image credit: sdecoret/depositphotos.com

