Scammers turn to AI to improve their campaigns


The latest quarterly Consumer Cyber Safety Pulse Report from Norton looks at how cybercriminals can use artificial intelligence to create more realistic and sophisticated threats.

Tools like ChatGPT have captured people's attention recently, and it seems cybercriminals have noticed them too. The chatbot's impressive ability to generate human-like text that adapts to different languages and audiences also makes it well suited to crafting convincing malicious content.

The report also highlights that this ability can be used to create and spread misinformation to shape narratives, skew product review rankings and more.

While ChatGPT makes developers' lives easier with its ability to write and translate source code, it can also make cybercriminals' lives easier by making scams faster to create and more difficult to detect. We've already seen how it can offer tips for hacking websites.

"I'm excited about large language models like ChatGPT, however, I'm also wary of how cybercriminals can abuse it. We know cybercriminals adapt quickly to the latest technology, and we're seeing that ChatGPT can be used to quickly and easily create convincing threats," says Kevin Roundy, senior technical director of Norton. "It's getting harder than ever for people to spot scams on their own, which is why Cyber Safety solutions that look at more aspects of our digital lives -- from our mobile devices to our online identity, and the wellbeing of those around us -- are needed to keep us safe in our evolving world."

You can read more on the Norton blog, along with advice on how to stay safe from this latest generation of AI-enhanced threats.

Image credit: denisismagilov/depositphotos.com
