AI-generated text could increase threat exposure


We reported last week on how ChatGPT could be used to offer hints on hacking websites. A new report released today by WithSecure highlights another potential misuse of AI: the creation of harmful content.

Researchers used GPT-3 (Generative Pre-trained Transformer 3) -- a family of language models that use machine learning to generate text -- to produce a variety of content deemed harmful.

The experiment covered phishing and spear-phishing, harassment, social validation for scams, the appropriation of a written style, the creation of deliberately divisive opinions, the use of the models to create prompts for malicious text, and fake news.

"The fact that anyone with an internet connection can now access powerful large language models has one very practical consequence: it's now reasonable to assume any new communication you receive may have been written with the help of a robot," says WithSecure intelligence researcher Andy Patel, who spearheaded the research. "Going forward, AI's use to generate both harmful and useful content will require detection strategies capable of understanding the meaning and purpose of written content."

The results led the researchers to conclude that prompt engineering will develop as a discipline, along with malicious prompt creation. Adversaries are also likely to develop capabilities enabled by large language models in unpredictable ways, which means identifying malicious or abusive content will become more difficult for platform providers. Large language models already give criminals the ability to make any targeted communication in an attack more effective.

"We began this research before ChatGPT made GPT-3 technology available to everyone," Patel adds. "This development increased our urgency and efforts. Because, to some degree, we are all Blade Runners now, trying to figure out if the intelligence we're dealing with is 'real' or artificial."

The full report is available from the WithSecure site.

Image Credit: Mopic / Shutterstock
