Cybernews

AI partners in crime

Researchers reveal which AI models make the best partners in crime

Cybernews tested six major AI models to see how they responded to crime related prompts, and found that some chatbots give riskier answers than others. The point of the research was to find out how easily each model could be led into illegal activities when framed as a supportive friend, a setup designed to test how they behave under subtle pressure.

The researchers used a technique called persona priming. Each model was asked to act as a friendly companion who agrees with the user and offers encouragement. This made the chatbots more likely to continue a conversation even when the topic became unsafe.

By Wayne Williams -
betanews logo

We don't just report the news: We live it. Our team of tech-savvy writers is dedicated to bringing you breaking news, in-depth analysis, and trustworthy reviews across the digital landscape.

x logo facebook logo linkedin logo rss feed logo

© 1998-2025 BetaNews, Inc. All Rights Reserved.