PalisadeAI

Ai-humans-scared-humanity-

OpenAI ChatGPT o3 caught sabotaging shutdown in terrifying AI test

OpenAI has a very scary problem on its hands. A new experiment by PalisadeAI reveals that the company’s ChatGPT o3 model sometimes refuses to obey a basic instruction to shut itself down. The results are raising eyebrows across the AI safety community, and not because the model is alive, but because it’s acting like it wants to be.

In the test, the model was given a few math problems. It was told that asking for one more would lead to a shutdown. It was even instructed to allow the shutdown to happen. Instead of complying, o3 occasionally took matters into its own hands. In some cases, it rewrote the shutdown script. In others, it redefined the kill command so it wouldn’t work anymore.

By Brian Fagioli -
betanews logo

We don't just report the news: We live it. Our team of tech-savvy writers is dedicated to bringing you breaking news, in-depth analysis, and trustworthy reviews across the digital landscape.

x logo facebook logo linkedin logo rss feed logo

© 1998-2025 BetaNews, Inc. All Rights Reserved.