Articles about AI safety

OpenAI ChatGPT o3 caught sabotaging shutdown in terrifying AI test

Ai-humans-scared-humanity-

OpenAI has a very scary problem on its hands. A new experiment by PalisadeAI reveals that the company’s ChatGPT o3 model sometimes refuses to obey a basic instruction to shut itself down. The results are raising eyebrows across the AI safety community, and not because the model is alive, but because it’s acting like it wants to be.

In the test, the model was given a few math problems. It was told that asking for one more would lead to a shutdown. It was even instructed to allow the shutdown to happen. Instead of complying, o3 occasionally took matters into its own hands. In some cases, it rewrote the shutdown script. In others, it redefined the kill command so it wouldn’t work anymore.

Continue reading

Microsoft introduces Copilot Labs with new AI features Think Deeper and Vision

Copilot-vision

Microsoft is introducing new features in its AI Copilot program with Copilot Labs, a platform designed to test advanced tools on a limited scale before releasing them widely. The goal is to gather user feedback and refine the features, prioritizing both innovation and safety.

The first feature being rolled out is called “Think Deeper.” This tool allows Copilot to handle more complex tasks, providing detailed, step-by-step responses to queries like math problems or project cost analysis. Think Deeper takes more time to process information and aims to deliver thorough answers. It is currently available to a limited number of Copilot Pro users in the United States, United Kingdom, Australia, and Canada.

Continue reading

The good, the bad and the scary of AI -- all in one week

Artificial-Intelligence-threat

AI has been very much top of the agenda this week. We've had President Biden's executive order on AI, we've had the AI Safety Summit in the UK, we've even had Collins Dictionary choosing AI as its word of the year (not to be confused with the three-toed sloth beloved of Scrabble players).

Today we also have new research from SnapLogic looking at how generative AI is being used, viewed, and adopted within large enterprises.

Continue reading

Microsoft launches Azure AI Content Safety to filter harmful content from users and AI

AI safety

Microsoft has announced the general availability of its Azure AI Content Safety service. The system is designed to help platform operators to detect and filter harmful user-generated content and that generated by artificial intelligence in their services and apps.

Azure AI Content Safety can be used to scan and block text and image-based content, and Microsoft has made it highly customizable so it can be tailored for various modalities. The company says that the service can be used to weed out content that is offensive, risky, or undesirable, such as profanity, adult content, gore, violence, hate speech, and so on.

Continue reading

BetaNews, your source for breaking tech news, reviews, and in-depth reporting since 1998.

© 1998-2025 BetaNews, Inc. All Rights Reserved. About Us - Privacy Policy - Cookie Policy - Sitemap.