Microsoft launches Azure AI Content Safety to filter harmful content from users and AI


Microsoft has announced the general availability of its Azure AI Content Safety service. The system is designed to help platform operators detect and filter harmful content -- whether generated by users or by artificial intelligence -- in their services and apps.

Azure AI Content Safety can be used to scan and block both text and image-based content, and Microsoft has made it highly customizable so it can be tailored to different scenarios. The company says the service can be used to weed out content that is offensive, risky, or undesirable, such as profanity, adult content, gore, violence, hate speech, and so on.


Announcing the availability of Azure AI Content Safety, Microsoft points out that the benefits of controlling content in this way are numerous. As well as improving the experience for users by eliminating unwanted or harmful content, filtering also protects brand image by avoiding association with such content.

The company highlights various features of Azure AI Content Safety, pointing particularly to its ability to handle multiple languages. Microsoft refers to "360-degree comprehensive safety" in relation to the service's text and image analysis capabilities. Added to this is the ability to categorize content for filtering in different ways, along with severity indication to allow for better threat analysis.
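To illustrate how the category and severity signals might be put to work, here is a minimal sketch of a moderation filter. The harm category names (Hate, Sexual, Violence, SelfHarm) follow the service's documentation, but the threshold values, the result shape, and the `should_block` helper are illustrative assumptions for this sketch, not part of Microsoft's SDK.

```python
# Hypothetical moderation filter built on per-category severity results.
# Category names match Azure AI Content Safety's documented harm categories;
# the thresholds and this helper function are illustrative assumptions.

DEFAULT_THRESHOLDS = {
    "Hate": 2,
    "Sexual": 2,
    "Violence": 4,
    "SelfHarm": 2,
}

def should_block(analysis, thresholds=DEFAULT_THRESHOLDS):
    """Return True if any category's severity meets its threshold.

    `analysis` is a list of {"category": str, "severity": int} dicts,
    mirroring the shape of the service's per-category results.
    """
    for result in analysis:
        limit = thresholds.get(result["category"])
        if limit is not None and result["severity"] >= limit:
            return True
    return False

# A post with moderate hate severity is blocked, while mild violence
# alone passes the more permissive violence threshold.
print(should_block([{"category": "Hate", "severity": 4}]))      # True
print(should_block([{"category": "Violence", "severity": 2}]))  # False
```

Tuning thresholds per category is the point of the severity signal: a gaming forum might tolerate higher violence scores than a children's app, while keeping hate speech thresholds strict.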

In a blog post about the service, Microsoft's Louise Han says:

We are excited to announce the general availability of Azure AI Content Safety, a new service that helps you detect and filter harmful user-generated and AI-generated content in your applications and services. Content Safety includes text and image detection to find content that is offensive, risky, or undesirable, such as profanity, adult content, gore, violence, hate speech, and more.

She goes on to add:

The impact of harmful content on platforms extends far beyond user dissatisfaction. It has the potential to damage a brand's image, erode user trust, undermine long-term financial stability, and expose the platform to potential legal liabilities.

It is crucial to consider more than just human-generated content, especially as AI-generated content becomes prevalent. Ensuring the accuracy, reliability, and absence of harmful or inappropriate materials in AI-generated outputs is essential. Content safety not only protects users from misinformation and potential harm but also upholds ethical standards and builds trust in AI technologies. By focusing on content safety, we can create a safer digital environment that promotes responsible use of AI and safeguards the well-being of individuals and society as a whole.

The Microsoft Responsible AI team is constructing a multi-layered system with robust safety models and technologies for these very reasons.

You can find out more in the blog post, and use Microsoft's interactive Azure AI Content Safety Studio to view, explore, and try out sample code.

Image credit: ankabala / depositphotos

