Microsoft has announced a new AI-powered moderation service that aims to create safer and more inclusive online communities. The service, called Azure AI Content Safety, is part of the Azure AI product platform and offers a range of AI models that can detect and flag inappropriate content across images and text.
According to Microsoft, the models can understand text in eight languages: English, Spanish, German, French, Japanese, Portuguese, Italian and Chinese. They can also assign a severity score to the flagged content, indicating to moderators what content requires action. The service can protect against biased, sexist, racist, hateful, violent and self-harm content, Microsoft says.
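To make the severity scoring concrete, the sketch below calls the service's text-analysis REST endpoint from Python. It is a minimal example rather than production code: the resource endpoint and key are placeholders, and the `text:analyze` route, `api-version` value and `categoriesAnalysis` response field reflect the publicly documented API at the time of writing, so check the current Azure documentation before relying on them.

```python
import requests

# Placeholders: substitute your own Content Safety resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"

def analyze_text(text: str) -> dict:
    """Send one text record to Azure AI Content Safety and return the raw JSON result."""
    response = requests.post(
        f"{ENDPOINT}/contentsafety/text:analyze",
        params={"api-version": "2023-10-01"},  # documented GA version at the time of writing
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
        json={"text": text},
    )
    response.raise_for_status()
    return response.json()

result = analyze_text("Example user comment to screen.")
# Each flagged category (e.g. Hate, Violence, SelfHarm, Sexual) carries its own severity score.
for item in result.get("categoriesAnalysis", []):
    print(item["category"], item["severity"])
```

Because each category is scored separately, moderators can triage rather than simply block, for example queuing borderline posts for human review while rejecting only the most severe.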
Azure AI Content Safety is the same technology that safeguards Microsoft's own AI products, including its GPT-4-powered Bing Chat chatbot and Copilot, GitHub's AI-powered code-generation service. Microsoft says it has been working on solutions for online content moderation for over two years and has refined its models to account for context and cultural differences.
Azure AI Content Safety is integrated with Azure OpenAI Service, Microsoft's fully managed service that gives businesses access to OpenAI's technologies with added governance and compliance features. But Azure AI Content Safety can also be applied to non-AI systems, such as online communities and gaming platforms. Pricing starts at $1.50 per 1,000 images and $0.75 per 1,000 text records.
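For a non-AI system such as an online community or gaming platform, those per-category severity scores can drive a simple moderation gate. The snippet below is a hypothetical sketch: it reuses the `analyze_text` helper from the earlier example, and the threshold and routing decisions are illustrative policy choices, not recommendations from Microsoft.

```python
# Hypothetical moderation gate for user-generated posts.
# Assumes analyze_text() from the previous snippet is in scope.
SEVERITY_THRESHOLD = 4  # illustrative cutoff; tune to your community's policy

def screen_post(post_text: str) -> str:
    """Route a post based on the worst severity score across all categories."""
    result = analyze_text(post_text)
    worst = max(
        (item["severity"] for item in result.get("categoriesAnalysis", [])),
        default=0,
    )
    if worst == 0:
        return "publish"          # nothing flagged
    if worst < SEVERITY_THRESHOLD:
        return "hold_for_review"  # low severity: queue for a human moderator
    return "reject"               # high severity: block automatically

print(screen_post("Example user comment to screen."))
```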
Creating a Safer Online Experience
Azure AI Content Safety is comparable to other AI-powered toxicity-detection services, such as the Perspective API maintained by Google's Counter Abuse Technology Team and Jigsaw. Microsoft, however, claims its service is more comprehensive and transparent than those competitors.
Microsoft hopes Azure AI Content Safety will help online platforms foster more positive and respectful interactions among their users. The service is available now through the Azure portal, and it was one of several AI announcements Microsoft made at Build 2023.