Analyzing and Moderating Text With Azure AI Content Safety Service Using Python SDK
Description
Azure AI Content Safety is a service that uses machine learning to detect and moderate potentially unsafe content in text and images. It flags material that is inappropriate, offensive, or harmful, covering both user-generated and AI-generated content, and supports moderation in real time or in batch mode.
Organizations can use Azure AI Content Safety to moderate user-generated content in real time, such as comments, chat messages, and social media posts, or in batch mode, such as images uploaded to a website or app. The service applies across a wide range of industries, including social media, gaming, e-commerce, and education.
In this hands-on lab, you will learn how to use Azure AI Content Safety to moderate text for inappropriate language. You will use the Content Safety SDK for Python to call the Content Safety API and analyze text content.
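As a preview of the kind of code you will write, here is a minimal sketch of text analysis with the `azure-ai-contentsafety` package. The environment-variable names and the severity threshold are illustrative choices, not part of the lab; the SDK calls (`ContentSafetyClient`, `AnalyzeTextOptions`, `analyze_text`) follow the 1.x Python SDK.

```python
import os


def flagged_categories(analysis, threshold=2):
    """Return the names of harm categories whose severity meets the threshold.

    `analysis` is a list of dicts with `category` and `severity` keys,
    mirroring the shape of the SDK's AnalyzeTextResult.categories_analysis.
    """
    return [item["category"] for item in analysis if item["severity"] >= threshold]


# Example of the response shape returned by the text-analysis operation;
# the severity values here are made up for illustration.
sample = [
    {"category": "Hate", "severity": 0},
    {"category": "SelfHarm", "severity": 0},
    {"category": "Sexual", "severity": 0},
    {"category": "Violence", "severity": 4},
]
print(flagged_categories(sample))  # → ['Violence']


def analyze(text):
    """Call the Content Safety text API and return per-category severities.

    Requires the azure-ai-contentsafety package; the environment-variable
    names below are an assumption for this sketch.
    """
    from azure.ai.contentsafety import ContentSafetyClient
    from azure.ai.contentsafety.models import AnalyzeTextOptions
    from azure.core.credentials import AzureKeyCredential

    client = ContentSafetyClient(
        endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
    )
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    return [
        {"category": item.category, "severity": item.severity}
        for item in result.categories_analysis
    ]
```

In the lab you will create the Content Safety resource, retrieve its endpoint and key, and then wire them into a client like the one above.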
Learning objectives
Upon completion of this advanced-level lab, you will be able to:
- Understand the features and capabilities of Azure Content Safety
- Use the Azure Content Safety SDK for Python to interact with the Content Safety API
- Analyze text content for inappropriate language using Azure Content Safety
Intended audience
- Candidates for Azure AI Engineer Associate certification (AI-102)
- AI Engineers
- Prompt Engineers
- Cloud Architects
- Data Engineers
- Machine Learning Engineers
Prerequisites
Familiarity with the following will be beneficial but is not required:
- Azure Python SDK
- Azure AI services
The following content can be used to fulfill the prerequisites:
- Interacting with Azure Web App using Python SDK
- Creating a Language Understanding Model Using Azure Language Service
Updates
June 17, 2024 - Updated step guidance and added screenshot to reflect the latest UI changes.