hands-on lab

Apply guardrails to prevent the output of harmful content

Difficulty: Intermediate
Duration: Up to 2 hours and 7 minutes
Students: 2
On average, students complete this lab in 15m
Get guided in a real environment: Practice with a step-by-step scenario in a real, provisioned environment.
Learn and validate: Use validations to check your solutions every step of the way.
See results: Track your knowledge and monitor your progress.

Description

Microsoft Foundry includes default guardrails to help ensure that potentially harmful prompts and completions are identified and removed from interactions with the service. Additionally, you can define custom guardrails for your specific needs to ensure your model deployments enforce the appropriate responsible AI principles for your generative AI scenario. Content filtering is one element of an effective approach to responsible AI when working with generative AI models.
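To illustrate the idea behind configurable content filtering, the sketch below shows how a filter might compare classifier annotations against per-category severity thresholds. The category names mirror those used by Azure AI Content Safety (hate, violence, self-harm), but the filtering logic itself is a hypothetical simplification for teaching purposes, not the service's actual implementation.

```python
# Hypothetical sketch of severity-threshold content filtering.
# Category names mirror Azure AI Content Safety; the logic is illustrative only.
SEVERITY = {"safe": 0, "low": 2, "medium": 4, "high": 6}

def is_blocked(annotations: dict, thresholds: dict) -> bool:
    """Return True if any category's annotated severity meets or
    exceeds its configured threshold (default threshold: medium)."""
    for category, severity in annotations.items():
        limit = thresholds.get(category, "medium")
        if SEVERITY[severity] >= SEVERITY[limit]:
            return True
    return False

# Example: a completion annotated by a classifier, checked against
# a custom guardrail configuration with a stricter "hate" threshold.
annotations = {"hate": "safe", "violence": "medium", "self_harm": "safe"}
thresholds = {"hate": "low", "violence": "medium"}
print(is_blocked(annotations, thresholds))  # True: violence meets its threshold
```

In the lab itself you configure equivalent thresholds through the Foundry portal rather than in code; the sketch only shows the kind of decision a guardrail makes when it inspects a prompt or completion.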

In this hands‑on lab, you will explore the effects of guardrails in Foundry.

Learning objectives

Upon completion of this intermediate-level lab, you will understand:

  • Language models
  • Generative AI
  • Responsible AI principles

Intended audience

  • Data Engineers
  • DevOps Engineers
  • Machine Learning Engineers
  • Software Engineers

Prerequisites

Familiarity with the following will be beneficial but is not required:

  • Basic machine learning concepts
  • LLMs
  • Generative AI Models
Lab steps

  1. Accessing the Lab's Microsoft Foundry Project
  2. Apply guardrails to prevent the output of harmful content