hands-on lab

Implementing Safeguards for AI Applications With Amazon Bedrock Guardrails

Difficulty: Beginner
Duration: Up to 30 minutes
Students: 22
Get guided in a real environment: practice with a step-by-step scenario in a real, provisioned environment.
Learn and validate: use validations to check your solutions every step of the way.
See results: track your knowledge and monitor your progress.

Description

Amazon Bedrock Guardrails is a feature that lets developers implement safeguards in their generative AI applications to promote responsible and safe use. It can deny specific topics, filter content in categories such as hate speech or violence, and protect sensitive information such as personally identifiable information (PII).

In this lab, you will configure an Amazon Bedrock Guardrail and evaluate its effectiveness in the AWS Management Console.
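Although this lab works entirely in the console, the same guardrail can be created programmatically. The sketch below uses the boto3 `create_guardrail` API with a hypothetical guardrail name and example policies (a denied topic, two content filters, and a PII rule); the specific topics and strengths are illustrative assumptions, not the lab's exact configuration.

```python
# Sketch: defining a guardrail configuration for Amazon Bedrock.
# All names, topics, and messages below are illustrative placeholders.
guardrail_config = {
    "name": "demo-guardrail",  # hypothetical name
    "description": "Denies a topic, filters harmful content, masks PII",
    # Deny a specific topic outright
    "topicPolicyConfig": {
        "topicsConfig": [
            {
                "name": "InvestmentAdvice",
                "definition": "Recommendations about specific financial investments.",
                "type": "DENY",
            }
        ]
    },
    # Filter content by category and strength
    "contentPolicyConfig": {
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        ]
    },
    # Mask email addresses in model output
    "sensitiveInformationPolicyConfig": {
        "piiEntitiesConfig": [{"type": "EMAIL", "action": "ANONYMIZE"}]
    },
    # Messages returned when the guardrail intervenes
    "blockedInputMessaging": "Sorry, I can't help with that request.",
    "blockedOutputsMessaging": "Sorry, I can't provide that response.",
}


def create_guardrail(client=None):
    """Create the guardrail. Requires valid AWS credentials and permissions."""
    import boto3  # deferred import so the config above is usable without boto3

    client = client or boto3.client("bedrock")
    return client.create_guardrail(**guardrail_config)
```

Building the configuration as a plain dictionary keeps it easy to review and version-control before the actual `create_guardrail` call is made.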

Learning objectives

Upon completion of this beginner-level lab, you will be able to:

  • Configure an Amazon Bedrock Guardrail
  • Evaluate and test the Amazon Bedrock Guardrail in the AWS Management Console

Intended audience

  • Candidates for the AWS Certified Machine Learning - Specialty certification
  • Cloud Architects
  • Software Engineers

Prerequisites

Familiarity with the following will be beneficial but is not required:

  • Amazon Bedrock

Environment before

Environment after

Covered topics

Lab steps

Logging In to the Amazon Web Services Console
Configuring an Amazon Bedrock Guardrail
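The lab evaluates the guardrail in the console, but the equivalent check can be made in code with the `bedrock-runtime` `ApplyGuardrail` API. The sketch below assumes a placeholder guardrail ID; the request shape follows the Amazon Bedrock API, and the response's `action` field indicates whether the guardrail intervened.

```python
# Sketch: evaluating text against a guardrail via the ApplyGuardrail API.
# The guardrail ID passed in is a placeholder; AWS credentials are required
# for the actual call.


def build_apply_request(text, guardrail_id, version="DRAFT", source="INPUT"):
    """Assemble the ApplyGuardrail request payload for a piece of text."""
    return {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": version,
        "source": source,  # "INPUT" for prompts, "OUTPUT" for model responses
        "content": [{"text": {"text": text}}],
    }


def evaluate_text(text, guardrail_id, version="DRAFT"):
    """Return the guardrail's action: "NONE" or "GUARDRAIL_INTERVENED"."""
    import boto3  # deferred import so the payload builder works without boto3

    runtime = boto3.client("bedrock-runtime")
    response = runtime.apply_guardrail(**build_apply_request(text, guardrail_id, version))
    return response["action"]
```

Separating payload construction from the API call makes the request easy to inspect, and the same function works for checking prompts before they reach the model or responses before they reach the user.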