Hands-on Lab

Optimizing Prompts For Large Language Models Using Amazon Bedrock

Difficulty: Beginner
Duration: Up to 1 hour
Students: 101
Rating: 5/5

Description

Large Language Models (LLMs) are powerful tools capable of summarization, classification, and question-answering. They also have mathematical and logical reasoning capabilities. To use an LLM most effectively, you need to craft concise, accurate prompts.

Learning how to develop and design prompts for LLMs is a valuable skill that will benefit anyone looking to work with generative AI models.

In this hands-on lab, you will compare and contrast poor prompts with better ones, and you will examine techniques to improve the responses you receive.
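
For a concrete sense of the difference, the sketch below (a minimal example, not the lab's exact code; it assumes boto3 credentials with Amazon Bedrock model access, and the Region and model ID are placeholders) sends a vague summarization prompt and a more specific one through the Bedrock Converse API:

    # A minimal sketch: the Region and model ID are placeholders, and the
    # credentials are assumed to already have Bedrock model access.
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    ARTICLE = "(paste the text you want summarized here)"

    vague_prompt = f"Summarize this: {ARTICLE}"
    better_prompt = (
        "Summarize the article below in exactly three bullet points, "
        "each under 20 words, written for a non-technical audience.\n\n"
        f"Article:\n{ARTICLE}"
    )

    def ask(prompt: str) -> str:
        # Single-turn request through the Converse API; return the reply text.
        response = bedrock.converse(
            modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
            messages=[{"role": "user", "content": [{"text": prompt}]}],
        )
        return response["output"]["message"]["content"][0]["text"]

    print(ask(vague_prompt))   # typically long and unfocused
    print(ask(better_prompt))  # constrained and easier to reuse

The only difference between the two calls is the prompt: spelling out the length, format, and audience usually produces a far more predictable response.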

Learning objectives

Upon completion of this beginner-level lab, you will be able to:

  • Summarize and classify information using an LLM
  • Control the randomness of the model's responses
  • Specify the format of the response you want (both are illustrated in the sketch after this list)
  • Improve the quality of the responses you receive
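
As one illustration of the second and third objectives, the following sketch (again a minimal example with a placeholder Region and model ID) classifies a review while pinning down both the randomness of the sampling and the shape of the output:

    # A minimal sketch: inferenceConfig keeps sampling nearly deterministic,
    # and the prompt itself spells out the exact output format we expect.
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    prompt = (
        "Classify the sentiment of the review below as Positive, Negative, or Neutral. "
        'Respond with JSON only, in the form {"sentiment": "..."}.\n\n'
        "Review: The battery life is great, but the screen scratches far too easily."
    )

    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={
            "temperature": 0.1,  # low randomness -> more repeatable answers
            "topP": 0.9,
            "maxTokens": 100,
        },
    )

    # Expect something like {"sentiment": "Neutral"}, because the prompt demanded JSON only.
    print(response["output"]["message"]["content"][0]["text"])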

Intended audience

  • Anyone looking to use Large Language Models in their workflow or applications
  • Cloud Architects
  • Data Engineers
  • DevOps Engineers
  • Machine Learning Engineers
  • Software Engineers

Prerequisites

Familiarity with the following will be beneficial but is not required:

  • Amazon Bedrock
  • Large Language Models

Lab steps

Logging In to the Amazon Web Services Console
Structuring Prompts and Controlling Responses
Controlling Output and Solving Complex Problems
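
The final step covers prompts that ask the model to reason before answering. A minimal sketch of that technique (same assumptions as above: boto3 with Bedrock access, placeholder Region and model ID) looks like this:

    # A minimal sketch: asking the model to work step by step before answering
    # usually improves accuracy on multi-step arithmetic and logic problems.
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    prompt = (
        "A train departs at 14:10 and arrives at 16:45, including a 20-minute stop. "
        "For how long is the train actually moving? "
        "Think through the problem step by step, then give the final answer "
        "on its own line, prefixed with 'Answer:'."
    )

    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"temperature": 0.0, "maxTokens": 400},
    )

    print(response["output"]["message"]["content"][0]["text"])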