Enhancing Generative AI Models With Retrieval-Augmented Generation (RAG)
Description
Large language models (LLMs) have proven capable of generating human-like responses, but those responses can be enhanced to provide more accurate and relevant information. Retrieval-Augmented Generation (RAG) is a technique that combines the strengths of LLMs and information retrieval systems to generate text that is both fluent and factually grounded.
In this lab, you will learn about Retrieval-Augmented Generation, its use cases, and common components. You will also learn how to implement RAG in a Python application.
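To give a sense of what this looks like in practice, the following is a minimal sketch of a RAG flow using LangChain and Amazon Bedrock, the tools used in this lab. It assumes the langchain-aws and faiss-cpu packages are installed and that Bedrock model access is enabled in your AWS account; the sample documents and model IDs are illustrative placeholders, not the lab's actual configuration.

```python
# A minimal RAG sketch (illustrative only): index documents, retrieve the most
# relevant ones for a question, then augment the prompt before generating.
from langchain_aws import BedrockEmbeddings, ChatBedrock
from langchain_community.vectorstores import FAISS

# 1. Index a small knowledge base as vector embeddings (the retrieval store).
documents = [
    "Our support desk is open Monday to Friday, 9am to 5pm UTC.",
    "Refunds are processed within 5 business days of approval.",
]
embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v2:0")  # assumed model ID
vector_store = FAISS.from_texts(documents, embeddings)

# 2. Retrieve the passages most relevant to the user's question.
question = "How long do refunds take?"
retrieved = vector_store.similarity_search(question, k=1)
context = "\n".join(doc.page_content for doc in retrieved)

# 3. Augment the prompt with the retrieved context and generate a response.
llm = ChatBedrock(model_id="anthropic.claude-3-haiku-20240307-v1:0")  # assumed model ID
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
print(llm.invoke(prompt).content)
```

Because the generated answer is constrained to the retrieved context, the model's response reflects your own data rather than only what it learned during training; the lab walks through each of these components in more detail.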
Learning objectives
Upon completion of this beginner-level lab, you will be able to:
- Explain the concept of Retrieval-Augmented Generation (RAG) and its use cases
- Implement RAG in a Python application using LangChain and Amazon Bedrock
Intended audience
- Candidates for the AWS Certified Machine Learning Specialty certification
- Cloud Architects
- Software Engineers
Prerequisites
Familiarity with the following will be beneficial but is not required:
- Python
- Amazon Bedrock
The following content can be used to fulfill the prerequisites:
- Employing Generative AI for Development With Amazon Bedrock
- Optimizing Prompts For Large Language Models Using Amazon Bedrock
Updates
December 16th, 2024 - Resolved an issue preventing the lab from deploying