Hands-on Lab

Text Analysis and LLMs with Python - Module 1

Difficulty: Intermediate
Duration: Up to 1 hour and 30 minutes

Description

Overview of Large Language Models

In this lab, you will be introduced to large language models (LLMs), explore their evolution and architecture, and practice basic prompt engineering techniques to interact with them. You’ll also examine their capabilities, applications, and current limitations.

Learning objectives

Upon completion of this lab, you will be able to:

  • Define what large language models are and describe their core capabilities.
  • Discuss the evolution and architecture of LLMs.
  • Differentiate between LLM architectures and their applications.
  • Apply basic prompt engineering techniques to interact with LLMs.
  • Recognize the current limitations and challenges associated with using LLMs.

Intended audience

This lab is designed for:

  • Data Scientists
  • Software Developers
  • Machine Learning Engineers
  • AI Engineers
  • DevOps Engineers

Prerequisites

Completion of previous modules is highly recommended before attempting this lab.

Lab structure

Demo: “Meet Your First Large Language Model”
In this hands-on demo, you’ll connect the theory of LLMs to practice by making your first API call to an OpenAI model (gpt-4o-mini). You will:
- Set up the OpenAI API in Python.
- Send a simple text prompt and read the model’s reply.
- Refine your prompt to see how instructions change the output.
- Try zero-shot, one-shot, and few-shot prompting to steer the model without fine-tuning (a sketch of the basic call follows this list).
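For orientation, here is a minimal sketch of the kind of call the demo builds toward. It assumes the openai Python package (v1 or later) is installed and that an OPENAI_API_KEY environment variable is set; the lab notebook may configure the client differently, and the prompts are illustrative only. The second request shows how refining the prompt (a system message plus tighter instructions) changes the reply.

```python
# Minimal sketch: a first API call to gpt-4o-mini, then a refined version of
# the same request. Assumes the openai package (v1+) and OPENAI_API_KEY are
# configured; prompts are illustrative, not the lab's exact content.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A simple, unrefined prompt.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain what a large language model is."}],
)
print(response.choices[0].message.content)

# The same question with a refined instruction: constraining the audience and
# length typically changes the style and scope of the output.
refined = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise technical tutor."},
        {"role": "user", "content": "Explain what a large language model is "
                                    "in two sentences, for a developer new to NLP."},
    ],
)
print(refined.choices[0].message.content)
```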

Activity: “Test the Model’s Skills”
You’ll apply prompting techniques to test how well the model performs across different types of instructions.
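One way such an activity could be structured (a sketch only; the task names and prompts below are made up for illustration, not the lab's actual content) is to send the same model several different instruction types and compare its replies:

```python
# Hypothetical harness for probing the model across instruction types.
from openai import OpenAI

client = OpenAI()

tasks = {
    "summarize": "Summarize in one sentence: Transformers use self-attention to "
                 "weigh relationships between all tokens in a sequence.",
    "classify": "Label the sentiment of this review as positive or negative: "
                "'The interface is clunky and slow.'",
    "rewrite": "Rewrite in formal English: 'gonna need that report asap'",
}

for name, prompt in tasks.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {name} ---")
    print(reply.choices[0].message.content)
```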

Intended learning outcomes

  • Understand what “prompting” is and how LLMs generate text.
  • See how prompt wording affects the output.
  • Recognize the difference between zero-shot, one-shot, and few-shot prompting (in-context learning); a short comparison is sketched below.
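To make the zero- versus few-shot distinction concrete, the sketch below sends the same classification query twice: once with no examples, and once with two worked examples placed in the message history so the model can pick up the pattern in context. The ticket texts and labels are invented for illustration.

```python
# Zero-shot vs. few-shot (in-context learning) on a made-up classification task.
from openai import OpenAI

client = OpenAI()

instruction = "Classify the support ticket as BILLING, TECHNICAL, or OTHER."
query = "Ticket: 'My invoice shows the wrong amount.' Category:"

# Zero-shot: the model sees only the instruction and the query.
zero_shot = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": f"{instruction}\n{query}"}],
)

# Few-shot: two worked examples precede the real query, steering the output
# format and labels without any fine-tuning.
few_shot = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": f"{instruction}\nTicket: 'The app crashes on login.' Category:"},
        {"role": "assistant", "content": "TECHNICAL"},
        {"role": "user", "content": "Ticket: 'I was charged twice this month.' Category:"},
        {"role": "assistant", "content": "BILLING"},
        {"role": "user", "content": query},
    ],
)

print("Zero-shot:", zero_shot.choices[0].message.content)
print("Few-shot: ", few_shot.choices[0].message.content)
```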

Lab steps

  1. Starting the Notebooks