In this lab, you will learn what prompt engineering is, why it matters, and how to design effective prompts for language models. You’ll practice zero/one/few-shot prompting, chain-of-thought (for step-by-step tasks), and sampling controls to balance precision and creativity.
Completion of previous modules is highly recommended before attempting this lab.
Demo: Mastering Prompt Engineering with GPT-4o-mini
In this demo, you will explore how small wording changes can dramatically change LLM outputs. You will:
- Break down the components of a strong prompt (role, audience, task, constraints, format); see the sketch after this list.
- Use zero-/one-/few-shot prompting to steer behaviour.
- Apply concise chain-of-thought prompting for step-by-step tasks.
- Tune sampling controls like temperature and top-p (and top-k where supported).
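As a concrete reference for the first bullet, here is a minimal sketch of a structured zero-shot prompt sent to gpt-4o-mini. It assumes the OpenAI Python SDK (openai>=1.0) with an OPENAI_API_KEY set in the environment; the instructor role, audience, and word-count constraint are illustrative choices, not part of the lab materials.

```python
# Minimal sketch: a structured zero-shot prompt for gpt-4o-mini.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

system_prompt = (
    "Role: You are a senior Python instructor.\n"              # role
    "Audience: beginners with no prior coding experience."     # audience
)

user_prompt = (
    "Task: Explain what a Python list is.\n"                   # task
    "Constraints: under 80 words, no jargon.\n"                # constraints
    "Format: two short paragraphs, then one small code snippet."  # format
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
    temperature=0.3,  # lower temperature for a precise, instructional answer
)
print(response.choices[0].message.content)
```

Keeping each component on its own labelled line makes it easy to swap the audience or tighten the constraints without rewriting the whole prompt.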
Intended learning outcomes:
- Craft clearer prompts using role, audience, task, constraints, and format.
- Use zero/one/few-shot examples to guide the model (a few-shot sketch follows this list).
- Trigger brief step-wise reasoning when appropriate.
- Adjust temperature/top-p to make outputs more precise or more creative.
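To ground the zero/one/few-shot outcome, the following sketch shows how two in-context examples ("shots") steer both behaviour and output format. The sentiment-labelling task and the review texts are hypothetical; the API usage assumes the same OpenAI Python SDK as above.

```python
# Minimal few-shot sketch: two in-context examples steer the label format.
# Assumes the OpenAI Python SDK (openai>=1.0); the review texts are invented for illustration.
from openai import OpenAI

client = OpenAI()

few_shot_messages = [
    {"role": "system", "content": "Classify the sentiment of a review as POSITIVE, NEGATIVE, or NEUTRAL. Reply with the label only."},
    # Shot 1
    {"role": "user", "content": "Review: The tour guide was friendly and the views were stunning."},
    {"role": "assistant", "content": "POSITIVE"},
    # Shot 2
    {"role": "user", "content": "Review: The bus was late and the audio headset did not work."},
    {"role": "assistant", "content": "NEGATIVE"},
    # New input the model should classify in the same style
    {"role": "user", "content": "Review: The museum was fine, nothing special."},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=few_shot_messages,
    temperature=0.0,  # keep labelling as repeatable as possible
)
print(response.choices[0].message.content)  # expected style: a single label, e.g. NEUTRAL
```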
Activity: Designing the Perfect Virtual Travel Guide
In this activity, you will improve a virtual travel-guide chatbot by tightening prompt structure, steering style with examples, adding light reasoning where needed, and tuning sampling for creativity vs precision.
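One way to add the light reasoning mentioned above is to ask for a short, numbered chain of thought before the final answer. The sketch below is illustrative: the layover scenario, destination, and step labels are invented for the example, and the API usage again assumes the OpenAI Python SDK.

```python
# Minimal sketch: concise chain-of-thought for a multi-step travel question.
# Assumes the OpenAI Python SDK (openai>=1.0); the scenario details are illustrative.
from openai import OpenAI

client = OpenAI()

cot_prompt = (
    "You are a virtual travel guide.\n"
    "Question: I have a 6-hour layover in Lisbon starting at 10:00. "
    "Can I visit Belém Tower and still be back at the airport 2 hours before my next flight?\n"
    "Think it through briefly in 3 numbered steps (travel time out, visit time, travel time back), "
    "then give a one-sentence recommendation."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": cot_prompt}],
    temperature=0.2,  # keep the step-by-step reasoning focused
)
print(response.choices[0].message.content)
```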
Intended learning outcomes:
- Identify weaknesses in vague prompts and improve them with structure.
- Apply zero-shot, one-shot, and few-shot examples to guide behaviour.
- Use brief chain-of-thought prompting for multi-step questions.
- Adjust creativity and precision with temperature and top-p, as sketched below.
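Finally, a minimal sketch of the sampling comparison referenced in the last outcome: the same travel-guide prompt is run at a low and a high temperature so the difference between precise and creative outputs is easy to see. The prompt text and the specific values are illustrative, and the API usage assumes the OpenAI Python SDK.

```python
# Minimal sketch: the same prompt sampled at different temperatures.
# Assumes the OpenAI Python SDK (openai>=1.0); prompt and settings are illustrative.
from openai import OpenAI

client = OpenAI()

prompt = "Suggest a name and one-line tagline for a walking tour of Kyoto's old tea districts."

for temperature in (0.2, 0.9):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # low = precise and repeatable, high = more varied and creative
        top_p=1.0,                # common practice: tune temperature or top_p, not both at once
    )
    print(f"temperature={temperature}: {response.choices[0].message.content}")
```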