Hands-on Lab

Customizing Large Language Models Using Ollama

Difficulty: Beginner
Duration: Up to 1 hour
Students: 148
Rating: 4.3/5
On average, students complete this lab in 25 minutes.
  • Get guided in a real environment: Practice with a step-by-step scenario in a real, provisioned environment.
  • Learn and validate: Use validations to check your solutions every step of the way.
  • See results: Track your knowledge and monitor your progress.

Description

Ollama is a tool for creating, customizing, and running Large Language Models. It can be used via the command line, a web interface, or an API. By using Ollama, you can run a model locally, which may be advantageous if you do not want to share your data with an LLM service provider.
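
For example, once Ollama is installed, starting the server and talking to a model from the command line looks roughly like the sketch below. The llama3.2 model name is just an illustrative choice, and on the lab's virtual machine the server may already be running as a service:

    # Start the Ollama server; by default it listens on http://localhost:11434
    ollama serve &

    # Download the model if needed, then send it a one-off prompt from the CLI
    ollama run llama3.2 "In one sentence, what is an Ollama model file?"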

Learning how to start, use, and configure Ollama is a valuable skill for anyone looking to integrate Large Language Models or Generative AI into their applications or workflows.

In this hands-on lab, you will connect to a virtual machine and use Ollama to run and customize a model.

Learning objectives

Upon completion of this beginner-level lab, you will be able to:

  • Start the Ollama server
  • Create and implement an Ollama model file
  • Use the Ollama API
  • Analyze log entries with Ollama (see the example after this list)
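
As a preview of the last two objectives, the request below uses the Ollama REST API to ask a locally running model to interpret a single, made-up SSH log entry. This is only a sketch: the llama3.2 model name is a placeholder, and it assumes the server is listening on its default port, 11434.

    # Ask the model to explain a hypothetical failed-login log entry
    curl http://localhost:11434/api/generate -d '{
      "model": "llama3.2",
      "prompt": "Explain this log entry and note anything suspicious: sshd[1234]: Failed password for invalid user admin from 203.0.113.7",
      "stream": false
    }'

Setting "stream" to false returns a single JSON object whose response field contains the model's full answer, which is convenient for scripting.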

Intended audience

  • Anyone looking to learn about Large Language Models
  • Cloud Architects
  • Data Engineers
  • DevOps Engineers
  • Machine Learning Engineers
  • Software Engineers

Prerequisites

Familiarity with the following will be beneficial but is not required:

  • Ollama
  • Large Language Models
  • The Bash shell

Updates

October 15th 2024 - Resolved ollama command issues

Lab steps

Logging In to the Amazon Web Services Console
Connecting to the Virtual Machine Using EC2 Instance Connect
Creating an Ollama Model File
Customizing an Ollama Model
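
The final two steps center on an Ollama model file (a Modelfile), which layers custom parameters and a system prompt on top of a base model. A minimal sketch of that workflow follows; the llama3.2 base model and the log-helper name are illustrative choices, not necessarily the ones used in the lab. A Modelfile might contain:

    FROM llama3.2
    PARAMETER temperature 0.3
    SYSTEM """
    You are a concise assistant that explains Linux log entries for on-call engineers.
    """

Building and running the customized model then takes two commands:

    # Build the custom model from the Modelfile, then chat with it
    ollama create log-helper -f Modelfile
    ollama run log-helper "What does 'Failed password for invalid user admin' mean?"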