🦙Ollama & Chatbox

Frontend Chat UI

Introduction

Ollama is an open-source platform for running large language models (LLMs) locally on personal computers. It lets users download, run, and customize open-source models such as Llama, Mistral, and many others without relying on cloud services or remote API calls.

Ollama provides a simple command-line interface and a local REST API for integration with other applications, and it supports both CPU and GPU acceleration.
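The REST API listens on localhost port 11434 by default. A minimal sketch of a generation request, assuming the Ollama server is running and the llama3 model has already been pulled (the fallback message is just for illustration):

```shell
# Send a single non-streaming prompt to the local Ollama API.
# Assumes the server is up on the default port and llama3 is installed.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}' \
  || echo "Ollama server not reachable on localhost:11434"
```

The response is a JSON object whose `response` field contains the model's answer; setting `"stream": true` instead returns one JSON chunk per token.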

As we're going to be using Ollama a lot, let's run through a workshop that explores how to use it effectively on Windows 11, macOS, and Linux.

We'll focus on practical commands, real-world applications, and cross-platform techniques.

Prerequisites:

  • Ollama already installed on your system

  • Basic familiarity with command line interfaces

  • At least 8GB RAM (16GB+ recommended for larger models)
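Before pulling larger models, it's worth checking how much RAM the machine actually has. A quick sketch for macOS and Linux (Windows users can check Task Manager > Performance > Memory instead):

```shell
# Report total RAM: 'free' on Linux, 'sysctl hw.memsize' on macOS.
if command -v free >/dev/null 2>&1; then
  free -g | awk '/^Mem:/ {print $2 " GB total"}'
elif command -v sysctl >/dev/null 2>&1; then
  sysctl -n hw.memsize | awk '{printf "%d GB total\n", $1 / 1073741824}'
else
  echo "Could not detect total RAM"
fi
```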

Getting Started

Let's start by discovering which models are available, then pulling and selecting the ones we'll use.

  1. List Ollama models.

# List all locally installed models
ollama list

# To browse models available for download, visit the Ollama library
# at https://ollama.com/library (the CLI itself has no command to
# list remote models)
  2. Let's pull several models to compare.

# Pull the default Llama 3 model
ollama pull llama3

# Pull Mistral model
ollama pull mistral

# For code-specific tasks
ollama pull codellama
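Once the models are pulled, you can compare them by passing the same prompt to each non-interactively (the prompt text here is just an example; the guard keeps the script from failing on machines without Ollama):

```shell
# Ask the same question of two models to compare their answers.
# Assumes llama3 and mistral have been pulled with the commands above.
if command -v ollama >/dev/null 2>&1; then
  ollama run llama3 "Explain recursion in one sentence."
  ollama run mistral "Explain recursion in one sentence."
else
  echo "ollama is not on PATH"
fi
```

Running `ollama run <model>` without a prompt drops you into an interactive chat session with that model instead.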
  3. Examine model details.

# Get detailed information about a model
ollama show llama3

# Check model parameters (grep is available on macOS/Linux and in
# Git Bash; in Windows PowerShell, use Select-String instead)
ollama show llama3 | grep "parameter"
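The same check can be looped over every model we pulled earlier to compare their sizes side by side. A sketch, assuming llama3, mistral, and codellama are installed locally (it degrades gracefully when a model or Ollama itself is missing):

```shell
# Print the parameter line from 'ollama show' for each pulled model.
for m in llama3 mistral codellama; do
  if ollama show "$m" >/dev/null 2>&1; then
    echo "== $m =="
    ollama show "$m" | grep -i "parameter"
  else
    echo "== $m == (not installed or ollama unavailable)"
  fi
done
```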
