Jan.ai

A one-stop shop to get up and running with local AI models.

Introduction

In this workshop we're going to put together a simple local AI setup using existing open-source applications. You'll learn how to:

  • Install Jan.ai on Windows

  • Choose the right model for your hardware

  • Understand the difference between Q4, Q8, and GGUF models

  • Run Qwen, DeepSeek, and other AI models locally

Jan.ai

Most people think running AI models locally is complicated. It's not. Anyone can run powerful AI models like DeepSeek, Llama, and Mistral on their own computer.

This guide will show you how, even if you've never written a line of code.

Jan uses llama.cpp, an inference engine that makes AI models run efficiently on regular computers. It's like a translator that helps AI models speak your computer's language, so they run faster and use less memory.
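If you'd rather talk to your model from code once the steps below are done, Jan can also serve the model over a local OpenAI-compatible API. The snippet below is a minimal sketch, assuming you've enabled Jan's Local API Server in its settings and that it's listening on http://localhost:1337/v1 (a common default, but check your own settings); the model id is a placeholder for whichever model you download.

```python
# Minimal sketch: chat with a model that Jan is serving locally.
# Assumptions: Jan's Local API Server is enabled and listening on the
# address below, and "qwen2.5-coder-7b-instruct" is replaced with the
# model id shown in your Jan installation.
import requests

BASE_URL = "http://localhost:1337/v1"  # assumed default; check Jan's settings

response = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": "qwen2.5-coder-7b-instruct",  # placeholder model id
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a string."}
        ],
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```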

  1. Download Jan.ai from the official website, jan.ai.

Windows

  1. Double-click the downloaded .exe installer.

  2. Choose a model that fits your hardware: Click on the Hub option in the left menu bar.

Take a look at the different models and pick one that suits your machine.

For example, for coding: Qwen2.5 Coder 7B Instruct Q4.

Models

Machine (RAM)    Model Size        Description
Laptop, 16 GB    3B - 7B models    Like having a helpful assistant
Laptop, 32 GB    7B - 13B models   Better at complex tasks like coding and analysis
Desktop, 64 GB   13B+ models       Great for professional work and advanced tasks

Models in Jan's Hub come as GGUF files, the packaging format llama.cpp understands, and each is typically offered in several quantized versions. These are different versions of the same AI model, just packaged differently to work better on different computers:

  • Q4 versions: Like a "lite" version of an app - runs fast and works on most computers

  • Q6 versions: The "standard" version - good balance of speed and quality

  • Q8 versions: The "premium" version - highest quality but needs a more powerful computer
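To get a feel for why this matters, here is a rough back-of-the-envelope sketch of how much RAM a quantized model needs: roughly parameters times bits-per-weight divided by 8, plus some overhead for context and runtime. The function name, the bits-per-weight figures, and the overhead factor below are my own approximations, not exact values.

```python
# Rough RAM estimate for a quantized model (an approximation, not an exact formula).
def estimated_ram_gb(params_billions: float, bits_per_weight: float,
                     overhead: float = 1.2) -> float:
    """Very rough RAM estimate in GB: weights plus ~20% runtime overhead."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# Approximate average bits per weight: Q4 ~4.5, Q8 ~8.5 (assumed figures).
examples = [
    ("7B model, Q4", 7, 4.5),
    ("7B model, Q8", 7, 8.5),
    ("13B model, Q4", 13, 4.5),
]
for name, params, bits in examples:
    print(f"{name}: roughly {estimated_ram_gb(params, bits):.0f} GB of RAM")
```

Running this gives figures in the ballpark of 5 GB for a 7B Q4 model and around 9 GB for a 7B Q8 or 13B Q4 model, which is why the table above pairs a 16 GB laptop with 3B-7B models.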

  3. Once the model has downloaded, you're good to go.

Jan.ai - System Monitor
  4. If you're lucky enough to have a graphics card, you can enable GPU Acceleration under Settings > Advanced Settings. (A quick way to check for an NVIDIA GPU from the command line is sketched below.)

Advanced settings
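Before flipping that switch, it can help to confirm your machine actually has a usable NVIDIA GPU with drivers installed. The check below is just an illustrative sketch, not part of Jan itself, and it only covers NVIDIA cards (it relies on the standard nvidia-smi tool that ships with NVIDIA drivers).

```python
# Quick check for an NVIDIA GPU using the nvidia-smi tool (NVIDIA-only).
import shutil
import subprocess

if shutil.which("nvidia-smi"):
    # Ask nvidia-smi for the GPU name and total memory in plain CSV form.
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    print(result.stdout.strip() or "nvidia-smi found, but no GPU reported.")
else:
    print("nvidia-smi not found: no NVIDIA GPU drivers detected.")
```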
