Jan.ai
One-stop shop to get up and running with local AI models.
In this workshop we're going to implement a simple solution using existing open-source applications. You'll learn how to:
install Jan.ai on Windows
choose the right model for your hardware
understand the difference between Q4, Q8, and GGUF models
run Qwen, DeepSeek AI, and other AI models locally
This section is for those who have the hardware resources to run local LLMs.
Most people think running AI models locally is complicated. It's not. Anyone can run powerful AI models like DeepSeek, Llama, and Mistral on their own computer.
This guide will show you how, even if you've never written a line of code.
Download Jan.ai.
Double-click the .exe to install.
Choose a model that fits your hardware: click the Hub option in the left menu bar.
Jan will flag a model as "Slow on your device" or "Not enough GPU / RAM" based on your system specifications.
| Hardware | Model size | What to expect |
| --- | --- | --- |
| Laptop, 16GB RAM | 3B-7B models | Like having a helpful assistant |
| Laptop, 32GB RAM | 7B-13B models | Better at complex tasks like coding and analysis |
| Desktop, 64GB RAM | 13B+ models | Great for professional work and advanced tasks |
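The sizing above follows from simple arithmetic: a model's weights take roughly (parameters × bits-per-weight ÷ 8) bytes, so a Q4 model needs about half the memory of the same model at Q8. Here's a rough sketch of that estimate; the 1.2 overhead factor for context and runtime is an assumption for illustration, not a figure from Jan's documentation:

```python
def estimate_ram_gb(params_billions: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Rough RAM estimate for a quantized model.

    params_billions: model size, e.g. 7 for a 7B model
    bits_per_weight: 4 for Q4, 8 for Q8, 16 for unquantized fp16
    overhead: fudge factor for context and runtime (an assumption)
    """
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

print(f"7B  @ Q4: {estimate_ram_gb(7, 4):.1f} GB")   # ~4.2 GB
print(f"7B  @ Q8: {estimate_ram_gb(7, 8):.1f} GB")   # ~8.4 GB
print(f"13B @ Q4: {estimate_ram_gb(13, 4):.1f} GB")  # ~7.8 GB
```

This is why a 16GB laptop is comfortable with 3B-7B models at Q4, while 13B models want 32GB or more once the OS and other apps take their share.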
Once the download finishes, you're good to go.
If you're lucky enough to have a dedicated graphics card, you can enable GPU Acceleration under Settings > Advanced Settings.
Jan uses llama.cpp, an inference engine that makes AI models run efficiently on regular computers. It's like a translator that helps AI models speak your computer's language, making them run faster and use less memory.
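Jan can also expose the running model through a local, OpenAI-compatible API server, so other tools on your machine can talk to it. A minimal sketch of building such a request is below; the URL, port, and model name are assumptions for illustration, so check the Local API Server settings in your own install:

```python
import json
import urllib.request

# Assumed endpoint -- Jan's local server address may differ on your machine.
JAN_URL = "http://localhost:1337/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# "qwen2.5-7b-instruct" is a placeholder -- use a model ID you downloaded.
payload = build_chat_request("qwen2.5-7b-instruct", "Say hello in one sentence.")
print(json.dumps(payload, indent=2))

# To actually send it (requires Jan's local server to be running):
# req = urllib.request.Request(JAN_URL, data=json.dumps(payload).encode(),
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
```

Because the API shape mirrors OpenAI's, many existing clients and libraries work against a local model with only the base URL changed.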
Tool configurations (like settings)
This feature is currently experimental and must be enabled in Advanced Settings.