🦙Quickstart
Ollama
Ollama simplifies running LLMs locally by handling model downloads, quantization, and execution seamlessly.
Download Ollama from the official website and double-click the downloaded executable to install it.
Models
It might sound obvious, but select models that can actually run on your hardware. Also bear in mind that models are optimized for particular tasks: reasoning, math, code, tool use, and so on.
Choose the model you want to download from the Models menu on the Ollama website.

Test the setup by downloading and running the model.
ollama run deepseek-r1:7b

That's it, you're good to go. Let's ask the model a question.
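You can confirm the download with ollama list, and you can also pass a prompt directly on the command line instead of opening the interactive session (the prompt text below is just an example):
ollama list
ollama run deepseek-r1:7b "Solve: 2 + 2"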

Some Useful Commands
/set Set session variables
/show Show model information
/load <model> Load a session or model
/save <model> Save your current session
/clear Clear session context
/bye Exit
/?, /help Help for a command
/? shortcuts Help for keyboard shortcuts
Use """ to begin a multi-line message.
To start the Ollama server manually (if it is not already running in the background)
ollama serve
Switch to your home directory.
cd $HOME
Check that the Ollama server is listening on its default port, 11434.
Get-NetTCPConnection -LocalPort 11434 -ErrorAction SilentlyContinue
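You can also probe the HTTP API directly: a GET on the root returns "Ollama is running", and /api/tags lists the models installed locally.
curl.exe http://localhost:11434
curl.exe http://localhost:11434/api/tags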

You can also send the question to the chat API as a JSON request.
$jsonContent = '{
"model": "deepseek-r1",
"messages": [{ "role": "user", "content": "Solve: 2 + 2" }],
"stream": false
}'
[System.IO.File]::WriteAllText("$HOME\request.json", $jsonContent)
curl.exe -X POST http://localhost:11434/api/chat -d "@$HOME\request.json" -H "Content-Type: application/json"
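If you prefer to stay in PowerShell and parse the reply, something like this should work (a minimal sketch, assuming the default port and the deepseek-r1:7b tag installed above):
# Send the chat request and print only the assistant's reply
$response = Invoke-RestMethod -Uri "http://localhost:11434/api/chat" -Method Post -ContentType "application/json" -Body $jsonContent
$response.message.content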
