🦙Quickstart

Ollama

Ollama simplifies running LLMs locally by handling model downloads, quantization, and execution seamlessly.

  1. Download and install Ollama from the official website (https://ollama.com).

  2. Double-click the downloaded executable to install Ollama.

  3. Select a model to download by choosing the Models option from the menu.

  4. Test the setup by downloading and running the model:

ollama run deepseek-r1:7b
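
If you want to confirm the download finished, ollama list prints the models available locally (an optional check):

ollama list
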
  5. That's it, you're good to go. Ask the model a question and chat away.
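
If you prefer a one-off question from the shell instead of the interactive prompt, the prompt can also be passed straight to ollama run (a quick sketch using the model pulled above):

ollama run deepseek-r1:7b "Solve: 2 + 2"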

Some Useful Commands

  /set            Set session variables
  /show           Show model information
  /load <model>   Load a session or model
  /save <model>   Save your current session
  /clear          Clear session context
  /bye            Exit
  /?, /help       Help for a command
  /? shortcuts    Help for keyboard shortcuts

Use """ to begin a multi-line message.
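
As a rough sketch of how a few of these look inside the interactive prompt (>>> is Ollama's prompt, ... its multi-line continuation, and my-session is just an example name):

>>> """Explain in two sentences
... why the sky appears blue."""
>>> /show info
>>> /save my-session
>>> /bye
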
  6. To start the Ollama server (if it is not already running):

ollama serve
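
Once the server is up, ollama ps lists the models currently loaded in memory (an optional check):

ollama ps
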
  7. Switch to your user directory:

cd $HOME
  8. Check that the Ollama server is up and running (it listens on port 11434 by default):

Get-NetTCPConnection -LocalPort 11434 -ErrorAction SilentlyContinue
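
As an alternative quick check (a small sketch assuming the default port), requesting the root endpoint should return the text "Ollama is running":

Invoke-RestMethod http://localhost:11434
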
  9. You can also send the question to the API as a JSON object:

$jsonContent = '{
  "model": "deepseek-r1:7b",
  "messages": [{ "role": "user", "content": "Solve: 2 + 2" }],
  "stream": false
}'

[System.IO.File]::WriteAllText("$HOME\request.json", $jsonContent)
curl.exe -X POST http://localhost:11434/api/chat -d "@$HOME\request.json" -H "Content-Type: application/json"
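
The same request can also be sent without a temporary file by building the body in PowerShell. A minimal sketch, assuming the non-streaming response shape where the reply sits under message.content:

# Build the request body as a PowerShell object and convert it to JSON
$body = @{
    model    = "deepseek-r1:7b"
    messages = @(@{ role = "user"; content = "Solve: 2 + 2" })
    stream   = $false
} | ConvertTo-Json -Depth 5

# POST it to the local chat endpoint and print the assistant's reply
$response = Invoke-RestMethod -Uri "http://localhost:11434/api/chat" -Method Post -ContentType "application/json" -Body $body
$response.message.content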

Useful, as we now have an endpoint that can be used for our Chat UI frontend.
