Quickstart
Ollama simplifies running LLMs locally by handling model downloads, quantization, and execution seamlessly.
Download and install Ollama from the official website at https://ollama.com.
It might sound obvious, but select models that can actually run on your hardware. Also bear in mind that models are optimized for particular tasks: reasoning, math, code, tool use, and so on.
Double-click on the executable to install.
Select the model you want to download: choose the Models option from the menu, or pull it from the command line as shown below.
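A minimal sketch of the command-line route; the `llama3.2` tag is only an example, substitute any model from the Ollama library:

```bash
# Pull a model from the Ollama library ("llama3.2" is an example tag)
ollama pull llama3.2

# List the models installed locally
ollama list
```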
Test the setup with the model you downloaded. That's it, you're good to go. Let's ask the model a question:
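For example, assuming the model pulled above (again, `llama3.2` is just a placeholder name):

```bash
# Ask a one-off question from the command line
ollama run llama3.2 "Why is the sky blue?"
```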
To check the Ollama server:
Switch to your user directory.
Check that the Ollama server is up and running, as in the sketch below.
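A quick sketch, assuming Ollama's default setup of listening on localhost port 11434:

```bash
# Move to your home directory
cd ~

# The server replies "Ollama is running" when it is up (default port 11434)
curl http://localhost:11434

# If nothing answers, start the server
ollama serve
```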
You can also send the question to the server's REST API as a JSON object.
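An example request against the local API's generate endpoint; the model name and prompt are placeholders:

```bash
# POST a prompt to the generate endpoint; "stream": false returns a single JSON reply
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```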