
Workflows

Let flows do the work ..


Workflows Setup

You have several options for how you wish to deploy the Agents Docker containers:

  • Docker Desktop on Windows - deploy containers in Windows

  • WSL + Docker - deploy containers in a Linux OS running as a subsystem on Windows

  • Linux + Docker - deploy containers on Linux

  • MacOS + Docker - deploy containers on MacOS

As most automated pipelines run in a Linux OS, the Projects / Workshops are optimized for WSL + Docker.

| Container | Description |
| --- | --- |
| Ollama | Local LLM (Large Language Model) serving with both CPU and GPU options |
| Open WebUI | A web interface for interacting with Ollama models |
| Qdrant | Vector database for semantic search capabilities |
| Postgres | Database for storing n8n workflows and data |
| n8n | A workflow automation platform that can connect to various services |
| FlowiseAI | Low-code platform for building AI workflows |
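For orientation, a compose file wiring these services together might look roughly like the sketch below. This is not the workshop's actual docker-compose.yml - the image tags and port mappings are assumptions inferred from the URLs used elsewhere in this guide:

services:
  ollama:
    image: ollama/ollama              # assumed image; CPU and GPU variants exist
    ports:
      - "11434:11434"
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                   # this guide accesses the UI on http://localhost:3000
    environment:
      OLLAMA_BASE_URL: http://ollama:11434
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"
  postgres:
    image: postgres:16                # assumed version; credentials come from .env
    env_file: .env
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
  flowise:
    image: flowiseai/flowise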

Open WebUI

Open WebUI is a feature-rich and user-friendly self-hosted AI platform designed to operate entirely offline. It supports various LLM runners like Ollama and OpenAI-compatible APIs, with a built-in inference engine for RAG, making it a powerful AI deployment solution.

The easiest installation method is to pull the pre-built set of Docker images.

  1. Clone the Workshop--LLM git repository.

gh repo clone jporeilly/Workshop--LLM

  2. Create a GenAI directory.

cd
mkdir ~/GenAI

  3. Copy over the GenAI stack files.

cd ~/GenAI
cp -rvpi ~/Workshop--LLM/GenAI/* ~/Workshop--LLM/GenAI/.* .
ls -al

Deploy Open WebUI & Supporting Apps

  1. Change to the deployment directory.

cd
cd GenAI/

  2. Rename .env.example to .env.

mv .env.example .env

Take a look at the .env file and if required change the Postgres credentials.
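The Postgres variables will look something like the snippet below - the exact variable names and defaults are set in the repo's .env.example, so treat these as hypothetical placeholders:

POSTGRES_USER=postgres        # hypothetical placeholder - check .env.example
POSTGRES_PASSWORD=change-me   # hypothetical placeholder
POSTGRES_DB=n8n               # hypothetical placeholder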


Open WebUI

Ensure the WSL2 Ubuntu + Ollama + CUDA setup steps have been completed if you're running on a GPU.

  1. Check you have the required resources.

watch nvidia-smi

  2. Deploy the containers.

cd
cd GenAI/
docker-compose up -d

  3. Once the containers are running, access Open WebUI at http://localhost:3000.

  4. Log in to Get Started.

  5. Enter your details to create an Admin account.
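If the login page doesn't come up, a quick way to check the stack from the deployment directory (service names depend on the compose file, so adjust the assumed open-webui name as needed):

cd ~/GenAI
docker-compose ps                    # all services should show as Up
docker-compose logs -f open-webui    # tail the web UI logs (assumed service name)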

User Roles & Privacy

  • Admin Creation: The first account created on Open WebUI gains Administrator privileges, controlling user management and system settings.

  • User Registrations: Subsequent sign-ups start with Pending status, requiring Administrator approval for access.

  • Privacy and Data Security: All your data, including login details, is locally stored on your device. Open WebUI ensures strict confidentiality and no external requests for enhanced privacy and security.

    • All models are private by default. Models must be explicitly shared via groups or by being made public. If a model is assigned to a group, only members of that group can see it. If a model is made public, anyone on the instance can see it.


Updating

With Watchtower, you can automate the update process for all your containers.

To update your local Docker installation to the latest version, you can either use Watchtower or manually update the container.

 docker run -d \
--name watchtower \
-v /var/run/docker.sock:/var/run/docker.sock \
containrrr/watchtower

By default, Watchtower will run a check every 24 hours from the time you deployed the container.
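If you'd rather trigger an immediate one-off check instead of waiting, Watchtower's --run-once flag does exactly that:

docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  --run-once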

Follow the instructions below to set a schedule:

  1. On the left sidebar in Portainer, click on Stacks > Add stack.

  2. In the Name field, type in watchtower.

  3. Copy and paste the code below into the Portainer Stacks Web editor.

services:
  watchtower:
    image: containrrr/watchtower:latest
    container_name: watchtower
    hostname: watchtower
    mem_limit: 512m
    mem_reservation: 128m
    cpu_shares: 512
    security_opt:
      - no-new-privileges:true
    read_only: true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      TZ: Europe/London
      WATCHTOWER_CLEANUP: "true" # Remove old images after updating
      WATCHTOWER_REMOVE_VOLUMES: "false" # Don't remove attached volumes after updating
      DOCKER_API_VERSION: "1.45" # Docker API version 1.45 for Docker Engine version 26.1.3
      WATCHTOWER_INCLUDE_RESTARTING: "true" # Include restarting containers in the update scope
      WATCHTOWER_INCLUDE_STOPPED: "false" # Don't update stopped containers
      WATCHTOWER_SCHEDULE: "0 0 9 * * 1" # Six-field cron (with seconds): scan & update every Monday at 9am
      WATCHTOWER_LABEL_ENABLE: "false"
      WATCHTOWER_ROLLING_RESTART: "true"
      WATCHTOWER_TIMEOUT: 30s
      WATCHTOWER_LOG_FORMAT: pretty
    restart: on-failure:5

  4. Scroll down and click the blue 'Deploy the stack' button.

To find out which containers have been updated, look at the watchtower logs.
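For example:

docker logs -f watchtower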

Ollama Docker Connection

To manage your Ollama instance in Open WebUI, follow these steps.

  1. Ensure your Ollama server is up and running - PowerShell:

Get-NetTCPConnection -LocalPort 11435 -ErrorAction SilentlyContinue

  2. Log into Open WebUI.

  3. Navigate to: Settings > Connections.

  4. Click on the + sign and configure a connection to Ollama.

Use the container name in the URL - http://ollama:11434

The API key can be anything as this is a local installation.
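From a WSL/Linux shell you can also confirm the API is reachable before wiring it into Open WebUI. This assumes the compose file publishes Ollama's default port 11434 to the host - adjust if yours maps a different host port (such as 11435):

curl http://localhost:11434/api/version   # returns the Ollama version as JSON
curl http://localhost:11434/api/tags      # lists locally available models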


Pull Models

Follow the instructions below to check the Ollama connection and pull models.

  1. Go to Admin Settings in Open WebUI.

  2. Navigate to Connections > Ollama > Manage (click the wrench icon).

  3. You can also download models using the Model Selector.


Using the Ollama CLI to pull Models

You can directly access the Ollama container and pull models using the CLI:

# Connect to the running Ollama container
docker exec -it ollama /bin/sh

# Once inside the container, pull the model you want
ollama pull modelname

Replace modelname with the specific model you want to add (like llama3, mistral, gemma, etc.).
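To confirm the pull worked, list the models now available to the server:

docker exec -it ollama ollama list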

Open WebUI

Now that Open WebUI is up and running, it's time to take a look at its extensive features.

Explore the Features ..

The Knowledge section is a storage area within Open WebUI where you can save specific pieces of information or data points.

Think of it as a reference library that Open WebUI can use to make its responses more accurate and relevant to your needs.


  • Tools extend the abilities of LLMs, allowing them to collect real-world, real-time data like weather, stock prices, etc.

  • Functions extend the capabilities of the Open WebUI itself, enabling you to add new AI model support (like Anthropic or Vertex AI) or improve usability (like creating custom buttons or filters).

  • Pipelines are more for advanced users who want to transform Open WebUI features into API-compatible workflows—mainly for offloading heavy processing.

Tools are like plugins that the LLM can use to gather real-world, real-time data. So, with a "weather tool" enabled, the model can go out on the internet, gather live weather data, and display it in your conversation.

  • Real-time weather predictions 🛰️.

  • Stock price retrievers 📈.

  • Flight tracking information ✈️.

Download and import manually

Navigate to the community site: https://openwebui.com/tools/

  1. Click on the Tool you wish to import.

  2. Click the blue "Get" button in the top right-hand corner of the page.

  3. Click "Download as JSON export".

  4. You can now upload the Tool into Open WebUI by navigating to Workspace => Tools and clicking "Import Tools".

Import via your Open WebUI URL

Navigate to the community site: https://openwebui.com/tools/

  1. Click on the Tool you wish to import.

  2. Click the "Get" button.

  3. Enter the IP address of your Open WebUI instance - http://localhost:3000 - and click "Import to WebUI", which will automatically open your instance and allow you to import the Tool.

  4. Click Save.


How can I use Tools?

Once installed, Tools can be used by assigning them to any LLM that supports function calling and then enabling that Tool.

  1. Navigate to Workspace => Models.

  2. Click the pencil icon to edit the model settings, scroll down to the Tools section and check any Tools you wish to enable.

  3. Once done, you must click Save.

Now that Tools are enabled for the model, you can click the "+" icon when chatting with an LLM to use various Tools.


Ngrok

Ngrok is a powerful developer tool that creates secure tunnels between local development environments and the public internet. It essentially allows developers to expose local servers to the internet, even when they're behind NAT or firewalls, making it invaluable for development, testing, and demonstrations.

At its core, Ngrok works by creating a secure tunnel from your local machine to Ngrok's servers, which then provide a public URL that can route traffic back to your local application. For example, if you're running a local web server on port 3000, Ngrok can create a public URL like https://92832de0.ngrok.io that forwards all traffic to your local server.

This is particularly useful when developing webhook integrations, testing mobile applications, sharing work-in-progress features with clients, or debugging applications in different environments.

ngrok will allow public access to your personal computer (via the port you specify), even if your computer is behind a firewall.

If you are uncomfortable with the security implications, you should skip the ngrok steps and only access your servers locally via localhost.

  1. Install Ngrok agent.

Linux

curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc | \
  sudo gpg --dearmor -o /etc/apt/keyrings/ngrok.gpg && \
  echo "deb [signed-by=/etc/apt/keyrings/ngrok.gpg] https://ngrok-agent.s3.amazonaws.com buster main" | \
  sudo tee /etc/apt/sources.list.d/ngrok.list && \
  sudo apt update && sudo apt install ngrok

Windows - download and install the agent from the ngrok download page.

MacOS - use Homebrew

brew install ngrok

  2. Test everything is working by running ngrok -h.

  3. Sign up for an Ngrok account - it's free - and copy your authtoken from the dashboard.

  4. Run this command to add the authtoken in your terminal.

ngrok config add-authtoken TOKEN

  5. Start the ngrok tunnel by running the following command - Open WebUI listens on port 3000.

ngrok http 3000

Now open the Forwarding URL in your browser and you should see your local web service.

  • That URL is available to anyone in the world.

  • You are now using TLS (notice the 🔒 in your browser window) with a valid certificate without making any changes to your local service.

For free accounts, a static Ngrok domain can also be assigned to the application.

  1. Log into the Ngrok dashboard and, from the menu, select: Universal Gateway > Domains.

  2. Click on the ... > Start Tunnel.

  3. Copy and paste the command - adjust for app port 3000.

ngrok http --url=warm-anchovy-growing.ngrok-free.app 3000

The Open WebUI chatbot will always be available on the static URL:

http://warm-anchovy-growing.ngrok-free.app


Traffic Dashboard

To inspect the traffic, open the Ngrok traffic inspector at http://localhost:4040/inspect/http.

Qdrant

Qdrant is the vector database the stack uses for semantic search.

  1. Open the Qdrant dashboard at http://localhost:6333/dashboard.
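A quick sanity check that Qdrant is answering on its REST API (default port 6333):

curl http://localhost:6333/collections   # returns the list of collections as JSON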

⚡️ Quick start and usage

The core of the Self-hosted AI Starter Kit is a Docker Compose file, pre-configured with network and storage settings, minimizing the need for additional installations. After completing the installation steps above, simply follow the steps below to get started.

  1. Open http://localhost:5678/ in your browser to set up n8n. You'll only have to do this once. You are NOT creating an account with n8n in the setup here; it is only a local account for your instance!

  2. Open the included workflow: http://localhost:5678/workflow/vTN9y2dLXqTiDfPT

  3. Create credentials for every service:

    Ollama URL: http://ollama:11434 - using the container name

    Postgres: use DB, username, and password from .env. Host is postgres

    Qdrant URL: http://qdrant:6333 (API key can be whatever since this is running locally)

    Google Drive: Follow this guide from n8n. Don't use localhost for the redirect URI, just use another domain you have, it will still work! Alternatively, you can set up local file triggers.

  4. Select Test workflow to start running the workflow.

  5. If this is the first time you're running the workflow, you may need to wait until Ollama finishes downloading Llama3.1. You can inspect the docker console logs to check on the progress.

  6. Make sure to toggle the workflow as active and copy the "Production" webhook URL!

  7. Open http://localhost:3000/ in your browser to set up Open WebUI.

  8. Go to Workspace -> Functions -> Add Function -> give it a name + description, then paste in the code from n8n_pipe.py. The function is also published on Open WebUI's site.

  9. Click on the gear icon and set the n8n_url to the production URL for the webhook you copied in a previous step.

  10. Toggle the function on and now it will be available in your model dropdown in the top left!
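For step 5, the model download progress shows up in the Ollama container's logs (assuming the container is named ollama, as elsewhere in this guide):

docker logs -f ollama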

To open n8n at any time, visit http://localhost:5678/ in your browser. To open Open WebUI at any time, visit http://localhost:3000/.

With your n8n instance, you'll have access to over 400 integrations and a suite of basic and advanced AI nodes such as AI Agent, Text classifier, and Information Extractor nodes. To keep everything local, just remember to use the Ollama node for your language model and Qdrant as your vector store.
