Workflows
Let flows do the work…
You have several options for how to deploy the Agents Docker containers:
Docker Desktop on Windows - deploy containers in Windows
WSL + Docker - deploy containers in a Linux OS running as a subsystem on Windows
Linux + Docker - deploy containers on Linux
macOS + Docker - deploy containers on macOS
As most automated pipelines run in a Linux OS, the Projects / Workshops are optimized for WSL + Docker.
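If you haven't enabled WSL yet, a typical bootstrap from an elevated PowerShell prompt looks like this (assuming a recent Windows 10/11 build where the `wsl` CLI is available):

```powershell
# Install WSL 2 with an Ubuntu distribution (requires a reboot)
wsl --install -d Ubuntu
```

After rebooting, install Docker Desktop and enable its WSL 2 backend so containers run inside the Ubuntu distribution.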
The GenAI stack bundles the following services:
Ollama - local LLM (Large Language Model) serving, with both CPU and GPU options
Open WebUI - a web interface for interacting with the Ollama models
Qdrant - a vector database for semantic search capabilities
PostgreSQL - a database for storing n8n workflows and data
n8n - a workflow automation platform that can connect to various services
A low-code platform for building AI workflows
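To make the moving parts concrete, here is a docker-compose sketch of how such a stack typically fits together. The service names, images, and port mappings below are assumptions based on each project's published defaults, not the exact GenAI stack files:

```yaml
# Sketch only - images and ports are the upstream defaults, not the actual GenAI stack file
services:
  ollama:
    image: ollama/ollama                         # local LLM serving (add GPU device reservations if available)
    ports:
      - "11434:11434"
  open-webui:
    image: ghcr.io/open-webui/open-webui:main    # web interface for the Ollama models
    ports:
      - "3000:8080"
    depends_on:
      - ollama
  qdrant:
    image: qdrant/qdrant                         # vector database for semantic search
    ports:
      - "6333:6333"
  postgres:
    image: postgres:16                           # stores n8n workflows and data
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  n8n:
    image: n8nio/n8n                             # workflow automation platform
    ports:
      - "5678:5678"
    depends_on:
      - postgres
```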
Open WebUI is a feature-rich, user-friendly, self-hosted AI platform designed to operate entirely offline. It supports various LLM runners such as Ollama and OpenAI-compatible APIs, with a built-in inference engine for RAG, making it a powerful AI deployment solution.
The easiest installation method is to pull the pre-built Docker images and deploy the stack with Docker Compose.
Create a GenAI directory and copy over the GenAI stack files.
Change to the deployment directory.
Rename .env.example to .env.
Check you have the required resources.
Deploy the containers.
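As a rough shell sketch of the steps above (the directory name ~/genai and the source path are assumptions; substitute your own):

```bash
# Create a GenAI directory and copy over the stack files
mkdir -p ~/genai
cp -r /path/to/genai-stack/* ~/genai/

# Change to the deployment directory
cd ~/genai

# Rename .env.example to .env
mv .env.example .env

# Check you have the required resources (running daemon, disk space, memory)
docker info
df -h
free -h

# Deploy the containers and verify they are running
docker compose up -d
docker compose ps
```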
After the containers are running, access Open WebUI at http://localhost:3000.
Log in to Get Started.
Enter your details to create an Admin account.
Admin Creation: The first account created on Open WebUI gains Administrator privileges, controlling user management and system settings.
User Registrations: Subsequent sign-ups start with Pending status, requiring Administrator approval for access.
Privacy and Data Security: All your data, including login details, is locally stored on your device. Open WebUI ensures strict confidentiality and no external requests for enhanced privacy and security.
All models are private by default. Models must be explicitly shared via groups or by being made public. If a model is assigned to a group, only members of that group can see it. If a model is made public, anyone on the instance can see it.
To update your local Docker installation to the latest version, you can either use Watchtower or manually update the container.
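For the manual route, the sequence below is a sketch based on Open WebUI's standard docker run example; the container name, image tag, volume, and port mapping are assumptions to adjust for your deployment:

```bash
# Remove the existing container (chat data persists in the named volume)
docker rm -f open-webui

# Pull the latest image
docker pull ghcr.io/open-webui/open-webui:main

# Recreate the container with the same volume and port mapping
docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```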
On the left sidebar in Portainer, click Stacks > Add stack.
In the Name field, type watchtower.
Copy and paste the code below into the Portainer Stacks web editor.
Scroll down and click the blue 'Deploy the stack' button.
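The original stack code isn't reproduced here, but a minimal Watchtower stack commonly looks like the following; the daily --interval is an assumption, so tune it to your preferred update cadence:

```yaml
version: "3.8"
services:
  watchtower:
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # gives Watchtower access to manage other containers
    command: --cleanup --interval 86400             # remove old images; check for updates once a day
```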
Ensure the preceding steps have been completed.
With Watchtower, you can automate the update process for all your containers.
Navigate to the community site:
Once installed, Tools can be used by assigning them to any LLM that supports function calling and then enabling that Tool.
For example, if you're running a local web server on port 3000, Ngrok can create a public URL that forwards all traffic to your local server.
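Assuming the ngrok CLI is installed and your authtoken is configured, exposing that local port is a one-liner:

```bash
# Open a public HTTPS tunnel to the local service on port 3000
ngrok http 3000
```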
Sign up for an ngrok account - it's free.
Log in to the ngrok dashboard.
Open http://localhost:5678 in your browser to set up n8n. You'll only have to do this once. You are NOT creating an account with n8n in this setup; it is only a local account for your instance!
Open the included workflow and create credentials for every service:
Ollama URL: http://ollama:11434 - using the container name
Qdrant URL: http://qdrant:6333 (the API key can be anything, since this is running locally)
Google Drive: follow n8n's Google credentials guide. Don't use localhost for the redirect URI; just use another domain you have, and it will still work! Alternatively, you can set up local file triggers.
Open http://localhost:3000 in your browser to set up Open WebUI.
The function is also published on the Open WebUI community site.
To open n8n at any time, visit http://localhost:5678 in your browser.
To open Open WebUI at any time, visit http://localhost:3000.
With your n8n instance, you'll have access to over 400 integrations and a suite of basic and advanced AI nodes such as the AI Agent, Text Classifier, and Information Extractor nodes. To keep everything local, just remember to use the Ollama node for your language model and Qdrant as your vector store.