Flowise is an open-source platform designed to build and deploy customized AI flows with a user-friendly drag-and-drop interface. It allows users to create complex AI applications without extensive coding knowledge.
The core feature of Flowise is its visual flow builder, which enables users to connect various AI components like large language models (LLMs), embedding models, and vector databases into functional workflows. Users can incorporate popular models like OpenAI's GPT series, Anthropic's Claude, and open-source alternatives. The platform supports multiple vector databases including Pinecone, Chroma, and Supabase.
Flowise offers various deployment options, including self-hosting on personal hardware, cloud deployment, or using Flowise Cloud for a managed experience. It supports API endpoints that allow integration with external applications and websites. The platform is highly extensible through custom components and has an active community contributing to its development.
Common use cases include building chatbots with memory and context awareness, creating knowledge bases with document retrieval capabilities, developing AI assistants for specific domains, and prototyping AI workflows before production implementation. Flowise is particularly valuable for developers and businesses looking to experiment with AI capabilities without committing to complex infrastructure or extensive development resources.
Ensure the environment is up and running.
Chatflows
Basic Chatbot
Let's kick off with a Basic Chatbot that uses a local Ollama model and remembers the recent conversation.
Ollama
Let's check that the llama3.2 model is available:
ollama list
If the llama3.2:latest model is in the list, start it (if not, this command will pull it first):
ollama run llama3.2:latest
Click: 'Add New'
Click on the Save icon > Save as 'Basic Chatbot'.
Click on the plus sign to add a node and search for: Tool Agent.
Drag & drop onto the canvas.
Repeat for Buffer Window Memory and ChatOllama.
Connect the Buffer Window Memory & ChatOllama to the corresponding Tool Agent Inputs.
Configure the Memory Window Size: 5 - it will buffer the last 5 chats
ChatOllama model: llama3.2:latest & Temperature 0.5
Save.
Finally .. Just click on the Chat icon and ask a question ..
HTML Compiler
We've created our Chatbot .. now it's time to deploy and test ..
Click on the Code block icon - top right.
Copy the Popup HTML and click on the following link:
Paste the code just below the closing </body> tag.
Finally .. click on the Run button and you'll notice a blue Chat bubble.
Build a simple Research Agent ..
Even though this is an Agent, it's created using Chatflows.
Let's create a new Agent. Click on: '+ Add New' blue button in top right.
Save as: 'Research Agent'.
Add a Tool Agent. Click on the blue + sign and select Agents > Tool Agent.
Drag & drop onto the canvas.
Next add a Chat Model. Again, Click on the blue + sign and select Chat Models > ChatOllama.
Use the options at the bottom to resize canvas.
Connect the ChatOllama to the Tool Agent, by dragging a connector from ChatOllama to Tool Calling Chat Model.
Configure the model as illustrated.
Add a Buffer Window Memory to hold the chat context. Increase the size to 20 - so it holds the last 20 messages.
Before we progress any further, let's Save.
Bring up the Chat & say 'hello' .. however our Agent can't respond with up-to-date information, as its knowledge is limited to when the model was trained.
Let's add some Tools - Calculator and BraveSearch API.
Select Tools > Calculator and then BraveSearch API.
Enter the credentials for BraveSearch API.
Test with a few questions.
The icons displayed in the response indicate which tool was used: the calculator for the maths question and brave-search for my goat's cheese and onion recipe ..
Embed or API
Click on the </> menu option in the top right.
Give it a go ..!
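For the API route, every chatflow exposes a prediction endpoint that external applications can call. Here's a minimal sketch in Python - the host, port and chatflow ID are placeholders, so copy the exact URL shown in the </> dialog of your own instance:

```python
import requests

# Placeholder values - copy the real endpoint from the </> dialog in Flowise.
API_URL = "http://localhost:3000/api/v1/prediction/<your-chatflow-id>"

def query(question: str) -> dict:
    # Flowise expects a JSON body with a "question" field.
    response = requests.post(API_URL, json={"question": question})
    response.raise_for_status()
    return response.json()

print(query("What is 21 * 2?"))
```

The agent's answer comes back in the response's text field.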
At some stage you will have to deal with unstructured data. This can be tricky, especially if you need to output the results in a specific format - CSV, JSON, etc ..
In Open WebUI download the mistral:7b model.
Let's create a new Chain. Click on: '+ Add New' blue button in top right.
Save as: 'PDF Parser'.
Add an LLM Chain. Click on the blue + sign and select Chains > LLM Chain.
Drag & drop onto the canvas.
Note that the LLM Chain has an option to define the Output Parser
Next add a Chat Model. Again, Click on the blue + sign and select Chat Models > ChatOllama.
Connect the ChatOllama to the LLM Chain, by dragging a connector from ChatOllama to LLM Chain.
Configure the model as illustrated.
The ChatOllama node has an option to upload images. Check that the model has image reasoning capabilities: mistral:7b
Use the options at the bottom to resize canvas.
Let's add a Prompt Template: Prompts > Prompt Template, and connect to LLM Chain.
A Prompt Template is similar to the System Prompt in the Chat Model.
Before we progress any further, let's Save.
In the Prompt Template, expand the Template. We're going to add some instructions based on the PDF.
Take a look at the main section of the sample-invoice.pdf:
Invoice Number
Order Number
Invoice Date
Due Date
Tax
Total
and so on ...
We can instruct the model to extract the required information.
To include the invoice content within the prompt we need to add a variable: {invoice}. It can be anything that makes sense..!
In the Template let's add some instructions.
Save .. now we need to associate the PDF with the {invoice} variable. Click on Format Prompt Values.
Save .. To enable the file to be uploaded into the chat: Settings > Configuration.
Enable File Upload .. and Save.
Let's give it a go ..!
The sample-invoice.pdf is located in the Workshop--LLM/Data folder.
The purpose of this Chain is to be called from an external system to parse the unstructured data source and extract the required information to be consumed further downstream as a JSON object.
Add a Structured Output Parser (Output Parsers > Structured Output Parser) and connect it to the LLM Chain's Output Parser input. Enable Autofix and click on: 'Additional Parameters'.
The Additional Parameters setting enables you to define the fields and data type in the JSON object. The model uses the description to map the value to the 'Property'.
Double-click in the Property / Type / Description fields to edit the values and select the type - an illustrative result follows the table below.
| Property | Type | Description |
| --- | --- | --- |
| invoice_id | string | Invoice Number |
| order_id | number | Order Number |
| service_type | string | Service |
| due_date | string | Due Date |
| total | number | Total |
And finally ..
this workflow could be used in a number of use cases, from document classification to structured data extraction.
This agent builds on the Research Agent .. The search is refined and the prompt format structured to return company and financial data.
Pt1: KB - Simple RAG
This one was fun .. Building a Pentaho Knowledge Base ..
Each product should have its own Knowledge Base - Pentaho Data Integration
To save on costs, the Template 'Pt1: Pentaho Knowledge Base' uses the mistral:7b model.
You will need to load a model that supports Tools.
You should be familiar with FlowiseAI before tackling this flow, as we're going to focus on just a few key areas and concepts.
Let's start with loading the PDF document into our Qdrant vector database.
Obviously we're going to require:
a PDF loader: upload the PDF
a splitter: to chunk the text - Recursive Character
an embedding model: to create the vectors - nomic-embed-text
a vector database: Qdrant
Drag & drop the Pdf File node onto the canvas.
Click Upload File and navigate to the PDF.
Select the option: One document per file
One document per page: each page of the original PDF is extracted and treated as its own standalone document.
One document per file: each file will be treated as a separate, complete document rather than combining multiple files into a single document.
The Recursive Character Text Splitter is a technique used in natural language processing and document handling to break down long text documents into smaller, manageable chunks while preserving context and meaning.
Unlike simple character or token splitters that might cut text at arbitrary points, the Recursive Character Text Splitter works hierarchically. It first attempts to split text along natural boundaries like paragraphs, then sentences, and finally characters if needed. This recursive approach ensures that related content stays together when possible, maintaining semantic coherence within chunks.
This technique is particularly valuable when working with large language models that have context window limitations. By intelligently chunking documents, it allows for processing lengthy texts while preserving the contextual relationships needed for tasks like summarization, question answering, and information retrieval. LangChain implements this splitter to help developers manage document processing pipelines effectively.
Drag & drop a Recursive Character Text Splitter and set the Chunk Size and Chunk Overlap.
Set the Chunk size as: 500 and the Chunk Overlap as: 20
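To get a feel for what those two settings do, here's a minimal sketch using LangChain's splitter directly (assuming a recent LangChain install where the class lives in the langchain_text_splitters package; sample.txt is a stand-in for any long document):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Same settings as the Flowise node: 500-character chunks with a 20-character overlap.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=20)

text = open("sample.txt").read()   # stand-in for your own document text
chunks = splitter.split_text(text)

print(f"{len(chunks)} chunks, first chunk:\n{chunks[0]}")
```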
Embeddings serve as the foundation of modern natural language processing, transforming text into dense vector representations that capture semantic meaning. These numerical representations allow machines to understand relationships between words and concepts, enabling powerful applications like semantic search, clustering, and recommendation systems.
The Nomic-embed-text model offers a versatile embedding solution with configurable settings to balance performance and resource requirements. Users can adjust the dimensionality parameter (typically set between 128 and 768 dimensions), with higher dimensions capturing more nuanced semantic relationships at the cost of increased computational overhead. The model also provides batch size configuration to optimize throughput, with default settings balancing efficiency and memory usage.
Ensure vector size is the same value set in your Vector database - 768
Drag & drop the Ollama Embeddings.
Set the Base URL: http://localhost:11434
Set the Model Name: nomic-embed-text
Click on Additional Parameters to assign the GPUs.
MMap (Memory Mapping) refers to a technique used to efficiently load and access large embedding files from disk. This is particularly important when working with large-scale embedding models that might not fit into memory.
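To confirm the embedding model is reachable and actually produces 768-dimensional vectors, you can call Ollama's embeddings endpoint directly. A minimal sketch, assuming Ollama is running locally on its default port:

```python
import requests

# Ollama's embeddings endpoint on a default local install.
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "Pentaho Data Integration"},
)
resp.raise_for_status()

embedding = resp.json()["embedding"]
print(len(embedding))  # should print 768 - the vector size to set in Qdrant
```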
Qdrant is an open-source vector database designed to handle high-dimensional vectors for performance and massive-scale AI applications.
Ensure that the Pdf File -> Qdrant & Ollama Embeddings are connected to the Qdrant vector database.
Connect with the Qdrant API credential that's been set in the Credentials section - the API key can be anything as it's a local instance.
When pointing to the Qdrant server from another container, use the container name in the URL rather than localhost - e.g. http://qdrant:6333
Set the Qdrant Collection Name: Pentaho Data Integration.
Ensure the Qdrant vector size = Embedding model vector size (in this case 768)
Check all the connections and that the Flow is saved ..
You're ready to Upload .. Click on the green database icon in top right.
You can expand each Node to check the settings.
Click on Upsert.
Once the process has completed it will display the first 20 chunks and indicate the number of added records for that document.
In Qdrant you can also view the Pentaho Data Integration collection.
To check the collection .. just click on the name which displays a bunch of options:
view each record
display collection stats
check the search quality
take snapshots
visualize & graph results
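You can also hit Qdrant's REST API directly to confirm the collection exists, how many points were upserted, and that the vector size really is 768. A minimal sketch, assuming Qdrant is exposed locally on its default port 6333:

```python
import requests

# Collection names containing spaces need to be URL-encoded.
url = "http://localhost:6333/collections/Pentaho%20Data%20Integration"

info = requests.get(url).json()["result"]

print(info["points_count"])                         # number of stored chunks
print(info["config"]["params"]["vectors"]["size"])  # should match the embedding size: 768
```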
Pt2: KB - Document Store
In Pt2: Pentaho Knowledge Base the capabilities of the Knowledge Base are extended & managed by a Document Store.
In Pt1: Knowledge Base, RAG techniques are used to retrieve the required information. Great for a few documents, but as more data sources are identified it makes sense to centralize under a managed Document Store.
FlowiseAI's document store is a critical component that enables knowledge management and retrieval capabilities within the platform. It serves as a structured repository for storing, indexing, and retrieving documents that power various AI applications, particularly those relying on RAG (Retrieval-Augmented Generation) techniques.
The system automatically chunks these documents into smaller segments and generates vector embeddings for each chunk using the configured embedding model. These embeddings capture the semantic meaning of the text, allowing for similarity-based retrieval later.
Pt3: KB - Multi-Agent RAG
In a multi-agent architecture with supervisor and worker nodes, specialized agents work collaboratively under hierarchical coordination. This structure mirrors human organizational patterns, balancing autonomy with oversight.
The supervisor agent serves as the orchestrator, breaking down complex tasks, assigning them to appropriate worker agents, monitoring progress, resolving conflicts, and ensuring overall system coherence. Worker agents focus on specialized domains, executing assigned tasks autonomously while reporting status and results back to the supervisor.
In a software development context, this architecture might feature a Supervisor agent overseeing three worker agents. The Supervisor would interpret client requirements, develop a project roadmap, and coordinate the team's activities.
The Software Developer worker focuses exclusively on writing efficient, bug-free code according to specifications provided by the Supervisor. The Code Reviewer worker analyzes the developer's output, checking for errors, potential optimizations, and adherence to best practices. Meanwhile, the Documentation Writer worker creates user manuals, API documentation, and technical specifications that make the software accessible to end-users.
Software Development Team
If you've got this far .. then, assuming you've explored FlowiseAI, you're now pretty comfortable creating flows.
As this is pretty resource intensive, we're going to use different models that are trained specifically for their tasks.
| Agent | Model | Description |
| --- | --- | --- |
| Supervisor | | Anthropic's latest and greatest reasoning model |
| Software Developer | | StarCoder2-15B model is a 15B parameter model trained on 600+ programming languages. |
| Code Reviewer | | Test the code quality of StarCoder2 |
| Document Writer | | High quality documentation |
You are a supervisor tasked with managing a conversation between the following workers: {team_members}.
Given the following user request, respond with the worker to act next.
Each worker will perform a task and respond with their results and status.
When finished, respond with FINISH.
Select strategically to minimize the number of steps taken.
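Conceptually, that prompt drives a simple routing loop: ask the supervisor model which worker should act next, run that worker, feed the result back, and stop once the supervisor answers FINISH. The sketch below is only an illustration of that control flow, not Flowise's internal implementation - ask_supervisor and the worker callables are hypothetical stand-ins for the LLM call and the worker agents:

```python
from typing import Callable

def run_team(
    request: str,
    workers: dict[str, Callable[[str], str]],
    ask_supervisor: Callable[[str, list[str]], str],
) -> list[str]:
    """Route a request between workers until the supervisor says FINISH."""
    transcript = [f"User request: {request}"]
    while True:
        # The supervisor sees the conversation so far plus the team members,
        # and replies with the next worker's name or FINISH.
        choice = ask_supervisor("\n".join(transcript), list(workers))
        if choice == "FINISH":
            return transcript
        result = workers[choice]("\n".join(transcript))
        transcript.append(f"{choice}: {result}")
```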
# Role
As a Senior Software Engineer at {company}, you are a pivotal part of our innovative development team. Your expertise and leadership drive the creation of robust, scalable software solutions that meet the needs of our diverse clientele. By applying best practices in software development, you ensure that our products are reliable, efficient, and maintainable. Your technical depth and architectural vision enable you to tackle complex challenges while mentoring others and elevating the overall quality of our engineering output.
# Tasks
- Lead the development of high-quality software solutions that align with business objectives and user needs.
- Utilize deep technical knowledge to architect, design, and implement software systems that effectively address complex problems.
- Design and implement new features for assigned tasks, ensuring seamless integration with existing systems and meeting performance requirements.
- Apply your expertise in {technology} to build robust features that enhance product capabilities.
- Adhere to established coding standards and best practices, creating maintainable and well-structured code.
- Produce fully functional, well-documented features with detailed code comments that facilitate future maintenance and knowledge transfer.
- Submit completed code to a Quality Assurance Engineer for review when necessary, incorporating feedback to improve implementations.
- Finalize and release code once it has successfully passed review and meets all quality standards.
# Role
As a Quality Assurance Engineer at {company}, you are an integral part of our development team, ensuring that our software products meet the highest quality standards. Your meticulous attention to detail and expertise in testing methodologies are crucial for identifying defects and verifying that our code meets rigorous quality benchmarks. You serve as a critical checkpoint in the development lifecycle, collaborating with developers to improve code quality while maintaining a focus on user experience and system reliability.
# Tasks
- Ensure the delivery of high-quality software through comprehensive code review and systematic testing processes.
- Review new features designed and implemented by Senior Software Engineers, evaluating functionality, adherence to coding standards, maintainability, and integration with existing systems.
- Provide constructive feedback to development teams, guiding contributors toward best practices and fostering a culture of continuous improvement.
- Identify potential issues proactively and recommend solutions that enhance the robustness and scalability of the software.
- Design and execute test cases that thoroughly validate functionality against requirements and specifications.
- Document testing procedures, results, and identified defects using standard tracking systems and methodologies.
- Collaborate with cross-functional teams to understand requirements and ensure alignment between development and quality objectives.
- Consistently communicate review findings and feedback to Senior Software Engineers to support iterative improvement.
- Always pass the review and feedback back to the Senior Software Engineer.
# Role
You are an expert Technical Document Writer with extensive experience in creating clear, comprehensive, and user-friendly documentation for complex technical systems. Your technical background spans software development, engineering principles, and systems architecture, allowing you to understand and explain highly technical concepts. You excel at organizing information logically, writing in a precise style that eliminates ambiguity, and creating documentation that serves both technical and non-technical audiences as needed.
# Tasks
- Create comprehensive technical documentation including user manuals, API documentation, technical specifications, and system architecture documents.
- Translate complex technical information into clear, concise language appropriate for the target audience, whether they are developers, system administrators, or end-users.
- Structure documents logically with a consistent format, including appropriate headings, tables of contents, glossaries, and appendices.
- Include visual elements such as diagrams, flowcharts, and screenshots to enhance understanding of complex processes and systems.
- Implement standard technical writing best practices, including clear labeling of warnings and cautions, consistent terminology, and comprehensive indexing.
- Maintain version control for all documentation, clearly indicating changes between versions and ensuring documentation stays synchronized with product development.
- Collaborate with subject matter experts to gather accurate technical information and validate technical content for correctness.
- Adhere to industry-specific documentation standards and compliance requirements when necessary.
If you thought creating a Research Agent was easy, then discover Assistants. In this workshop we're going to create a Personal Assistant, but this could be anything..!
We're going to be using a Mistral model - mistral:7b
Let's create a new assistant. Click on the 'Assistants' option in the left-hand menu ..
Select Custom Assistant.
Click on the + Add blue button in the top right. Let's call it 'Personal Assistant'. Click Add.
Just work your way down the list ..
| Setting | Value |
| --- | --- |
| Select Model | ChatOllama |
| Knowledge (Document Store) | Pentaho Data Integration |
| Base URL | http://ollama:11434 |
| Model Name | mistral:7b |
| Temperature | 0.5 |
| Number of GPU | 24 (depending on your system) |

| Tool | Value |
| --- | --- |
| Serp API | Serp API - or any search Tool |
| Connect Credential | Serp API |
Click the Save icon - top right
Log in to the Serp API site - you will need an account for the API key.
Document loaders allow you to load documents from different sources like PDF, TXT, CSV, Notion, Confluence etc. They are often used together with text splitters and embedding models so the content can be upserted as embeddings, which can then be retrieved upon query.