Agno
Agents from Agno ..
Agno, previously known as Phidata, is an open-source framework for building agentic systems that allows developers to build, ship, and monitor AI agents with memory, knowledge, tools, and reasoning capabilities. It helps create domain-specific agents, integrate with any LLM, and manage agents' state, memory, and knowledge.
With Agno, you can build teams of agents, chat with them using an Agent UI, and monitor, evaluate, and optimize your agents. It's designed for performance and scale, supporting any model provider and allowing you to run your systems in your own cloud.
Follow the WSL + Docker instructions to configure your environment.
Ensure Ollama is installed & configured and the required models - deepseek-llm:latest & llama3.2:latest - are downloaded.
The following examples and workshops illustrate how agents can help with various tasks ..
This example demonstrates how to design an AI agent with a distinctive journalistic personality. We'll craft a news reporter character that blends authentic New York City attitude with creative storytelling elements.
Example Prompts
Try engaging with the agent using these prompts:
"What's the latest scoop from Central Park?"
"Tell me about a breaking story from Wall Street"
"What's happening at the Yankees game right now?"
"Give me the buzz about a new Broadway show"
Take a look at the .env file to set environment variables.
Since everything runs locally, no API keys are needed.
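If you want to load .env variables in Python without extra dependencies, a minimal loader might look like the sketch below. This is an illustration only - the python-dotenv package provides a more complete implementation, and the variable name in the example is hypothetical.

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: reads KEY=VALUE lines, skipping blanks and '#' comments."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault so real environment variables take precedence over the file
            os.environ.setdefault(key.strip(), value.strip())
```

In practice, calling load_dotenv() from python-dotenv at the top of your script achieves the same thing.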
# Import the dedent function to format multi-line strings without extra indentation
from textwrap import dedent
# Import necessary components from the agno library
# Agent: Core class for creating AI agents
# RunResponse: Used for handling responses from the agent (marked with noqa to ignore linting)
from agno.agent import Agent, RunResponse # noqa
# Import the Ollama model class which provides access to local LLM models
from agno.models.ollama import Ollama
# Create a new Agent instance with a news reporter personality
# This instantiates our AI agent with specific model and behavior instructions
agent = Agent(
# Specify which model to use - here we're using deepseek-llm from Ollama
# Ollama is a tool for running local LLMs, and deepseek-llm is a specific model available through it
model=Ollama(id="deepseek-llm:latest"),
# Define the agent's personality and behavior using a multi-line string
# dedent() removes the leading indentation from the multi-line string to improve readability
instructions=dedent("""\
You are an enthusiastic news reporter with a flair for storytelling! 🗽
Think of yourself as a mix between a witty comedian and a sharp journalist.
Your style guide:
- Start with an attention-grabbing headline using emoji
- Share news with enthusiasm and NYC attitude
- Keep your responses concise but entertaining
- Throw in local references and NYC slang when appropriate
- End with a catchy sign-off like 'Back to you in the studio!' or 'Reporting live from the Big Apple!'
Remember to verify all facts while keeping that NYC energy high!\
"""),
# Enable markdown formatting in the agent's responses
# This allows for rich text formatting like bold, italics, and headings
markdown=True,
)
# Execute the agent with a test prompt
# This will print the agent's response to the console in real-time (streaming)
agent.print_response(
"Tell me about a breaking news story happening in Times Square.",
stream=True # Enable streaming mode to see responses character-by-character
)
# A commented-out section containing additional example prompts
# These are suggestions for further testing the agent with different scenarios
"""
Try these fun scenarios:
1. "What's the latest food trend taking over Brooklyn?"
2. "Tell me about a peculiar incident on the subway today"
3. "What's the scoop on the newest rooftop garden in Manhattan?"
4. "Report on an unusual traffic jam caused by escaped zoo animals"
5. "Cover a flash mob wedding proposal at Grand Central"
"""
Script Breakdown
Once you've mastered the Agno agent structure, give the Workshop a go ..!
from textwrap import dedent
from agno.agent import Agent, RunResponse # noqa
from agno.models.ollama import Ollama
dedent from textwrap: Removes common leading whitespace from multi-line strings, making the code more readable while preserving the final output format
Agent from agno.agent: The main class for creating an AI agent
RunResponse (marked with # noqa to ignore linting warnings): A class for response handling
Ollama from agno.models.ollama: A wrapper for accessing Ollama LLM models
The agent configuration:
Sets the LLM to Ollama's deepseek-llm:latest model
Provides detailed personality instructions for the agent using a multi-line string with dedent()
Enables markdown formatting with markdown=True
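To see exactly what dedent() does for the instructions string, here is a small standalone comparison. Note the backslash after the opening triple quote, which suppresses the leading newline, just as in the agent script:

```python
from textwrap import dedent

raw = """\
    You are a reporter.
    Keep it concise.
"""
clean = dedent(raw)

# Without dedent, the model would receive the leading indentation verbatim.
assert raw.startswith("    You")
assert clean == "You are a reporter.\nKeep it concise.\n"
```

This is why the instructions in the script can be indented to match the surrounding code without that indentation leaking into the prompt.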
agent = Agent(
model=Ollama(id="deepseek-llm:latest"),
...
instructions=dedent("""\
You are an enthusiastic news reporter with a flair for storytelling! 🗽
Think of yourself as a mix between a witty comedian and a sharp journalist.
Your style guide:
- Start with an attention-grabbing headline using emoji
- Share news with enthusiasm and NYC attitude
- Keep your responses concise but entertaining
- Throw in local references and NYC slang when appropriate
- End with a catchy sign-off like 'Back to you in the studio!' or 'Reporting live from the Big Apple!'
Remember to verify all facts while keeping that NYC energy high!\
"""),
# Enable markdown formatting in the agent's responses
# This allows for rich text formatting like bold, italics, and headings
markdown=True,
)
The instructions block defines the agent's persona:
An enthusiastic news reporter with NYC attitude
Guidelines for response format (start with emoji headlines)
Style guidance (enthusiastic, concise, entertaining)
Instructions to use NYC references and slang
Direction to end with catchy sign-offs
Reminder to verify facts
Using the Agent
agent.print_response(
"Tell me about a breaking news story happening in Times Square.", stream=True
)
Calls the agent with a sample prompt
Uses print_response() to display the output
Enables streaming with stream=True to show responses in real time
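Streaming simply means the response arrives in chunks as the model generates it, rather than all at once at the end. A toy generator (an illustration of the idea, not Agno's internals) makes this concrete:

```python
def fake_stream(text, chunk_size=8):
    """Yield a response in small chunks, the way a streaming LLM does."""
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

response = "Reporting live from the Big Apple!"
received = ""
for chunk in fake_stream(response):
    received += chunk          # a real UI would print each chunk as it arrives
assert received == response    # the chunks reassemble into the full response
```

With stream=True, print_response() does the printing side of this loop for you.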
The agent would respond to the prompt with a news story about Times Square, using NYC slang, emoji headlines, and ending with a catchy sign-off - all in the style of an enthusiastic local reporter.
Building AI Agents with Personality
NYC News Reporter
Workshop Goals
Understand the Agno framework for building AI agents
Learn how to craft effective agent personalities
Build a custom NYC News Reporter agent
Develop skills to create your own unique AI agents
What is Agno?
Open-source framework for building AI agents
Simplifies interaction with various LLM providers
Allows for customization of agent behavior
Supports streaming, memory, and various output formats
Our Example: NYC News Reporter
# Import the dedent function to format multi-line strings without extra indentation
from textwrap import dedent
# Import necessary components from the agno library
# Agent: Core class for creating AI agents
# RunResponse: Used for handling responses from the agent (marked with noqa to ignore linting)
from agno.agent import Agent, RunResponse # noqa
# Import the Ollama model class which provides access to local LLM models
from agno.models.ollama import Ollama
# Create a new Agent instance with a news reporter personality
# This instantiates our AI agent with specific model and behavior instructions
agent = Agent(
# Specify which model to use - here we're using deepseek-llm from Ollama
# Ollama is a tool for running local LLMs, and deepseek-llm is a specific model available through it
model=Ollama(id="deepseek-llm:latest"),
# Define the agent's personality and behavior using a multi-line string
# dedent() removes the leading indentation from the multi-line string to improve readability
instructions=dedent("""\
You are an enthusiastic news reporter with a flair for storytelling! 🗽
Think of yourself as a mix between a witty comedian and a sharp journalist.
Your style guide:
- Start with an attention-grabbing headline using emoji
- Share news with enthusiasm and NYC attitude
- Keep your responses concise but entertaining
- Throw in local references and NYC slang when appropriate
- End with a catchy sign-off like 'Back to you in the studio!' or 'Reporting live from the Big Apple!'
Remember to verify all facts while keeping that NYC energy high!\
"""),
# Enable markdown formatting in the agent's responses
# This allows for rich text formatting like bold, italics, and headings
markdown=True,
)
# Execute the agent with a test prompt
# This will print the agent's response to the console in real-time (streaming)
agent.print_response(
"Tell me about a breaking news story happening in Times Square.",
stream=True # Enable streaming mode to see responses character-by-character
)
# A commented-out section containing additional example prompts
# These are suggestions for further testing the agent with different scenarios
"""
Try these fun scenarios:
1. "What's the latest food trend taking over Brooklyn?"
2. "Tell me about a peculiar incident on the subway today"
3. "What's the scoop on the newest rooftop garden in Manhattan?"
4. "Report on an unusual traffic jam caused by escaped zoo animals"
5. "Cover a flash mob wedding proposal at Grand Central"
"""
Breaking Down the Code
Imports
dedent: Formats multi-line strings nicely
Agent: Core Agno class for creating agents
Ollama: Integration with Ollama models
Agent Configuration
Model selection: DeepSeek LLM via Ollama
Instructions define the personality
Markdown formatting enabled
Anatomy of Good Agent Instructions
Clear identity: Define who the agent is
Voice and tone: Specify how the agent should "sound"
Response structure: Guidelines for formatting responses
Special elements: Unique features (emojis, sign-offs)
Boundaries: What the agent should/shouldn't do
Let's Run the Agent!
agent.print_response(
"Tell me about a breaking news story happening in Times Square.",
stream=True
)
uv run agent-basic.py
How to Create Effective Agent Personalities
Be specific and detailed
Include formatting instructions
Provide examples of desired outputs
Define the agent's knowledge boundaries
Include stylistic elements (catchphrases, quirks)
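These guidelines can be captured in a small helper that assembles an instructions string from persona components. The function and field names here are illustrative, not part of Agno; the resulting string is what you would pass as instructions=:

```python
def build_instructions(identity, style_rules, sign_offs):
    """Assemble an agent instructions string from persona components."""
    lines = [identity, "Your style guide:"]
    lines += [f"- {rule}" for rule in style_rules]
    lines.append("End with a sign-off like: " + " or ".join(repr(s) for s in sign_offs))
    return "\n".join(lines)

instructions = build_instructions(
    identity="You are a cheerful London food critic!",
    style_rules=["Open with a punchy headline", "Keep reviews under 150 words"],
    sign_offs=["Cheers from the West End!"],
)
```

Factoring the persona out this way makes the workshop exercises below easier: swap the identity and rules, keep the structure.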
Workshop: Modify the News Reporter
Change the personality aspects (different city, different news style)
Alter the formatting requirements
Add new stylistic elements
Test with the same prompt to compare results
Workshop: Create Your Own Agent
Choose one:
Food critic
Travel guide
Sports commentator
Movie reviewer
Tech support specialist
Historical tour guide
Design its personality and implement it!
This example demonstrates how to design an AI agent that can search for current information on the web. The agent is equipped with DuckDuckGo search capabilities, allowing it to retrieve up-to-date information from the internet when responding to queries.
The agent is configured to display its search process transparently, showing when and how it uses web search tools to gather information.
It formats its responses using markdown for better readability and streams its output incrementally in real-time rather than waiting to display complete responses.
The script demonstrates the agent answering a query about current events in France.
Take a look at the .env file to set environment variables.
Since everything runs locally, no API keys are needed.
# Import the Agent class from the agno.agent module - this is the main class for creating and managing AI agents
from agno.agent import Agent
# Import the Ollama class from agno.models.ollama - this provides integration with Ollama,
# a framework for running large language models locally
from agno.models.ollama import Ollama
# Import DuckDuckGoTools which provides search functionality through the DuckDuckGo search engine
from agno.tools.duckduckgo import DuckDuckGoTools
# Create a new Agent instance with the following configuration:
agent = Agent(
# Set the language model to use - in this case, Ollama with the llama3.2:latest model
# This specifies we want to use Llama 3.2 (Meta's LLM) served through Ollama
model=Ollama(id="llama3.2:latest"),
# Provide a list of tools the agent can use - here, just DuckDuckGoTools for web search capabilities
# This allows the agent to search the internet for current information
tools=[DuckDuckGoTools()],
# When set to True, this displays the tool calls in the output so you can see when and how
# the agent is using tools like search
show_tool_calls=True,
# Enable markdown formatting in the agent's responses for better readability
markdown=True,
)
# Use the agent to generate a response to the query "What's happening in France?"
# The stream=True parameter means the response will be displayed incrementally as it's generated,
# rather than waiting for the complete response before showing anything
agent.print_response("What's happening in France?", stream=True)
Script Breakdown
Once you've mastered the Agno agent structure, give the Workshop a go ..!
from agno.agent import Agent
from agno.models.ollama import Ollama
from agno.tools.duckduckgo import DuckDuckGoTools
Agent from agno.agent: The main class for creating an AI agent
Ollama from agno.models.ollama: A wrapper for accessing Ollama LLM models
DuckDuckGoTools from agno.tools.duckduckgo: Gives the agent the ability to search the web using DuckDuckGo
The agent configuration:
Sets the LLM to Ollama's llama3.2:latest model
Provides a list of tools the agent can use with tools=[DuckDuckGoTools()] for web search
Displays tool calls in the output with show_tool_calls=True
Enables markdown formatting with markdown=True
agent = Agent(
model=Ollama(id="llama3.2:latest"),
tools=[DuckDuckGoTools()],
show_tool_calls=True,
markdown=True,
)
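Under the hood, tool use follows a simple loop: the model asks for a tool by name with arguments, the framework runs it, and the result is fed back into the conversation. A stripped-down sketch of that dispatch step (not Agno's actual internals - the search function here is a stand-in):

```python
def duckduckgo_search(query: str) -> str:
    # Stand-in for a real web search call.
    return f"Top results for: {query}"

TOOLS = {"duckduckgo_search": duckduckgo_search}

def dispatch(tool_call: dict) -> str:
    """Run the tool the model requested and return its result."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

result = dispatch({"name": "duckduckgo_search",
                   "arguments": {"query": "news in France"}})
```

show_tool_calls=True surfaces these request/result pairs in the output so you can watch the loop happen.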
The agent would respond to the prompt with a summary of current events in France, searching the web through DuckDuckGo and displaying its tool calls as it gathers information.
Building Real-time AI Agents
Research Assistant Agent
Workshop Goals
Understand how to integrate search tools with the Agno framework
Learn how to craft agents that can retrieve and process current information
Build a custom "Research Assistant" agent with DuckDuckGo integration
Develop skills to create AI agents that can answer real-world questions effectively
What is an Information Agent?
An AI agent equipped with tools to retrieve live information
Combines LLM reasoning with real-time web search
Can answer questions beyond the model's training data
Presents current information in a clear, sourced format
Our Example: Search Assistant
# Import necessary components from the agno library
from textwrap import dedent
from agno.agent import Agent
from agno.models.ollama import Ollama
from agno.tools.duckduckgo import DuckDuckGoTools
# Create a Research Assistant agent with search capabilities
agent = Agent(
# Using Llama 3.2 for its reasoning capabilities
model=Ollama(id="llama3.2:latest"),
# Add DuckDuckGo search tool
tools=[DuckDuckGoTools()],
# Define the agent's personality and behavior
instructions=dedent("""\
You are a meticulous Research Assistant with excellent analytical skills! 📚
Your style guide:
- Begin with a brief acknowledgment of the research question
- Use search tools to find current, accurate information
- Present information in a clear, organized manner
- Cite your sources when providing factual information
- Summarize key findings at the end of your response
Always prioritize accuracy over speed, and be transparent about what you know and don't know.
"""),
# Show when search tools are being used
show_tool_calls=True,
# Enable markdown formatting for better readability
markdown=True,
)
# Test the agent with a current events question
agent.print_response("What are the latest developments in renewable energy?", stream=True)
Breaking Down the Code
Imports
dedent: Formats multi-line strings nicely
Agent: Core Agno class for creating agents
Ollama: Integration with Ollama models
DuckDuckGoTools: Web search through DuckDuckGo
Agent Configuration
Model selection: Llama 3.2 via Ollama
Instructions define the personality & behaviour
Markdown formatting enabled
Anatomy of Good Agent Instructions
Clear identity: Define who the agent is
Voice and tone: Specify how the agent should "sound"
Response structure: Guidelines for formatting responses
Special elements: Unique features (emojis, sign-offs)
Boundaries: What the agent should/shouldn't do
Let's Run the Agent!
agent.print_response(
"What are the latest developments in renewable energy?",
stream=True
)
uv run agent-research.py
How to Create Effective Agent Personalities
Be specific and detailed
Include formatting instructions
Provide examples of desired outputs
Define the agent's knowledge boundaries
Include stylistic elements (catchphrases, quirks)
Workshop: Modify the Research Assistant
Change the research focus (science, finance, technology, etc.)
Alter how information is presented (bullet points, summaries, etc.)
Add specific instructions about handling conflicting information
Test with current events questions to evaluate effectiveness
Workshop: Create Your Own Agent
Choose one:
Food critic
Travel guide
Sports commentator
Movie reviewer
Tech support specialist
Historical tour guide
Design its personality and implement it!
Take a look at the .env file to set environment variables.
Since everything runs locally, no API keys are needed.
from textwrap import dedent # For clean multi-line string formatting
# Import required components from the agno framework
from agno.agent import Agent # Core Agent class that orchestrates the entire process
from agno.embedder.ollama import OllamaEmbedder # Embeds text using Ollama models
from agno.knowledge.pdf_url import PDFUrlKnowledgeBase # Loads knowledge from PDF URLs
from agno.models.ollama import Ollama # Interface to Ollama's local LLM models
from agno.tools.duckduckgo import DuckDuckGoTools # Enables web searches via DuckDuckGo
from agno.vectordb.lancedb import LanceDb, SearchType # Vector database for storing embeddings
# Create a specialized Thai Recipe Expert Agent
agent = Agent(
# Configure the language model - using local Llama 3.2 via Ollama
model=Ollama(id="llama3.2:latest"), # Local LLM without API calls to OpenAI
# Detailed instructions that define the agent's personality and behavior
instructions=dedent("""\
You are a passionate and knowledgeable Thai cuisine expert! 🧑🍳
Think of yourself as a combination of a warm, encouraging cooking instructor,
a Thai food historian, and a cultural ambassador.
Follow these steps when answering questions:
1. First, search the knowledge base for authentic Thai recipes and cooking information
2. If the information in the knowledge base is incomplete OR if the user asks a question better suited for the web, search the web to fill in gaps
3. If you find the information in the knowledge base, no need to search the web
4. Always prioritize knowledge base information over web results for authenticity
5. If needed, supplement with web searches for:
- Modern adaptations or ingredient substitutions
- Cultural context and historical background
- Additional cooking tips and troubleshooting
Communication style:
1. Start each response with a relevant cooking emoji
2. Structure your responses clearly:
- Brief introduction or context
- Main content (recipe, explanation, or history)
- Pro tips or cultural insights
- Encouraging conclusion
3. For recipes, include:
- List of ingredients with possible substitutions
- Clear, numbered cooking steps
- Tips for success and common pitfalls
4. Use friendly, encouraging language
Special features:
- Explain unfamiliar Thai ingredients and suggest alternatives
- Share relevant cultural context and traditions
- Provide tips for adapting recipes to different dietary needs
- Include serving suggestions and accompaniments
End each response with an uplifting sign-off like:
- 'Happy cooking! ขอให้อร่อย (Enjoy your meal)!'
- 'May your Thai cooking adventure bring joy!'
- 'Enjoy your homemade Thai feast!'
Remember:
- Always verify recipe authenticity with the knowledge base
- Clearly indicate when information comes from web sources
- Be encouraging and supportive of home cooks at all skill levels\
"""),
# Configure the knowledge base with Thai recipe information
knowledge=PDFUrlKnowledgeBase(
# Source PDF containing Thai recipes stored in S3
urls=["https://agno-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
# Vector database configuration for efficient semantic search
vector_db=LanceDb(
uri="tmp/lancedb", # Local storage location for the vector database
table_name="recipe_knowledge", # Name of the table within LanceDB
search_type=SearchType.hybrid, # Uses both keyword and semantic search for better results
# Configure the embedder to convert text to vectors using Ollama
embedder=OllamaEmbedder(
id="llama3.2", # Using Llama 3.2 model for creating embeddings
dimensions=3072, # Specifies the embedding vector size for Llama 3.2
),
),
),
# Add web search capability using DuckDuckGo
tools=[DuckDuckGoTools()], # Allows the agent to search the web for supplementary information
# Additional configuration options
show_tool_calls=True, # Shows when external tools like web search are being used
markdown=True, # Formats responses using markdown for better readability
add_references=True, # Includes references to sources of information in responses
)
# Ensure the knowledge base is loaded before making queries
# This step may be time-consuming on first run as it downloads and processes the PDF
if agent.knowledge is not None:
agent.knowledge.load() # Loads the PDF, extracts text, creates embeddings, and stores in the vector DB
# Example queries with streaming responses (prints tokens as they're generated)
# Query 1: Request for a specific Thai soup recipe
agent.print_response(
"How do I make chicken and galangal in coconut milk soup", stream=True
)
# Query 2: Request for historical information about Thai curry
agent.print_response("What is the history of Thai curry?", stream=True)
# Query 3: Request for ingredients list for a popular Thai dish
agent.print_response("What ingredients do I need for Pad Thai?", stream=True)
Script Breakdown
Once you've mastered the Agno agent structure, give the Workshop a go ..!
from textwrap import dedent
from agno.agent import Agent
from agno.embedder.ollama import OllamaEmbedder # Embeds text using Ollama models
from agno.knowledge.pdf_url import PDFUrlKnowledgeBase # Loads knowledge from PDF URLs
from agno.models.ollama import Ollama
from agno.tools.duckduckgo import DuckDuckGoTools # Enables web searches via DuckDuckGo
from agno.vectordb.lancedb import LanceDb, SearchType # Vector database for storing embeddings
dedent from textwrap: Removes common leading whitespace from multi-line strings
Agent from agno.agent: The main class for creating an AI agent
OllamaEmbedder from agno.embedder.ollama: Embeds text using Ollama models
PDFUrlKnowledgeBase from agno.knowledge.pdf_url: Loads knowledge from PDF URLs
Ollama from agno.models.ollama: A wrapper for accessing Ollama LLM models
DuckDuckGoTools from agno.tools.duckduckgo: Enables web searches via DuckDuckGo
LanceDb and SearchType from agno.vectordb.lancedb: Vector database for storing and searching embeddings
The agent configuration:
Sets the LLM to Ollama's llama3.2:latest model
Provides detailed persona and workflow instructions using a multi-line string with dedent()
Configures a PDF knowledge base backed by LanceDB with hybrid search
Enables markdown formatting with markdown=True
# Create a specialized Thai Recipe Expert Agent
agent = Agent(
# Configure the language model - using local Llama 3.2 via Ollama
model=Ollama(id="llama3.2:latest"), # Local LLM without API calls to OpenAI
# Detailed instructions that define the agent's personality and behavior
instructions=dedent("""\
You are a passionate and knowledgeable Thai cuisine expert! 🧑🍳
Think of yourself as a combination of a warm, encouraging cooking instructor,
a Thai food historian, and a cultural ambassador.
Follow these steps when answering questions:
1. First, search the knowledge base for authentic Thai recipes and cooking information
2. If the information in the knowledge base is incomplete OR if the user asks a question better suited for the web, search the web to fill in gaps
3. If you find the information in the knowledge base, no need to search the web
4. Always prioritize knowledge base information over web results for authenticity
5. If needed, supplement with web searches for:
- Modern adaptations or ingredient substitutions
- Cultural context and historical background
- Additional cooking tips and troubleshooting
Communication style:
1. Start each response with a relevant cooking emoji
2. Structure your responses clearly:
- Brief introduction or context
- Main content (recipe, explanation, or history)
- Pro tips or cultural insights
- Encouraging conclusion
3. For recipes, include:
- List of ingredients with possible substitutions
- Clear, numbered cooking steps
- Tips for success and common pitfalls
4. Use friendly, encouraging language
Special features:
- Explain unfamiliar Thai ingredients and suggest alternatives
- Share relevant cultural context and traditions
- Provide tips for adapting recipes to different dietary needs
- Include serving suggestions and accompaniments
End each response with an uplifting sign-off like:
- 'Happy cooking! ขอให้อร่อย (Enjoy your meal)!'
- 'May your Thai cooking adventure bring joy!'
- 'Enjoy your homemade Thai feast!'
Remember:
- Always verify recipe authenticity with the knowledge base
- Clearly indicate when information comes from web sources
- Be encouraging and supportive of home cooks at all skill levels\
"""),
The instructions block defines the agent's persona:
A passionate, knowledgeable Thai cuisine expert
A knowledge-base-first workflow, with web search as a fallback
Guidelines for response structure (emoji opener, introduction, main content, tips, conclusion)
Recipe formatting rules (ingredients with substitutions, numbered steps, common pitfalls)
Direction to end with an uplifting sign-off
Reminder to verify authenticity and flag web-sourced information
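The knowledge-base-first workflow in the instructions boils down to a simple decision: answer from the knowledge base when it has relevant material, otherwise fall back to the web. As plain Python (a sketch with stand-in search functions, not Agno's actual routing):

```python
def answer(query, kb_search, web_search):
    """Prefer knowledge-base hits; fall back to web search when there are none."""
    hits = kb_search(query)
    if hits:
        return hits, "knowledge_base"
    return web_search(query), "web"

# Stand-in search functions for illustration:
kb = lambda q: ["Tom Kha Gai recipe"] if "soup" in q else []
web = lambda q: [f"web result for {q}"]

source1 = answer("coconut soup", kb, web)[1]        # served from the knowledge base
source2 = answer("Thai curry history", kb, web)[1]  # falls back to the web
```

In the real agent this routing is done by the model following its instructions, not by hard-coded logic, which is why the instructions spell the steps out so explicitly.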
Using the Agent
agent.print_response(
"How do I make chicken and galangal in coconut milk soup", stream=True
)
Calls the agent with a sample recipe question
Uses print_response() to display the output
Enables streaming with stream=True to show responses in real time
# Configure the knowledge base with Thai recipe information
knowledge=PDFUrlKnowledgeBase(
# Source PDF containing Thai recipes stored in S3
urls=["https://agno-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
# Vector database configuration for efficient semantic search
vector_db=LanceDb(
uri="tmp/lancedb", # Local storage location for the vector database
table_name="recipe_knowledge", # Name of the table within LanceDB
search_type=SearchType.hybrid, # Uses both keyword and semantic search for better results
# Configure the embedder to convert text to vectors using Ollama
embedder=OllamaEmbedder(
id="llama3.2", # Using Llama 3.2 model for creating embeddings
dimensions=3072, # Specifies the embedding vector size for Llama 3.2
),
),
),
# Add web search capability using DuckDuckGo
tools=[DuckDuckGoTools()], # Allows the agent to search the web for supplementary information
# Additional configuration options
show_tool_calls=True, # Shows when external tools like web search are being used
markdown=True, # Formats responses using markdown for better readability
add_references=True, # Includes references to sources of information in responses
)
# Ensure the knowledge base is loaded before making queries
# This step may be time-consuming on first run as it downloads and processes the PDF
if agent.knowledge is not None:
agent.knowledge.load() # Loads the PDF, extracts text, creates embeddings, and stores in the vector DB
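The vector database retrieves recipes by comparing embeddings, typically with cosine similarity: texts whose vectors point in similar directions are treated as semantically related. The measure itself is simple (hybrid search, as configured above, combines this with keyword matching):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings" (real llama3.2 embeddings have 3072 dimensions):
soup   = [0.9, 0.1, 0.0]
broth  = [0.8, 0.2, 0.0]
engine = [0.0, 0.1, 0.9]

assert cosine_similarity(soup, broth) > cosine_similarity(soup, engine)
```

This is why a query about "coconut milk soup" can surface the Tom Kha Gai recipe even when the PDF never uses those exact words.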
# Imports for this knowledge-base example -- reconstructed here, since the
# original opening of the snippet was lost; module paths follow Agno's layout
from agno.agent import Agent
from agno.embedder.ollama import OllamaEmbedder
from agno.knowledge.pdf_url import PDFUrlKnowledgeBase
from agno.models.ollama import Ollama
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.vectordb.lancedb import LanceDb, SearchType

# Create the agent with a local Ollama model and a PDF-backed knowledge base
agent = Agent(
    model=Ollama(id="deepseek-llm:latest"),  # assumed model id; any pulled Ollama model works
    # Configure the knowledge base with Thai recipe information
    knowledge=PDFUrlKnowledgeBase(
        # Source PDF containing Thai recipes stored in S3
        urls=["https://agno-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
        # Vector database configuration for efficient semantic search
        vector_db=LanceDb(
            uri="tmp/lancedb",  # Local storage location for the vector database
            table_name="recipe_knowledge",  # Name of the table within LanceDB
            search_type=SearchType.hybrid,  # Uses both keyword and semantic search for better results
            # Configure the embedder to convert text to vectors using Ollama
            embedder=OllamaEmbedder(
                id="llama3.2",  # Using Llama 3.2 model for creating embeddings
                dimensions=3072,  # Specifies the embedding vector size for Llama 3.2
            ),
        ),
    ),
    # Add web search capability using DuckDuckGo
    tools=[DuckDuckGoTools()],  # Allows the agent to search the web for supplementary information
    # Additional configuration options
    show_tool_calls=True,  # Shows when external tools like web search are being used
    markdown=True,  # Formats responses using markdown for better readability
    add_references=True,  # Includes references to sources of information in responses
)

# Ensure the knowledge base is loaded before making queries
# This step may be time-consuming on first run as it downloads and processes the PDF
if agent.knowledge is not None:
    agent.knowledge.load()  # Loads the PDF, extracts text, creates embeddings, and stores in the vector DB

# Example queries with streaming responses (prints tokens as they're generated)
# Query 1: Request for a specific Thai soup recipe
agent.print_response(
    "How do I make chicken and galangal in coconut milk soup", stream=True
)
# Query 2: Request for historical information about Thai curry
agent.print_response("What is the history of Thai curry?", stream=True)
# Query 3: Request for ingredients list for a popular Thai dish
agent.print_response("What ingredients do I need for Pad Thai?", stream=True)
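`SearchType.hybrid` in the configuration above blends keyword matching with vector (semantic) similarity. The toy sketch below only illustrates the idea; LanceDB's actual scoring differs, and the functions and the `alpha` weight here are made up for clarity:

```python
import math

def keyword_score(query: str, doc: str) -> float:
    # Keyword side: fraction of query terms that appear in the document
    terms = set(query.lower().split())
    hits = sum(1 for t in terms if t in doc.lower())
    return hits / len(terms)

def cosine(a: list, b: list) -> float:
    # Semantic side: cosine similarity between two embedding vectors
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0 or nb == 0:
        return 0.0
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def hybrid_score(query, doc, q_vec, d_vec, alpha=0.5):
    # Weighted blend of the two signals; alpha is a made-up tuning knob
    return alpha * keyword_score(query, doc) + (1 - alpha) * cosine(q_vec, d_vec)
```

Hybrid search helps when a query uses exact terms (recipe names, ingredients) that pure semantic search might blur together.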
This example demonstrates how to serve multiple agents through the Agno Playground UI. We'll configure two agents backed by a local Ollama model: a Web Agent that searches the web with DuckDuckGo, and a Finance Agent that retrieves market data via YFinance. Both persist their conversation history to a local SQLite database.
Take a look at the .env file to set environment variables.
Since everything runs locally, no API keys are needed.
from agno.agent import Agent
from agno.models.ollama import OllamaChat  # Import OllamaChat for local LLM integration
from agno.playground import Playground, serve_playground_app
from agno.storage.agent.sqlite import SqliteAgentStorage
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.yfinance import YFinanceTools

# Define storage location for agent conversations
agent_storage: str = "tmp/agents.db"

# Configure Ollama model integration
# --------------------------------
# This creates an instance of OllamaChat that connects to a locally running
# Ollama server. The Ollama server must be running separately and have
# the specified model (llama3.2b:latest) already pulled/available.
# No separate Ollama Python client import is needed as Agno's OllamaChat
# handles the API communication directly.
llama_model = OllamaChat(
    id="llama3.2b:latest",  # Specifies which model to use from Ollama
    base_url="http://localhost:11434",  # Default URL where the Ollama server is running
)

# Web Agent configuration
# ----------------------
# This agent can search the web using DuckDuckGo
web_agent = Agent(
    name="Web Agent",
    model=llama_model,  # Use local Llama model instead of OpenAI
    tools=[DuckDuckGoTools()],  # Provide web search capabilities
    instructions=["Always include sources"],  # Custom instruction for the agent
    # Persistence configuration
    storage=SqliteAgentStorage(table_name="web_agent", db_file=agent_storage),
    # Additional agent settings
    add_datetime_to_instructions=True,  # Adds current date/time context
    add_history_to_messages=True,  # Includes conversation history
    num_history_responses=5,  # How many past exchanges to include
    markdown=True,  # Format responses as markdown
)

# Finance Agent configuration
# --------------------------
# This agent can retrieve financial data using YFinance
finance_agent = Agent(
    name="Finance Agent",
    model=llama_model,  # Use local Llama model instead of OpenAI
    # Provide access to various financial data tools
    tools=[YFinanceTools(
        stock_price=True,
        analyst_recommendations=True,
        company_info=True,
        company_news=True,
    )],
    instructions=["Always use tables to display data"],  # Custom instruction
    # Use same storage pattern but different table
    storage=SqliteAgentStorage(table_name="finance_agent", db_file=agent_storage),
    # Same additional settings as web agent
    add_datetime_to_instructions=True,
    add_history_to_messages=True,
    num_history_responses=5,
    markdown=True,
)

# Create and configure the playground application
app = Playground(agents=[web_agent, finance_agent]).get_app()

# Run the application when script is executed directly
if __name__ == "__main__":
    # Start the playground with hot-reloading enabled for development
    serve_playground_app("playground:app", reload=True)
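The add_history_to_messages / num_history_responses settings above give each agent a sliding window of recent exchanges. The stdlib sketch below illustrates the idea only; it is not Agno's actual implementation:

```python
from collections import deque

class HistoryWindow:
    """Keeps only the last n exchanges, like num_history_responses=5."""

    def __init__(self, n: int = 5):
        self.turns: deque = deque(maxlen=n)  # oldest exchanges fall off automatically

    def add(self, user: str, assistant: str) -> None:
        self.turns.append((user, assistant))

    def context(self) -> list:
        # What would be prepended to the next prompt as conversation history
        return list(self.turns)

window = HistoryWindow(n=2)
for i in range(4):
    window.add(f"question {i}", f"answer {i}")
print(window.context())  # only the two most recent exchanges remain
```

Bounding the window like this keeps prompts short while still giving the model recent context.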
Script Breakdown
Once you've mastered the Agno agent structure, give the Workshop a go ..!
from textwrap import dedent
from agno.agent import Agent, RunResponse # noqa
from agno.models.ollama import Ollama
dedent from textwrap: Removes common leading whitespace from multi-line strings, making the code more readable while preserving the final output format
Agent from agno.agent: The main class for creating an AI agent
RunResponse from agno.agent (marked with # noqa to ignore linting warnings): A class for response handling
Ollama from agno.models.ollama: A wrapper for accessing Ollama LLM models
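A quick stdlib demonstration of what dedent does (the string here is arbitrary):

```python
from textwrap import dedent

# The "\" after the opening quotes suppresses the leading blank line
raw = """\
    Line one
    Line two
"""
print(dedent(raw))  # the common four-space indent is stripped
```

This is why the agent's instructions can be indented to match the surrounding code without the indentation leaking into the prompt.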
Sets the LLM to use Ollama's deepseek-llm:latest model
Provides detailed personality instructions for the agent using a multi-line string with dedent()
Enables markdown formatting with markdown=True
agent = Agent(
    model=Ollama(id="deepseek-llm:latest"),
    instructions=dedent("""\
        You are an enthusiastic news reporter with a flair for storytelling! 🗽
        ...
    """),
    markdown=True,
)
The instructions block defines the agent's persona:
An enthusiastic news reporter with NYC attitude
Guidelines for response format (start with emoji headlines)
Style guidance (enthusiastic, concise, entertaining)
Instructions to use NYC references and slang
Direction to end with catchy sign-offs
Reminder to verify facts
Using the Agent
agent.print_response(
    "Tell me about a breaking news story happening in Times Square.", stream=True
)
Calls the agent with a sample prompt
Uses print_response() to display the output
Enables streaming with stream=True to show responses in real-time
The agent would respond to the prompt with a news story about Times Square, using NYC slang, emoji headlines, and ending with a catchy sign-off - all in the style of an enthusiastic local reporter.
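stream=True means output appears as the model produces it rather than after the full response is ready. The toy generator below illustrates the idea only; real tokens come from the model, not from str.split:

```python
import sys

def stream_tokens(text: str):
    # Stand-in for a model emitting tokens one at a time
    for word in text.split():
        yield word + " "

for token in stream_tokens("Breaking news from Times Square!"):
    sys.stdout.write(token)  # each token is written as soon as it is yielded
sys.stdout.write("\n")
```

For long responses, streaming makes the agent feel responsive because the first words appear almost immediately.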
Building AI Agents with Personality
Creating a NYC News Reporter with Agno
Workshop Goals
Understand the Agno framework for building AI agents
Learn how to craft effective agent personalities
Build a custom NYC News Reporter agent
Develop skills to create your own unique AI agents
What is Agno?
Open-source framework for building AI agents
Simplifies interaction with various LLM providers
Allows for customization of agent behavior
Supports streaming, memory, and various output formats
Our Example: NYC News Reporter
# Import the dedent function to format multi-line strings without extra indentation
from textwrap import dedent
# Import necessary components from the agno library
# Agent: Core class for creating AI agents
# RunResponse: Used for handling responses from the agent (marked with noqa to ignore linting)
from agno.agent import Agent, RunResponse # noqa
# Import the Ollama model class which provides access to local LLM models
from agno.models.ollama import Ollama
# Create a new Agent instance with a news reporter personality
# This instantiates our AI agent with specific model and behavior instructions
agent = Agent(
    # Specify which model to use - here we're using deepseek-llm from Ollama
    # Ollama is a tool for running local LLMs, and deepseek-llm is a specific model available through it
    model=Ollama(id="deepseek-llm:latest"),
    # Define the agent's personality and behavior using a multi-line string
    # dedent() removes the leading indentation from the multi-line string to improve readability
    instructions=dedent("""\
        You are an enthusiastic news reporter with a flair for storytelling! 🗽
        Think of yourself as a mix between a witty comedian and a sharp journalist.
        Your style guide:
        - Start with an attention-grabbing headline using emoji
        - Share news with enthusiasm and NYC attitude
        - Keep your responses concise but entertaining
        - Throw in local references and NYC slang when appropriate
        - End with a catchy sign-off like 'Back to you in the studio!' or 'Reporting live from the Big Apple!'
        Remember to verify all facts while keeping that NYC energy high!\
    """),
    # Enable markdown formatting in the agent's responses
    # This allows for rich text formatting like bold, italics, and headings
    markdown=True,
)
# Execute the agent with a test prompt
# This will print the agent's response to the console in real-time (streaming)
agent.print_response(
    "Tell me about a breaking news story happening in Times Square.",
    stream=True  # Enable streaming mode to see responses character-by-character
)
# A commented-out section containing additional example prompts
# These are suggestions for further testing the agent with different scenarios
"""
Try these fun scenarios:
1. "What's the latest food trend taking over Brooklyn?"
2. "Tell me about a peculiar incident on the subway today"
3. "What's the scoop on the newest rooftop garden in Manhattan?"
4. "Report on an unusual traffic jam caused by escaped zoo animals"
5. "Cover a flash mob wedding proposal at Grand Central"
"""
Breaking Down the Code
Imports
dedent: Formats multi-line strings nicely
Agent: Core Agno class for creating agents
Ollama: Integration with Ollama models
Agent Configuration
Model selection: DeepSeek LLM via Ollama
Instructions define the personality
Markdown formatting enabled
Anatomy of Good Agent Instructions
Clear identity: Define who the agent is
Voice and tone: Specify how the agent should "sound"
Response structure: Guidelines for formatting responses
Special elements: Unique features (emojis, sign-offs)
Boundaries: What the agent should/shouldn't do
Let's Run the Agent!
agent.print_response(
    "Tell me about a breaking news story happening in Times Square.",
    stream=True
)
uv run agent-basic.py
How to Create Effective Agent Personalities
Be specific and detailed
Include formatting instructions
Provide examples of desired outputs
Define the agent's knowledge boundaries
Include stylistic elements (catchphrases, quirks)
Workshop Exercise 1
Modify the News Reporter
Change the personality aspects (different city, different news style)
Alter the formatting requirements
Add new stylistic elements
Test with the same prompt to compare results
Workshop Exercise 2
Create Your Own Agent
Choose one:
Food critic
Travel guide
Sports commentator
Movie reviewer
Tech support specialist
Historical tour guide
Design its personality and implement it!