Top 5 Open-Source Frameworks for Building AI Agents, with Examples
![Top 5 Open Source Frameworks for building AI Agents with examples](/content/images/size/w1200/2025/02/Blog-cover-template--25-.png)
AI agents are changing the way we interact with technology, automating tasks, making decisions, and even collaborating with humans. But building powerful AI agents from scratch can be complex.
Luckily, open-source frameworks make it easier, offering tools and ready-to-use structures to develop smart, interactive agents. In this article, we’ll explore the top 5 open-source frameworks for building AI agents, complete with examples to show how they work in action.
1. Phidata: The AI Engineering Streamliner
Phidata (now Agno) is a lightweight framework for building multimodal agents. It allows developers to turn LLMs into agents for AI products, supporting both closed- and open-source LLMs from providers such as OpenAI, Anthropic, and others.
Phidata's architecture is centered around building agents with memory, knowledge, tools, and reasoning capabilities, as well as teams of agents that can work together.
Strengths:
- Memory and Knowledge Management: Phidata remembers the past! It keeps track of conversations and info, so your AI agent isn't starting from scratch every time.
- Multi-Agent Orchestration: Got a team of AI helpers? Phidata manages them, so they play nice and work together on complex tasks.
- Built-in Agent UI: Quick, easy testing is key! Phidata's got a user-friendly interface for checking out how your agent's doing.
- Deployment and Monitoring: Keep an eye on things. Phidata helps you deploy your agent and track how it's performing in the real world.
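The memory strength above can be illustrated framework-agnostically: the agent keeps past turns in a store and replays them as context on each model call, so it never starts from scratch. A minimal plain-Python sketch of the idea (illustrative only, not Phidata's actual API):

```python
class ConversationMemory:
    """Toy illustration of agent memory: keep past turns and replay them as context."""

    def __init__(self):
        self.turns = []  # list of (role, text) tuples

    def add(self, role, text):
        self.turns.append((role, text))

    def context(self):
        # Flatten history into the prompt prefix the model would see on each call
        return "\n".join(f"{role}: {text}" for role, text in self.turns)


memory = ConversationMemory()
memory.add("user", "My name is Ada.")
memory.add("assistant", "Nice to meet you, Ada!")
memory.add("user", "What is my name?")

# The agent's next model call includes the full history, so it can answer correctly
print(memory.context())
```

Phidata handles this bookkeeping for you, persisting conversations and knowledge between runs so agents keep context across sessions.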
Example Agent: GitHub README Writer (an agent that takes a repository link and writes a README by understanding the code)
```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.github import GithubTools
from agno.tools.local_file_system import LocalFileSystemTools

readme_gen_agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    name="Readme Generator Agent",
    tools=[GithubTools(), LocalFileSystemTools()],
    markdown=True,
    debug_mode=True,
    instructions=[
        "You are a README generator agent.",
        "You'll be given a repository URL or repository name by the user.",
        "Use the `get_repository` tool to get the repository details.",
        "Pass repo_name as the argument, in the format owner/repo_name. If given a URL, extract owner/repo_name from it.",
        "Also call the `get_repository_languages` tool to get the languages used in the repository.",
        "Write a useful README for an open-source project, including how to clone, install, and run it. Add badges for the license, repo size, etc.",
        "Don't include the project's languages-used in the README.",
        "Write the produced README to the local filesystem.",
    ],
)

readme_gen_agent.print_response(
    "Get details of https://github.com/agno-agi/agno", markdown=True
)
```
Find more Agents built using Phidata along with code here
2. AutoGen: Conversational AI Agents
AutoGen, built by Microsoft, is a framework designed for building AI applications using multiple agents that can converse with each other to solve tasks. It focuses on enabling customizable, conversational AI agents that can support complex workflows through collaboration.
Strengths:
- Scalable & Distributed: AutoGen enables seamless deployment of large-scale, distributed agent networks across various environments.
- Robust Debugging: Built-in tools for tracking, tracing, and debugging ensure reliable agent workflows, with OpenTelemetry support for observability.
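The conversational pattern AutoGen is built around, agents exchanging messages until a termination condition is met, can be sketched in plain Python (a toy illustration, not AutoGen's API):

```python
def summarizer(text, feedback=None):
    """Toy 'agent': produces a summary, crudely revising it when given feedback."""
    summary = text.split(".")[0] + "."
    if feedback:  # crude "revision": truncate further
        summary = summary[:40].rstrip() + "..."
    return summary

def reviewer(summary):
    """Toy 'agent': returns None (approval) or a revision request."""
    return None if len(summary) <= 60 else "too long, please shorten"

text = ("AutoGen structures applications as conversations between agents "
        "that exchange messages until a termination condition is reached.")

# The two agents converse until the reviewer approves, capped at 5 rounds
# (the analogue of max_consecutive_auto_reply below)
feedback = None
for _ in range(5):
    summary = summarizer(text, feedback)
    feedback = reviewer(summary)
    if feedback is None:
        break

print(summary)
```

In AutoGen the same loop is driven by real LLM agents and `initiate_chat`, with termination conditions and reply limits handled by the framework.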
Example Agent: Restructuring a Raw Note into a Document with Summary and To-Do List
```python
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json

# Load LLM inference endpoints from an environment variable or a file
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")

# Create the agents
summarizer = AssistantAgent(
    name="Summarizer",
    llm_config={"config_list": config_list, "model": "gpt-3.5-turbo"},  # or whatever model you want
)
title_generator = AssistantAgent(
    name="TitleGenerator",
    llm_config={"config_list": config_list, "model": "gpt-3.5-turbo"},
)
todo_extractor = AssistantAgent(
    name="ToDoExtractor",
    llm_config={"config_list": config_list, "model": "gpt-3.5-turbo"},
)
user_proxy = UserProxyAgent(
    name="User_Proxy",
    code_execution_config={"work_dir": "coding"},
    human_input_mode="NEVER",  # set to "ALWAYS" to manually approve every message
    max_consecutive_auto_reply=10,
)

# Start the process
user_proxy.initiate_chat(
    summarizer,
    message="Please summarize this note: [Your Raw Note Here]",
)
# Extract the summary text from the conversation history
summary = user_proxy.last_message()["content"]

user_proxy.initiate_chat(
    title_generator,
    message=f"Please generate a concise title for this summary: {summary}",
)
title = user_proxy.last_message()["content"]

user_proxy.initiate_chat(
    todo_extractor,
    message="Please generate a To-Do list based on this note: [Your Raw Note Here]",
)
todo_list = user_proxy.last_message()["content"]

print("Summary: " + summary)
print("Title: " + title)
print("To-Do List: " + todo_list)
```
Find more Agents built using Autogen along with code here
3. CrewAI: Orchestrating Agent Teams
CrewAI is an open-source framework focused on multi-agent orchestration, enabling AI agents to collaborate on tasks with defined roles and shared objectives. It is designed for scenarios that require teamwork among agents.
Strengths:
- Role-Based Agent Collaboration: CrewAI enables the creation of AI agents with defined roles and goals, facilitating natural, autonomous decision-making and dynamic task delegation.
- Production-Ready Framework: Built for reliability and scalability, CrewAI is suitable for real-world applications, providing precise control and deep customization for complex business challenges.
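Role-based collaboration with a sequential process boils down to each role producing output that feeds the next role's task. A toy plain-Python sketch of that pipeline (illustrative only, not CrewAI's API):

```python
def researcher(topic):
    """Toy role: gather raw facts for a topic."""
    return [f"{topic} fact 1", f"{topic} fact 2"]

def analyst(facts):
    """Toy role: turn raw facts into findings."""
    return [f"finding based on: {fact}" for fact in facts]

def writer(findings):
    """Toy role: compile findings into a report."""
    return "REPORT\n" + "\n".join(f"- {finding}" for finding in findings)

# A sequential "process": each role's output feeds the next,
# as task1 -> task2 -> task3 do in the CrewAI example below
tasks = [researcher, analyst, writer]
result = "TSLA"
for task in tasks:
    result = task(result)

print(result)
```

CrewAI replaces each toy function with an LLM-backed `Agent` that has a role, goal, and backstory, and `Process.sequential` wires the task outputs together.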
Example Agent: AI-Powered Stock Analysis for Finance Teams
```python
import os
from crewai import Crew, Agent, Task, Process
from dotenv import load_dotenv
from langchain.tools import DuckDuckGoSearchRun

load_dotenv()

# Replace with your actual API keys and model configuration
# You might use OpenAI, Gemini, or another LLM provider
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"  # or load it from your .env file
llm_model = "gpt-4"  # or "gpt-3.5-turbo", etc. -- choose based on your budget and needs

# 1. Tools (example: DuckDuckGo for search)
search_tool = DuckDuckGoSearchRun()
# Alternative:
# from langchain.agents import load_tools
# tools = load_tools(["ddg-search", "wikipedia"])

# 2. Define the Agents
data_gatherer = Agent(
    role="Data Gathering Expert",
    goal="Collect comprehensive data for stock analysis using web scraping and financial APIs",
    backstory="""You are an expert data gatherer with 5 years of experience.
    You collect data via web scraping and using any appropriate tools.""",
    verbose=True,
    allow_delegation=False,
    tools=[search_tool],
    llm=llm_model,
)
financial_analyst = Agent(
    role="Financial Expert",
    goal="Analyze financial data and identify investment opportunities, looking for key trends and ratios",
    backstory="""You are an expert financial analyst with 10 years of experience,
    skilled in analyzing financial data and identifying promising investment opportunities.""",
    verbose=True,
    allow_delegation=True,  # allows this agent to delegate tasks
    tools=[search_tool],
    llm=llm_model,
)
report_generator = Agent(
    role="Report Writer",
    goal="Generate clear and concise investment reports, summarizing findings and recommendations",
    backstory="""You are a skilled report writer with a background in finance,
    excellent at communicating complex financial information in an accessible manner.""",
    verbose=True,
    allow_delegation=False,
    llm=llm_model,
)

# 3. Define the Tasks
task1 = Task(
    description="""Gather the latest market data, including stock prices, trading volumes,
    and relevant news articles, for Tesla (TSLA). Focus on data from the last month.
    Provide direct URLs to the data sources.""",
    agent=data_gatherer,
)
task2 = Task(
    description="""Analyze the gathered data for Tesla (TSLA), identify key trends,
    compute relevant financial ratios (e.g., P/E ratio, debt-to-equity ratio),
    and assess potential investment opportunities and risks. Include a bulleted list of key findings.""",
    agent=financial_analyst,
)
task3 = Task(
    description="""Compile the analysis into a structured report with key findings,
    recommendations, and supporting data. The report should be well formatted and easy
    to understand for a finance team. Include a section on potential risks and mitigation strategies.""",
    agent=report_generator,
)

# 4. Form the Crew
stock_analysis_crew = Crew(
    agents=[data_gatherer, financial_analyst, report_generator],
    tasks=[task1, task2, task3],
    verbose=True,  # show which tasks are being worked on
    process=Process.sequential,  # sequential task execution
)

# 5. Kick off the Crew
report = stock_analysis_crew.kickoff()
print("Generated Report:")
print(report)
```
Find more Agents built using Crew AI along with code here
4. LangGraph: Building Complex AI Workflows
LangGraph is an open-source framework designed to build stateful, multi-agent applications using Large Language Models (LLMs). It structures workflows as graphs, where each node represents a specific task or function, allowing for fine-grained control over the flow and state of applications.
Strengths:
- Graph-Based Workflows: LangGraph structures tasks as nodes in a graph, enabling flexible decision-making and iterative processes.
- Stateful Agents: Agents retain context and memory across tasks, making multi-step interactions seamless.
- Precise Control: Developers get fine-tuned control over agent behavior and workflows for custom solutions.
- Seamless Integration: Works effortlessly with LangChain and LangSmith for enhanced tools, monitoring, and optimization.
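The graph idea is simple at its core: nodes are functions that transform a shared state, and edges decide which node runs next. A toy plain-Python sketch of that execution model (illustrative only, not LangGraph's API):

```python
def generate_outline(state):
    """Node 1: derive an outline from the topic in the shared state."""
    state["outline"] = f"Outline for: {state['topic']}"
    return state

def write_content(state):
    """Node 2: expand the outline into content."""
    state["content"] = f"Post expanding on '{state['outline']}'"
    return state

# The graph: nodes mapped by name, edges mapping each node to its successor
nodes = {"generate_outline": generate_outline, "write_content": write_content}
edges = {"generate_outline": "write_content", "write_content": None}  # None = END

# Run: start at the entry point and follow edges until END
state = {"topic": "The Future of AI in Education"}
current = "generate_outline"
while current is not None:
    state = nodes[current](state)
    current = edges[current]

print(state["content"])
```

LangGraph generalizes this loop with conditional edges, cycles, checkpointing, and persistent state, which is what makes its agents "stateful".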
Example Agent: Automated Blog Post Creation (A two-agent system where one agent generates a detailed outline based on a topic, and the second agent writes the complete blog post content from that outline, demonstrating a simple content generation pipeline)
```python
import os
from typing import Any, Dict, TypedDict
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END

# 1. Define the state:
class GraphState(TypedDict):
    """Represents the state of our graph.

    Attributes:
        keys: Dictionary where we can store arbitrary values relevant to our graph.
    """
    keys: Dict[str, Any]

# 2. Define nodes (functions/agents):
# Load the API key
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"  # replace with your API key or load from environment variables

# Configure the LLM -- replace with your own model or configuration
your_llm = ChatOpenAI(temperature=0.7)

def generate_outline(state: GraphState):
    """Generates a blog post outline."""
    topic = state["keys"]["topic"]
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are an expert blog post outline generator. Given a topic, you create a detailed and well-structured outline."),
        ("human", "Please generate a detailed outline for a blog post on the topic: {topic}"),
    ])
    chain = prompt | your_llm
    outline = chain.invoke({"topic": topic})
    # Merge into the existing keys so the topic isn't lost, and store the LLM's text
    return {"keys": {**state["keys"], "outline": outline.content}}

def write_content(state: GraphState):
    """Writes the blog post content based on the outline."""
    outline = state["keys"]["outline"]
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are an expert blog post writer. Given an outline, you write high-quality, engaging, and informative content."),
        ("human", "Please write a blog post based on the following outline:\n{outline}"),
    ])
    chain = prompt | your_llm
    content = chain.invoke({"outline": outline})
    # Merge into the existing keys so the outline stays available in the final result
    return {"keys": {**state["keys"], "content": content.content}}

# 3. Define the graph:
workflow = StateGraph(GraphState)
workflow.add_node("generate_outline", generate_outline)
workflow.add_node("write_content", write_content)
workflow.set_entry_point("generate_outline")
workflow.add_edge("generate_outline", "write_content")
workflow.add_edge("write_content", END)

# Compile the graph
app = workflow.compile()

# 4. Run the workflow:
topic = "The Future of AI in Education"
state = {"keys": {"topic": topic}}
result = app.invoke(state)

print("Blog Post Outline:\n", result["keys"]["outline"])
print("\nBlog Post Content:\n", result["keys"]["content"])
```
Find more Agents built using LangGraph along with code here
5. OpenAI Swarm: Lightweight Agent Coordination
OpenAI's Swarm is an experimental, open-source framework designed to help developers orchestrate multi-agent AI systems, focusing on lightweight coordination and making agent interaction easily controllable and testable.
Strengths:
- Lightweight Coordination: Quick and efficient. Swarm is designed for speedy agent coordination without unnecessary overhead.
- Controllability: This framework gives you the power to easily test and customize how agents interact.
- Client-Side Operation: Operating almost entirely on the client side, Swarm offers developers greater control over system behavior and state management, enhancing predictability and ease of testing.
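Swarm's central primitive is the handoff: an agent's function can return another agent, transferring control of the conversation. A toy plain-Python sketch of the idea (illustrative only, not Swarm's API):

```python
def refunds_agent(message):
    """Toy specialist agent for refund requests."""
    return ("refunds", f"Processing refund request: {message}")

def sales_agent(message):
    """Toy specialist agent for everything else."""
    return ("sales", f"Great choice! Let's talk products: {message}")

def triage_agent(message):
    """Handoff: return the *function* of the agent best suited to the request."""
    return refunds_agent if "refund" in message.lower() else sales_agent

message = "I would like a refund for my recent purchase."
agent = triage_agent(message)   # triage returns a handoff target
name, reply = agent(message)    # control passes to the chosen agent

print(name, "->", reply)
```

In Swarm, the `Agent` whose function returns another `Agent` triggers exactly this kind of transfer, which the example below uses for triage.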
Example Agent: Triage Agent that directs user requests to either a Sales Agent or a Refunds Agent based on the user's input.
```python
from swarm import Swarm, Agent

# Initialize the Swarm client
client = Swarm()

# Define the Refunds Agent's functions
def process_refund(item_id, reason="NOT SPECIFIED"):
    """Refund an item. Ensure you have the item_id and ask for user confirmation before processing."""
    print(f"[mock] Refunding item {item_id} because {reason}...")
    return "Success!"

def apply_discount():
    """Apply a discount to the user's cart."""
    print("[mock] Applying discount...")
    return "Applied discount of 10%"

refunds_agent = Agent(
    name="Refunds Agent",
    instructions="Assist the user with refunds. If the reason is that it was too expensive, offer a discount. If they insist, then process the refund.",
    functions=[process_refund, apply_discount],
)

# Define the Sales Agent
sales_agent = Agent(
    name="Sales Agent",
    instructions="Be super enthusiastic about selling our products.",
)

# Define functions to transfer control between agents
def transfer_to_sales():
    return sales_agent

def transfer_to_refunds():
    return refunds_agent

def transfer_back_to_triage():
    return triage_agent

# Define the Triage Agent
triage_agent = Agent(
    name="Triage Agent",
    instructions="Determine which agent is best suited to handle the user's request, and transfer the conversation to that agent.",
    functions=[transfer_to_sales, transfer_to_refunds],
)

# Add the transfer-back function to the Sales and Refunds agents
sales_agent.functions.append(transfer_back_to_triage)
refunds_agent.functions.append(transfer_back_to_triage)

# User message
messages = [{"role": "user", "content": "I would like a refund for my recent purchase."}]

# Run the Swarm client, starting with the Triage Agent
response = client.run(agent=triage_agent, messages=messages)

# Print the response from the active agent
print(response.messages[-1]["content"])
```
Find more Agents built using Swarm along with code here
Conclusion
The five frameworks discussed in this article offer a range of capabilities for building AI agents, each with its strengths. Phidata excels in memory management and multi-agent orchestration, while AutoGen focuses on conversational AI and workflow support. CrewAI is designed for multi-agent collaboration and role definition, and LangGraph simplifies the creation of complex AI workflows. OpenAI Swarm provides a lightweight framework for agent coordination.
Choosing the right framework depends on the specific requirements of the AI agent application. Developers should consider factors such as the complexity of the task, the need for memory and knowledge management, the importance of multi-agent collaboration, and the desired level of control and customization.
Looking to streamline your AI development? Explore Athina AI — the ideal platform for building, testing, and monitoring AI features tailored to your needs.