Multi-Agent Architecture
AutoGen Conversations
Microsoft's AutoGen framework takes a distinctive approach: agents communicate through conversations, mimicking how humans collaborate in chat. Released in 2023 and rapidly evolving, it has become a go-to choice for multi-agent systems.
The Conversational Model
Unlike task-based frameworks, AutoGen agents talk to each other in natural language turns.
```python
# AutoGen conversation setup (classic v0.2 API)
from autogen import AssistantAgent, UserProxyAgent

# Shared LLM configuration
llm_config = {"config_list": [{"model": "gpt-4o"}]}

# Create agents with distinct personalities
assistant = AssistantAgent(
    name="Assistant",
    system_message="You are a helpful AI assistant.",
    llm_config=llm_config,
)

user_proxy = UserProxyAgent(
    name="User",
    human_input_mode="NEVER",  # Fully automated, no human in the loop
    code_execution_config={"work_dir": "workspace", "use_docker": False},
)

# Start conversation - agents take turns
user_proxy.initiate_chat(
    assistant,
    message="Write a Python function to calculate Fibonacci numbers.",
)
```
Group Chat: Multiple Agents
AutoGen shines with group conversations where multiple agents collaborate.
```python
from autogen import AssistantAgent, GroupChat, GroupChatManager, UserProxyAgent

llm_config = {"config_list": [{"model": "gpt-4o"}]}

# Specialized agents
coder = AssistantAgent(
    name="Coder",
    system_message="You write Python code. Only output code blocks.",
    llm_config=llm_config,
)

reviewer = AssistantAgent(
    name="Reviewer",
    system_message="You review code for bugs and improvements.",
    llm_config=llm_config,
)

executor = UserProxyAgent(
    name="Executor",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "workspace", "use_docker": False},
)

# Create group chat and a manager to orchestrate it
groupchat = GroupChat(
    agents=[coder, reviewer, executor],
    messages=[],
    max_round=10,
)
manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config)

# Kick off the conversation
result = executor.initiate_chat(
    manager,
    message="Create a web scraper for news headlines.",
)
```
Speaker Selection Strategies
Who speaks next? AutoGen's `GroupChat` supports several strategies via its `speaker_selection_method` parameter:
| Strategy | Description | Use Case |
|---|---|---|
| `auto` | LLM decides the next speaker | General conversations |
| `round_robin` | Agents take turns in order | Structured workflows |
| `random` | Random selection | Brainstorming |
| `manual` | Human selects | Supervised execution |
For example, to force strict turn-taking:

```python
groupchat = GroupChat(
    agents=[coder, reviewer, tester],
    messages=[],
    max_round=12,
    speaker_selection_method="round_robin",
)
```
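Round-robin selection itself is easy to reason about: the next speaker is simply the participant at `turn_index % len(agents)`. A minimal sketch of that logic in plain Python (illustrative only, not AutoGen's internals):

```python
# Sketch of round-robin speaker selection (illustrative, not AutoGen internals)
def round_robin_speaker(agents, turn_index):
    """Return the agent whose turn it is, cycling through the list."""
    return agents[turn_index % len(agents)]

agents = ["Coder", "Reviewer", "Tester"]
order = [round_robin_speaker(agents, i) for i in range(5)]
# Cycles: Coder, Reviewer, Tester, Coder, Reviewer
```

This determinism is why `round_robin` suits structured workflows: every agent is guaranteed a turn, and debugging from chat logs is straightforward because the order never varies.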
Async Conversations
For production, AutoGen supports async execution:
```python
import asyncio

from autogen import AssistantAgent

llm_config = {"config_list": [{"model": "gpt-4o"}]}

async def run_agent_conversation():
    agent1 = AssistantAgent(name="Analyst", llm_config=llm_config)
    agent2 = AssistantAgent(name="Strategist", llm_config=llm_config)

    # Async variant of initiate_chat
    result = await agent1.a_initiate_chat(
        agent2,
        message="Analyze the market trends for Q1 2025.",
        max_turns=4,  # Two assistants will chat indefinitely without a cap
    )
    return result

async def main():
    # Run multiple conversations in parallel
    return await asyncio.gather(
        run_agent_conversation(),
        run_agent_conversation(),
        run_agent_conversation(),
    )

results = asyncio.run(main())
```
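The parallelism here comes from `asyncio.gather`, not from AutoGen itself. A self-contained sketch with stub coroutines (no LLM calls, names are hypothetical) shows the pattern in isolation:

```python
import asyncio

async def fake_conversation(name: str) -> str:
    # Stand-in for an agent chat; sleeps instead of calling an LLM
    await asyncio.sleep(0.01)
    return f"{name}: done"

async def main():
    # All three "conversations" run concurrently on one event loop
    return await asyncio.gather(
        fake_conversation("A"),
        fake_conversation("B"),
        fake_conversation("C"),
    )

results = asyncio.run(main())
# results preserves argument order: ["A: done", "B: done", "C: done"]
```

Note that `gather` returns results in the order the coroutines were passed, regardless of which conversation finishes first.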
Termination Conditions
Control when conversations end:
```python
assistant = AssistantAgent(
    name="Assistant",
    llm_config=llm_config,
    is_termination_msg=lambda msg: "TASK_COMPLETE" in msg.get("content", ""),
    max_consecutive_auto_reply=5,  # Safety limit
)
```
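The `is_termination_msg` callable just inspects each incoming message dict, so you can unit-test the predicate on its own before wiring it into an agent:

```python
# A predicate like the one passed to is_termination_msg, tested standalone
def is_done(msg: dict) -> bool:
    return "TASK_COMPLETE" in msg.get("content", "")

assert is_done({"content": "All tests pass. TASK_COMPLETE"})
assert not is_done({"content": "Still working on it..."})
assert not is_done({"role": "assistant"})  # Missing content -> not terminal
```

The `.get("content", "")` default matters: tool-call and system messages may lack a `content` key, and the predicate should treat those as non-terminal rather than raise.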
AutoGen vs Other Frameworks
| Feature | AutoGen | CrewAI | LangGraph |
|---|---|---|---|
| Model | Conversations | Tasks/Roles | State graphs |
| Learning curve | Low | Medium | High |
| Flexibility | High | Medium | Very high |
| Debugging | Chat logs | Task traces | State history |
| Best for | Dynamic collaboration | Structured workflows | Complex logic |
Nerd Note: AutoGen's conversation model makes it easy to prototype but can lead to verbose, expensive runs. Set strict termination conditions.
Next module: Handling sessions that outlast a single API call.