Lesson 4 of 20

Multi-Agent Architecture

AutoGen Conversations

4 min read

Microsoft's AutoGen framework takes a distinctive approach: agents communicate through conversations, mimicking how humans collaborate in chat. Released in 2023 and evolving rapidly, it has become a go-to choice for multi-agent systems.

The Conversational Model

Unlike task-based frameworks, AutoGen agents talk to each other in natural language turns.

# AutoGen conversation setup
from autogen import AssistantAgent, UserProxyAgent

# Create agents with distinct personalities
assistant = AssistantAgent(
    name="Assistant",
    system_message="You are a helpful AI assistant.",
    llm_config={"model": "gpt-4"}
)

user_proxy = UserProxyAgent(
    name="User",
    human_input_mode="NEVER",  # Fully automated
    code_execution_config={
        "work_dir": "workspace",
        "use_docker": False,  # run locally; AutoGen sandboxes in Docker by default
    }
)

# Start conversation - agents take turns
user_proxy.initiate_chat(
    assistant,
    message="Write a Python function to calculate fibonacci numbers."
)
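Conceptually, this two-agent chat is just an alternating reply loop that runs until a turn limit or stop condition. A toy sketch of that loop with a hypothetical ToyAgent class (no LLM calls, and not AutoGen's actual internals):

```python
class ToyAgent:
    """Hypothetical stand-in for an AutoGen agent: no LLM, just a reply function."""
    def __init__(self, name, reply_fn):
        self.name = name
        self.reply_fn = reply_fn

    def generate_reply(self, message):
        return self.reply_fn(message)

def two_agent_chat(initiator, responder, opening, max_turns=4):
    """Alternate turns between two agents, returning the transcript."""
    transcript = [(initiator.name, opening)]
    speaker, other = responder, initiator
    message = opening
    for _ in range(max_turns - 1):
        message = speaker.generate_reply(message)
        transcript.append((speaker.name, message))
        speaker, other = other, speaker  # hand the floor to the other agent
    return transcript

a = ToyAgent("User", lambda m: m + "?")
b = ToyAgent("Assistant", lambda m: m + "!")
chat = two_agent_chat(a, b, "hello", max_turns=3)
print(chat)  # [('User', 'hello'), ('Assistant', 'hello!'), ('User', 'hello!?')]
```

The real framework adds code execution, termination checks, and LLM-generated replies on top of this basic turn-taking structure.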

Group Chat: Multiple Agents

AutoGen shines with group conversations where multiple agents collaborate.

from autogen import GroupChat, GroupChatManager

# Specialized agents
coder = AssistantAgent(
    name="Coder",
    system_message="You write Python code. Only output code blocks.",
    llm_config={"model": "gpt-4"}
)

reviewer = AssistantAgent(
    name="Reviewer",
    system_message="You review code for bugs and improvements.",
    llm_config={"model": "gpt-4"}
)

executor = UserProxyAgent(
    name="Executor",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "workspace", "use_docker": False}
)

# Create group chat
groupchat = GroupChat(
    agents=[coder, reviewer, executor],
    messages=[],
    max_round=10
)

manager = GroupChatManager(
    groupchat=groupchat,
    llm_config={"model": "gpt-4"}  # the manager's LLM picks who speaks next
)

# Kick off the conversation
executor.initiate_chat(
    manager,
    message="Create a web scraper for news headlines."
)

Speaker Selection Strategies

Who speaks next? AutoGen offers several strategies:

Strategy      Description                Use Case
auto          LLM decides next speaker   General conversations
round_robin   Agents take turns          Structured workflows
random        Random selection           Brainstorming
manual        Human selects              Supervised execution

For example, forcing a predictable order:

groupchat = GroupChat(
    agents=[coder, reviewer, executor],
    speaker_selection_method="round_robin",  # Predictable order
    max_round=12
)

Async Conversations

For production, AutoGen supports async execution:

import asyncio
from autogen import AssistantAgent

async def run_agent_conversation():
    agent1 = AssistantAgent(name="Analyst", llm_config={"model": "gpt-4"})
    agent2 = AssistantAgent(name="Strategist", llm_config={"model": "gpt-4"})

    # Async chat
    result = await agent1.a_initiate_chat(
        agent2,
        message="Analyze the market trends for Q1 2025."
    )
    return result

# Run multiple conversations in parallel
# (await must live inside a coroutine, so wrap the gather call)
async def main():
    return await asyncio.gather(
        run_agent_conversation(),
        run_agent_conversation(),
        run_agent_conversation()
    )

results = asyncio.run(main())
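The gather pattern itself is plain asyncio. A self-contained sketch with a hypothetical fake_conversation coroutine standing in for the LLM round-trips, showing that three conversations finish in roughly the time of one:

```python
import asyncio
import time

async def fake_conversation(topic, delay=0.1):
    """Stand-in for an agent conversation; the sleep simulates LLM latency."""
    await asyncio.sleep(delay)
    return f"report: {topic}"

async def run_all():
    # All three coroutines start at once, so total wall time is ~one delay.
    return await asyncio.gather(
        fake_conversation("markets"),
        fake_conversation("competitors"),
        fake_conversation("pricing"),
    )

start = time.perf_counter()
results = asyncio.run(run_all())
elapsed = time.perf_counter() - start
print(results, f"in {elapsed:.2f}s")
```

The same shape applies when the coroutines are real a_initiate_chat calls: independent conversations overlap their API latency instead of waiting in line.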

Termination Conditions

Control when conversations end:

assistant = AssistantAgent(
    name="Assistant",
    is_termination_msg=lambda x: "TASK_COMPLETE" in x.get("content", ""),
    max_consecutive_auto_reply=5  # Safety limit
)
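The is_termination_msg predicate receives the last message as a dict. A standalone sketch of a slightly more defensive check than the lambda above, since the "content" key can be missing or None (for example on pure tool-call turns):

```python
def is_done(message):
    """Termination check for a TASK_COMPLETE sentinel.

    Defensive: "content" may be absent or None, so fall back to an
    empty string before the substring test instead of crashing.
    """
    content = message.get("content") or ""
    return "TASK_COMPLETE" in content

print(is_done({"content": "All tests pass. TASK_COMPLETE"}))  # True
print(is_done({"content": None}))                             # False
```

Pairing a sentinel check like this with max_consecutive_auto_reply gives two independent ways out of a runaway conversation.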

AutoGen vs Other Frameworks

Feature          AutoGen                 CrewAI                  LangGraph
Model            Conversations           Tasks/Roles             State graphs
Learning curve   Low                     Medium                  High
Flexibility      High                    Medium                  Very high
Debugging        Chat logs               Task traces             State history
Best for         Dynamic collaboration   Structured workflows    Complex logic

Nerd Note: AutoGen's conversation model makes it easy to prototype but can lead to verbose, expensive runs. Set strict termination conditions.

Next module: Handling sessions that outlast a single API call.
