AI Agent Orchestration Mastery: Build, Deploy & Sell Autonomous Systems
Lab

Architecture Design Exercise: Agent System for Content Creators

30 min
Advanced
Unlimited free attempts

Instructions

In this lab, you will design a complete agent orchestration architecture for a content creator who produces blog posts, social media updates, and newsletters. You will think through every layer of the system, from model selection and workflow automation to framework evaluation and risk mitigation.

This exercise mirrors what real AI engineers do before writing a single line of code. A well-designed architecture prevents costly rewrites and ensures your agent system is reliable, scalable, and safe.

Scenario

Your client is a professional content creator who:

  • Publishes 3 blog posts per week across multiple platforms
  • Manages social media accounts on Twitter/X, LinkedIn, and Instagram
  • Sends a weekly newsletter to 10,000 subscribers
  • Needs help with research, drafting, editing, SEO optimization, and scheduling
  • Wants human review before anything is published
  • Has a monthly budget of $500 for AI tools and APIs

Your job is to design an agent orchestration system that automates their content pipeline while keeping the human in the loop for quality control.

Step 1: System Architecture (architecture.yaml)

Design the overall system architecture by defining:

Agent Definition:

  • Give your agent system a name
  • Define its primary purpose and scope
  • Specify what it should and should NOT do (boundaries)

Model Selection:

  • Choose specific LLM models for different tasks (e.g., research, drafting, editing, SEO)
  • Justify why different tasks might need different models (cost, speed, quality)
  • Include fallback models in case the primary model is unavailable

Communication Channels:

  • Define how the user interacts with the agent (CLI, web dashboard, chat, email)
  • Define how agents communicate with each other (if using multiple agents)
  • Specify notification channels for completed tasks

Tools and Integrations:

  • List external tools the agent needs (search APIs, CMS APIs, social media APIs, email services)
  • For each tool, specify: name, purpose, authentication method, and rate limits
  • Include at least 5 tool integrations
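To make the expected shape concrete, here is a minimal, abbreviated sketch of what `architecture.yaml` might look like. All names, model IDs, and limits below are illustrative placeholders, not prescribed values; your file should be more complete and use models you have actually evaluated:

```yaml
agent:
  name: ContentPilot            # illustrative name
  purpose: Automate research, drafting, SEO, and scheduling for one creator
  boundaries:
    - never publishes without explicit human approval
    - does not handle billing or subscriber personal data

models:
  drafting:
    primary: gpt-4o             # example model IDs; substitute current offerings
    fallback: claude-sonnet
    rationale: creative writing quality matters most here
  seo:
    primary: gpt-4o-mini
    fallback: gemini-flash
    rationale: cheap and fast is fine for keyword extraction

channels:
  user: web dashboard
  agent_to_agent: shared task queue
  notifications: [email, slack]

tools:
  - name: search_api            # one of at least five integrations
    purpose: topic research
    auth: api_key
    rate_limit: 100 req/day
```

Note how each model choice carries a one-line rationale; the grader looks for that justification, not just the model name.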

Step 2: Content Creation Workflow (workflow.yaml)

Design the end-to-end content creation workflow:

Workflow Steps:

  • Define each step from initial idea to published content
  • For each step, specify: step name, description, agent or human responsibility, inputs, outputs, estimated duration
  • Include at least 6 steps in the workflow

Automation vs. Human Review:

  • Clearly mark which steps are fully automated, which require human approval, and which are hybrid
  • Define the approval gates (what triggers human review)
  • Specify what happens if the human rejects the output at any gate

Error Handling:

  • Define what happens when a step fails (retry logic, fallback behavior, notification)
  • Specify timeout values for each step
  • Define escalation paths for critical failures
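One workflow step in `workflow.yaml` might be structured like the hypothetical sketch below, with the automation level, error handling, and rejection path all declared per step (field names are a suggestion, not a required schema):

```yaml
steps:
  - name: topic_research
    description: Gather sources and draft a research brief
    responsible: agent          # fully automated
    inputs: [content_calendar]
    outputs: [research_brief]
    duration: 10m
    timeout: 15m
    retries: 2
    on_failure: notify_owner

  - name: draft_review          # approval gate
    description: Human reviews and approves the draft
    responsible: human
    inputs: [draft_post]
    outputs: [approved_draft]
    on_reject: return_to_drafting_with_feedback
    escalation: notify_owner_if_pending_24h
```

Your full file needs at least six such steps covering the pipeline from idea to publication.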

Step 3: Framework Evaluation (evaluation.yaml)

Compare three real agent orchestration frameworks to determine which best fits this use case:

Frameworks to Compare:

  • Choose 3 frameworks from: LangGraph, CrewAI, OpenAI Agents SDK, AutoGen, Semantic Kernel, or any other well-known framework
  • For each framework, provide a brief description

Evaluation Criteria:

  • Model support (which LLM providers are supported)
  • Tool ecosystem (built-in tools, custom tool support, MCP compatibility)
  • Memory and state management (conversation history, long-term memory, checkpointing)
  • Security features (sandboxing, input validation, output filtering)
  • Community and ecosystem (documentation quality, community size, update frequency)
  • Cost and licensing (open source vs. commercial, hosting costs)

Scoring:

  • Score each framework 1-5 on each criterion
  • Provide a brief justification for each score
  • Select a winner with a written rationale
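A compact way to lay out `evaluation.yaml` is one entry per framework with per-criterion scores and justifications, plus a recommendation block. The scores and winner below are placeholders for illustration only; your conclusions should come from your own reading of the official docs:

```yaml
frameworks:
  - name: LangGraph
    description: Graph-based orchestration with built-in checkpointing
    scores:
      model_support:
        score: 4                # example score; justify from documentation
        justification: supports major providers via LangChain integrations
      memory_and_state:
        score: 5
        justification: first-class checkpointing fits approval-gate workflows
  # ... two more frameworks, each scored on all criteria

recommendation:
  winner: LangGraph             # your analysis may well differ
  runner_up: CrewAI
  rationale: checkpointing maps cleanly onto human review gates
```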

Step 4: Risk Assessment (risk-assessment.yaml)

Identify risks and define mitigation strategies:

Risk Categories:

  • Hallucination risks (agent generates false information in published content)
  • Context window limits (long blog posts exceed model context)
  • Tool failures (API rate limits, service outages, authentication expiry)
  • Cost overruns (unexpected API usage spikes)
  • Security risks (prompt injection, data leakage, unauthorized publishing)

For Each Risk:

  • Describe the risk scenario
  • Rate severity (low, medium, high, critical)
  • Rate likelihood (unlikely, possible, likely, very likely)
  • Define at least one mitigation strategy
  • Define a monitoring approach (how you detect this risk in production)

Include at least 6 distinct risks across the categories above.
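Each entry in `risk-assessment.yaml` should pair a concrete scenario with a mitigation and a detection signal. A sketch of two entries (thresholds and details are made-up examples):

```yaml
risks:
  - category: hallucination
    scenario: Agent invents a statistic in a drafted blog post
    severity: high
    likelihood: likely
    mitigation: require inline source links; human review gate before publish
    monitoring: flag drafts containing uncited numeric claims for spot-checks

  - category: cost
    scenario: A retry loop causes an overnight API spend spike
    severity: medium
    likelihood: possible
    mitigation: hard budget cap at $500/month enforced in the provider dashboard
    monitoring: alert when daily spend exceeds $25
```

Notice that the monitoring line answers the 3 AM question from the hints: how would you find out this risk materialized before a human happens to look?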

What to Submit

The editor has 4 file sections with TODO comments. Replace each TODO with your YAML content. The AI grader will evaluate each section against the rubric.

Hints

  • For model selection, consider cost-quality tradeoffs: use cheaper models for simple tasks (SEO keywords, scheduling) and premium models for creative writing
  • For workflow design, think about the content pipeline as a DAG (directed acyclic graph) where some steps can run in parallel
  • For framework evaluation, focus on real capabilities you can verify from official documentation
  • For risk assessment, think about what happens at 3 AM when no human is watching the system

Grading Rubric

architecture.yaml (30 points): defines a named agent with clear purpose and boundaries, selects appropriate models for at least 3 task types with fallbacks and justifications, specifies user interaction and agent-to-agent communication channels, and includes at least 5 tool integrations with name, purpose, API, auth method, and rate limits.
workflow.yaml (25 points): defines at least 6 sequential workflow steps with name, description, responsible party, inputs, outputs, and estimated duration for each step. Clearly distinguishes automated steps from human review gates. Includes error handling with retry counts, fallback behavior, and timeout values. Defines approval gates with rejection handling.
evaluation.yaml (25 points): compares 3 real, well-known agent frameworks across at least 5 evaluation criteria. Each framework is scored 1-5 on each criterion with a written justification. Includes a final recommendation identifying the winner with a clear rationale and a runner-up alternative.
risk-assessment.yaml (20 points): identifies at least 6 distinct risks across multiple categories (hallucination, context window, tool failure, cost, security). Each risk includes a descriptive scenario, severity rating, likelihood rating, at least one mitigation strategy with implementation details, and a monitoring approach with alert thresholds.

