Agentic Design Patterns
Reflection Pattern
3 min read
The reflection pattern is one of the most powerful techniques in agentic AI. It enables an agent to critique its own output and iteratively improve it—much like how a human writer revises their draft.
What is Reflection?
Reflection is when an AI agent evaluates its own response, identifies weaknesses, and generates an improved version. This self-critique loop can dramatically improve output quality.
```python
# Basic reflection pattern.
# `llm.generate` stands in for whatever LLM client you use (a thin wrapper
# around your provider's chat-completion call works fine).
def reflect_and_improve(initial_response: str, task: str) -> str:
    # Step 1: ask the model to critique its own output.
    critique = llm.generate(f"""
    Review this response for the task: {task}
    Response: {initial_response}

    Identify:
    1. What's missing?
    2. What could be clearer?
    3. Any factual issues?
    """)

    # Step 2: regenerate, conditioning on the critique.
    improved = llm.generate(f"""
    Original: {initial_response}
    Critique: {critique}

    Generate an improved response addressing the critique.
    """)
    return improved
```
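For the sketch above to run, `llm` has to be something concrete. Here is one possible wrapper, assuming the OpenAI Python SDK; the class, model name, and method name are illustrative choices, not part of the pattern itself:

```python
# Minimal sketch of an `llm.generate` helper, assuming the OpenAI Python SDK.
# The wrapper class and default model are illustrative placeholders.
from openai import OpenAI

class LLM:
    def __init__(self, model: str = "gpt-4o-mini"):
        self.client = OpenAI()  # reads OPENAI_API_KEY from the environment
        self.model = model

    def generate(self, prompt: str) -> str:
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

llm = LLM()
```

Any other client works just as well; the pattern only needs a function that maps a prompt string to a completion string.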
Why Reflection Works
Research shows that LLMs often produce better results when asked to:
- Self-evaluate before finalizing
- Consider alternatives they might have missed
- Check for errors in reasoning or facts
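A lightweight version of the first point is a single self-check pass before an answer is returned. The sketch below reuses the `llm` wrapper from earlier; the PASS/REVISE convention is an arbitrary choice, not a standard:

```python
# Single-pass self-check before finalizing (sketch; prompt wording is arbitrary).
def self_check(response: str, task: str) -> bool:
    verdict = llm.generate(f"""
    Task: {task}
    Draft answer: {response}

    Check the draft for reasoning errors, factual mistakes, and missed alternatives.
    Reply with exactly "PASS" if it is sound, otherwise "REVISE".
    """)
    return verdict.strip().upper().startswith("PASS")
```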
Real-World Applications
| Use Case | Reflection Approach |
|---|---|
| Code generation | Review for bugs, edge cases, best practices |
| Content writing | Check tone, accuracy, completeness |
| Analysis | Verify assumptions, consider counterarguments |
| Problem solving | Validate logic, explore alternatives |
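The critique prompt is the main thing that changes across these use cases. As one sketch, a version specialized for the code-generation row might look like this (the criteria simply restate the table entry):

```python
# Sketch: a critique prompt tailored to code generation; the criteria mirror
# the table above and can be swapped per use case.
CODE_CRITIQUE_PROMPT = """
Review the following code written for this task: {task}

Code:
{code}

Critique it for:
1. Bugs and incorrect logic
2. Unhandled edge cases
3. Deviations from language best practices
"""

def critique_code(code: str, task: str) -> str:
    return llm.generate(CODE_CRITIQUE_PROMPT.format(task=task, code=code))
```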
Key Considerations
- Iteration limit: cap the number of reflection cycles; two to three is usually enough before returns diminish
- Stopping criteria: define when a response is "good enough" so the loop can exit early
- Cost awareness: each cycle adds at least two extra LLM calls (critique plus rewrite), so token cost and latency grow with every iteration (a bounded loop combining all three is sketched below)
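Putting the three considerations together, a bounded reflection loop might look like the following sketch; the 1-to-10 scoring prompt and the threshold of 8 are illustrative assumptions, not fixed parts of the pattern:

```python
# Bounded reflection loop: capped iterations, an explicit stopping criterion,
# and a visible cost of extra LLM calls per cycle.
def reflection_loop(task: str, max_cycles: int = 3, good_enough: int = 8) -> str:
    response = llm.generate(f"Complete this task: {task}")

    for _ in range(max_cycles):
        score_text = llm.generate(f"""
        Task: {task}
        Response: {response}

        Rate this response from 1 to 10 for completeness, clarity, and accuracy.
        Reply with the number only.
        """)
        try:
            score = int(score_text.strip())
        except ValueError:
            score = 0  # unparseable score: assume another pass is needed

        if score >= good_enough:
            break  # stopping criterion met
        response = reflect_and_improve(response, task)  # one more critique + rewrite

    return response
```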
Next, we'll explore how agents extend their capabilities through tool use.