# Agent Frameworks Overview: LangChain & LangGraph
LangChain has become the most popular framework for building LLM applications, with over 120,000 GitHub stars and a massive community. LangGraph extends it with stateful, multi-actor workflows.
## LangChain: The Foundation
LangChain provides building blocks for LLM applications:
```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate

# Initialize the model
llm = ChatOpenAI(model="gpt-4")

# Define tools (assumed to be defined elsewhere)
tools = [search_tool, calculator_tool]

# Create the agent
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful research assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)

# Run the agent
result = executor.invoke({"input": "What's the population of Tokyo?"})
```
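The `search_tool` and `calculator_tool` above are placeholders. As a rough sketch, the calculator's underlying function might look like the following (in practice you would wrap it with the `@tool` decorator from `langchain_core.tools`; the safe-eval helper here is illustrative, not LangChain API):

```python
import ast
import operator

# Whitelist of arithmetic operators the calculator will evaluate
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def _eval(node):
    """Recursively evaluate a whitelisted arithmetic AST node."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    raise ValueError("unsupported expression")

def calculator(expression: str) -> str:
    """Evaluate a basic arithmetic expression and return the result as text."""
    return str(_eval(ast.parse(expression, mode="eval").body))
```

Parsing with `ast` instead of calling `eval` keeps the tool from executing arbitrary code the model might emit.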
## Key LangChain Concepts
| Concept | Description | Use Case |
|---|---|---|
| Chains | Sequences of operations | Linear workflows |
| Agents | Dynamic decision-makers | Complex tasks |
| Tools | External capabilities | Search, compute, APIs |
| Memory | Conversation history | Chatbots, assistants |
| Retrievers | Document fetching | RAG applications |
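The Memory row above amounts to carrying recent turns back into the prompt on each call. A minimal sketch of the simplest policy, a fixed sliding window (illustrative only, not one of LangChain's memory classes):

```python
from collections import deque

class SlidingWindowMemory:
    """Keep only the last `k` chat turns, the simplest memory policy."""

    def __init__(self, k: int = 4):
        self.turns = deque(maxlen=k)  # old turns fall off automatically

    def add(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content})

    def as_messages(self) -> list:
        """Return the retained turns in the order they occurred."""
        return list(self.turns)
```

More sophisticated policies (summarizing old turns, retrieving relevant ones) trade this simplicity for longer effective context.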
## LangGraph: Stateful Workflows
LangGraph adds graph-based state management for complex agent systems:
```python
from langgraph.graph import StateGraph, END
from typing import TypedDict

class AgentState(TypedDict):
    messages: list
    current_step: str

# Define the graph
workflow = StateGraph(AgentState)

# Add nodes (processing steps; the node functions are defined elsewhere)
workflow.add_node("research", research_node)
workflow.add_node("analyze", analyze_node)
workflow.add_node("write", write_node)

# Add edges (transitions); the graph needs an explicit entry point
workflow.set_entry_point("research")
workflow.add_edge("research", "analyze")
workflow.add_conditional_edges(
    "analyze",
    should_continue,
    {"continue": "write", "research_more": "research"},
)
workflow.add_edge("write", END)

# Compile and run
app = workflow.compile()
result = app.invoke({"messages": [], "current_step": "start"})
```
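The `should_continue` router above is an ordinary function: it inspects the current state and returns one of the edge labels from the mapping. A hypothetical sketch (the three-message cap is an arbitrary choice for illustration):

```python
def should_continue(state: dict) -> str:
    """Decide which edge to follow after the analyze node.

    Loops back to research until enough material has accumulated,
    then proceeds to writing. The cap of 3 messages is arbitrary.
    """
    if len(state["messages"]) < 3:
        return "research_more"
    return "continue"
```

Because the router is plain Python, you can unit-test routing logic without running the graph or calling a model.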
## When to Use LangChain/LangGraph
✅ Good fit:
- Building production LLM applications
- Need extensive tool integrations
- Want a large community and ecosystem
- Require complex state management
⚠️ Consider alternatives when:
- You need minimal dependencies
- Building simple, one-off scripts
- Your team is unfamiliar with the framework's abstractions
## LangSmith: Observability
LangSmith provides tracing and debugging:
```python
# Enable tracing
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = "my-agent-project"
# All agent runs are now traced automatically
```
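Tracing only works when a LangSmith API key is configured; a small guard helper (hypothetical, not part of LangSmith) makes that dependency explicit instead of failing silently:

```python
import os

def enable_tracing(project: str) -> bool:
    """Enable LangSmith tracing only when an API key is configured.

    Returns True if tracing was turned on, False otherwise.
    """
    if not os.environ.get("LANGCHAIN_API_KEY"):
        return False
    os.environ["LANGCHAIN_TRACING_V2"] = "true"
    os.environ["LANGCHAIN_PROJECT"] = project
    return True
```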
Next, we'll explore CrewAI's role-based approach to multi-agent systems.