A comprehensive guide to building AI agents with LangGraph, from basic primitives to advanced multi-agent systems.
This tutorial teaches you how to build production-ready AI agents using LangGraph, a framework for stateful, multi-actor applications with LLMs.
What you'll learn:
- Core LangGraph concepts (graphs, nodes, edges, state)
- 5 essential agent patterns used in production
- When to apply each pattern
- How to combine patterns for complex workflows
Tech Stack:
- LangGraph - Agent orchestration framework
- Ollama - Local LLM serving (Qwen2.5:7b recommended)
- LangChain - LLM abstraction layer
```bash
brew install ollama
ollama pull qwen2.5:7b
ollama serve
```

```bash
# From any lesson directory
pip install -r requirements.txt
```

```bash
cd lesson_1_react_agent
python calculator_agent.py
```

File: lesson_0_primitives/basic_graph.py
What it teaches:
- Nodes (functions that process state)
- Edges (connections between nodes)
- State (data flowing through the graph)
- Conditional routing
- Checkpointing (conversation memory)
Key takeaway:
Everything in LangGraph is a directed graph. Nodes compute, edges route, state flows.
Example:
```python
graph = StateGraph(State)
graph.add_node("process", process_node)
graph.add_edge(START, "process")
graph.add_edge("process", END)
app = graph.compile()
```

File: lesson_1_react_agent/calculator_agent.py
Pattern: Reasoning + Acting in a loop
What it teaches:
- Tool calling with LLMs
- ReAct loop: Think → Act → Observe → Think...
- Tool binding and execution
- Message-based state
Flow:
User: "What's 5 + 3?"
→ Agent: [calls add(5, 3)]
→ Tool: 8
→ Agent: "The answer is 8"
Key takeaway:
ReAct agents decide which tools to call based on reasoning, then observe results before deciding next action.
Best for:
- Interactive tasks requiring tools
- When the agent needs to adapt based on tool results
- Conversational interfaces
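A minimal sketch of this ReAct loop (the `add` tool and the model choice are illustrative, not the exact contents of calculator_agent.py):

```python
from typing import Annotated, TypedDict

from langchain_core.tools import tool
from langchain_ollama import ChatOllama
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode


@tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


class State(TypedDict):
    messages: Annotated[list, add_messages]  # message history, appended to


tools = [add]
llm = ChatOllama(model="qwen2.5:7b", temperature=0).bind_tools(tools)


def agent(state: State) -> dict:
    # Think: the LLM decides whether to answer or request a tool call
    return {"messages": [llm.invoke(state["messages"])]}


def route(state: State) -> str:
    # Act if the last message requested a tool, otherwise finish
    return "tools" if state["messages"][-1].tool_calls else END


graph = StateGraph(State)
graph.add_node("agent", agent)
graph.add_node("tools", ToolNode(tools))  # Observe: runs the requested tool
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", route)
graph.add_edge("tools", "agent")  # loop back so the agent can reason again
app = graph.compile()

app.invoke({"messages": [("user", "What's 5 + 3?")]})
```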
File: lesson_2_multi_step/research_agent.py
Pattern: State-based orchestration with streaming
What it teaches:
- Rich state (beyond messages)
- State accumulation with reducers (`operator.add`)
- Streaming for real-time updates
- State-in-prompt pattern (vs message history)
- Multiple decision points in workflow
Flow:
Agent (no data) → Search → Agent (has data) → Analyze → Agent (has analysis) → Answer
Key differences from Lesson 1:
- State has structured fields: `search_results`, `analysis`, `final_answer`
- Prompts change based on state (staged workflow)
- Uses `.stream()` to show progress in real-time
Key takeaway:
Rich state + staged prompts = more controlled multi-step workflows.
Best for:
- Research and analysis tasks
- When you need to accumulate information across steps
- Showing users progress in real-time
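A sketch of the rich-state idea (field names follow the description above; the node bodies are placeholders rather than the lesson's actual implementation):

```python
import operator
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END


class ResearchState(TypedDict):
    query: str
    search_results: Annotated[list[str], operator.add]  # accumulated across steps
    analysis: str
    final_answer: str


def search(state: ResearchState) -> dict:
    # Placeholder: call a real search tool here
    return {"search_results": [f"result for {state['query']}"]}


def analyze(state: ResearchState) -> dict:
    # Prompt the LLM with the accumulated results (state-in-prompt pattern)
    return {"analysis": f"analysis of {len(state['search_results'])} results"}


def answer(state: ResearchState) -> dict:
    return {"final_answer": f"answer based on: {state['analysis']}"}


graph = StateGraph(ResearchState)
graph.add_node("search", search)
graph.add_node("analyze", analyze)
graph.add_node("answer", answer)
graph.add_edge(START, "search")
graph.add_edge("search", "analyze")
graph.add_edge("analyze", "answer")
graph.add_edge("answer", END)
app = graph.compile()

# Stream node-by-node updates instead of waiting for the final state
initial = {"query": "LangGraph", "search_results": [], "analysis": "", "final_answer": ""}
for chunk in app.stream(initial):
    print(chunk)
```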
File: lesson_3_planning/planning_agent.py
Pattern: Plan-then-Execute
What it teaches:
- Separating planning from execution
- LLM generates structured plans (JSON)
- Python executes plan steps (not LLM tool calls)
- Sequential step execution
- Plan as state
Flow:
1. Planner: Creates 4-step plan
2. Executor: Executes step 1
3. Executor: Executes step 2
4. Executor: Executes step 3
5. Executor: Executes step 4
6. Finalizer: Summarizes results
Key difference from ReAct:
ReAct: Think → Act → Think → Act → Think...
Plan-Execute: Plan all steps → Execute → Execute → Execute...
Key takeaway:
Planning upfront is more predictable than reactive decision-making. Better for deterministic workflows.
Best for:
- Tasks with clear sequences
- When you want to show users the plan before execution
- Repeatable workflows
- When execution should be deterministic
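A compressed sketch of the plan/execute split (the JSON plan format, prompts, and helper names are assumptions, not the lesson's exact schema):

```python
import json
from typing import TypedDict

from langchain_ollama import ChatOllama
from langgraph.graph import StateGraph, START, END

llm = ChatOllama(model="qwen2.5:7b", temperature=0)


class PlanState(TypedDict):
    task: str
    plan: list[str]     # ordered steps produced by the planner
    results: list[str]  # one entry per executed step
    summary: str


def planner(state: PlanState) -> dict:
    # One LLM call produces the whole plan as JSON, e.g. ["step 1", "step 2", ...]
    # (assumes the model returns valid JSON; real code should validate)
    prompt = f'Return a JSON list of steps to accomplish: {state["task"]}'
    return {"plan": json.loads(llm.invoke(prompt).content), "results": []}


def executor(state: PlanState) -> dict:
    # Python, not the LLM, walks the plan one step at a time
    step = state["plan"][len(state["results"])]
    return {"results": state["results"] + [f"did: {step}"]}


def finalizer(state: PlanState) -> dict:
    return {"summary": "; ".join(state["results"])}


def more_steps(state: PlanState) -> str:
    return "executor" if len(state["results"]) < len(state["plan"]) else "finalizer"


graph = StateGraph(PlanState)
graph.add_node("planner", planner)
graph.add_node("executor", executor)
graph.add_node("finalizer", finalizer)
graph.add_edge(START, "planner")
graph.add_edge("planner", "executor")
graph.add_conditional_edges("executor", more_steps)
graph.add_edge("finalizer", END)
app = graph.compile()
```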
File: lesson_4_supervisor/supervisor_agent.py
Pattern: Specialized agents coordinated by supervisor
What it teaches:
- Multiple specialized agents (Research, Code, Writer)
- Dynamic routing based on task
- Agent collaboration
- Result synthesis
- Conditional agent selection
Flow:
Supervisor: "This needs CODE agent"
↓
Code Agent: Writes implementation
↓
Supervisor: "Now needs WRITER for docs"
↓
Writer Agent: Creates documentation
↓
Aggregator: Combines code + docs
Key takeaway:
Specialization > monolithic agents. Each agent masters one domain.
Best for:
- Complex tasks requiring different expertise
- When task composition varies by input
- Building modular, maintainable systems
- Scaling agent capabilities
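A routing skeleton for this pattern (agent bodies are placeholders and the Writer agent is omitted for brevity; in the lesson each specialist calls the LLM):

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class SupervisorState(TypedDict):
    task: str
    next_agent: str   # set by the supervisor each turn
    outputs: dict     # results keyed by agent name


def supervisor(state: SupervisorState) -> dict:
    # Placeholder routing: an LLM would decide which specialist is needed next
    if "research" not in state["outputs"]:
        return {"next_agent": "research"}
    if "code" not in state["outputs"]:
        return {"next_agent": "code"}
    return {"next_agent": "aggregate"}


def research_agent(state: SupervisorState) -> dict:
    return {"outputs": {**state["outputs"], "research": "findings..."}}


def code_agent(state: SupervisorState) -> dict:
    return {"outputs": {**state["outputs"], "code": "implementation..."}}


def aggregator(state: SupervisorState) -> dict:
    return {"outputs": {**state["outputs"], "final": "combined result"}}


graph = StateGraph(SupervisorState)
graph.add_node("supervisor", supervisor)
graph.add_node("research", research_agent)
graph.add_node("code", code_agent)
graph.add_node("aggregate", aggregator)
graph.add_edge(START, "supervisor")
graph.add_conditional_edges("supervisor", lambda s: s["next_agent"])
graph.add_edge("research", "supervisor")  # specialists report back to the supervisor
graph.add_edge("code", "supervisor")
graph.add_edge("aggregate", END)
app = graph.compile()

app.invoke({"task": "build and document a parser", "next_agent": "", "outputs": {}})
```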
File: lesson_5_reflection/reflection_agent.py
Pattern: Self-critique and iterative improvement
What it teaches:
- Self-evaluation loop
- Critique-based revision
- Quality criteria enforcement
- Iteration limits (cost control)
- Autonomous quality improvement
Flow:
1. Generator: Creates draft
2. Critic: "NEEDS IMPROVEMENT - lacks examples"
3. Generator: Revises with examples
4. Critic: "SATISFACTORY"
5. END
Key takeaway:
Agents can improve their own output through self-critique, with no human feedback needed.
Best for:
- Writing and creative tasks
- Code review and improvement
- When quality is subjective
- Autonomous quality assurance
Not ideal for:
- Objective right/wrong answers
- Time-critical operations
- When costs must be minimized
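A sketch of the generate/critique loop with an iteration cap (the prompts and the "SATISFACTORY" convention are assumptions):

```python
from typing import TypedDict

from langchain_ollama import ChatOllama
from langgraph.graph import StateGraph, START, END

llm = ChatOllama(model="qwen2.5:7b", temperature=0.7)


class ReflectionState(TypedDict):
    task: str
    draft: str
    critique: str
    iterations: int


def generator(state: ReflectionState) -> dict:
    prompt = (
        f"Task: {state['task']}\n"
        f"Previous critique: {state.get('critique', '')}\n"
        "Write or revise the draft."
    )
    return {"draft": llm.invoke(prompt).content, "iterations": state["iterations"] + 1}


def critic(state: ReflectionState) -> dict:
    prompt = f"Critique this draft. Reply SATISFACTORY if no changes are needed:\n{state['draft']}"
    return {"critique": llm.invoke(prompt).content}


def should_revise(state: ReflectionState) -> str:
    # Stop on approval, or after 3 rounds to bound cost
    if "SATISFACTORY" in state["critique"] or state["iterations"] >= 3:
        return END
    return "generator"


graph = StateGraph(ReflectionState)
graph.add_node("generator", generator)
graph.add_node("critic", critic)
graph.add_edge(START, "generator")
graph.add_edge("generator", "critic")
graph.add_conditional_edges("critic", should_revise)
app = graph.compile()

app.invoke({"task": "Write a short essay on graphs", "draft": "", "critique": "", "iterations": 0})
```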
| Pattern | Complexity | LLM Calls | Predictability | Best For |
|---|---|---|---|---|
| ReAct | Low | 2-5 per task | Medium | Tool-using agents, chat |
| Multi-Step | Medium | 3-6 per task | Medium-High | Research, data gathering |
| Planning | Medium | 2 (plan + summary) | High | Workflows, automation |
| Supervisor | High | 3-8 per task | Medium | Complex multi-domain tasks |
| Reflection | Medium | 4-8 per task | Low-Medium | Writing, creative work |
Use ReAct (Lesson 1) when:
- ✅ Agent needs to use tools dynamically
- ✅ Decisions depend on tool results
- ✅ Conversational interaction
- ✅ Simple tool-calling workflows
Example: Customer support bot, data query assistant
Use Multi-Step (Lesson 2) when:
- ✅ Need to accumulate information across steps
- ✅ Want to show users progress
- ✅ Workflow has clear stages (search → analyze → respond)
- ✅ State is more complex than just messages
Example: Research assistant, data analysis pipeline
Use Planning (Lesson 3) when:
- ✅ Task has a predictable sequence
- ✅ Want to show plan to user before execution
- ✅ Execution should be deterministic
- ✅ You control execution logic (not LLM)
Example: Task automation, workflow orchestration, ETL pipelines
Use Supervisor (Lesson 4) when:
- ✅ Task requires different types of expertise
- ✅ Want modular, maintainable architecture
- ✅ Task composition varies by input
- ✅ Need to scale capabilities independently
Example: Software development assistant (research + code + docs), content creation platform
Use Reflection (Lesson 5) when:
- ✅ Quality is subjective and iterative
- ✅ Output benefits from self-critique
- ✅ Can afford multiple LLM calls
- ✅ No external feedback available
Example: Essay writing, code optimization, creative content
Patterns can be composed for powerful hybrid systems:
ReAct + Reflection: Agent uses tools AND critiques its own responses.
Think → Act → Observe → Critique → Revise → Answer
Planning + Supervisor: Supervisor creates a plan, then routes each step to specialist agents.
Supervisor plans → Research agent (step 1) → Code agent (step 2) → Writer agent (step 3)
Multi-Step + Reflection: Research workflow with quality assurance.
Search → Analyze → Draft → Critique → Revise → Finalize
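For example, ReAct + Reflection mostly changes the routing: instead of ending after the agent answers, the answer is sent to a critic first. The names below are illustrative; the agent, tools, and critic nodes would be built as in Lessons 1 and 5:

```python
from typing import Annotated, TypedDict

from langgraph.graph import END
from langgraph.graph.message import add_messages


class HybridState(TypedDict):
    messages: Annotated[list, add_messages]
    critique: str


def route_after_agent(state: HybridState) -> str:
    # Act if a tool was requested; otherwise send the draft answer to the critic
    return "tools" if state["messages"][-1].tool_calls else "critic"


def route_after_critic(state: HybridState) -> str:
    # Accept the answer, or loop back to the agent for a revision
    return END if "SATISFACTORY" in state["critique"] else "agent"


# Wiring, with agent, tools, and critic nodes defined as in Lessons 1 and 5:
# graph.add_conditional_edges("agent", route_after_agent)
# graph.add_conditional_edges("critic", route_after_critic)
```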
- Message-based: List of messages (Lesson 1)
- Rich state: Structured fields with reducers (Lesson 2-5)
- State flow: Data flows through nodes, gets merged
```python
Annotated[list[str], operator.add]  # Accumulates items
```

Tells LangGraph how to merge state updates.
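A rich state definition then looks like this (field names are illustrative):

```python
import operator
from typing import Annotated, TypedDict

from langgraph.graph.message import add_messages


class AgentState(TypedDict):
    messages: Annotated[list, add_messages]              # messages are appended, keyed by id
    search_results: Annotated[list[str], operator.add]   # lists from each node are concatenated
    final_answer: str                                     # plain field: last write wins
```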
- Static edges: Always go A → B
- Conditional edges: Function decides next node
- Dynamic routing: Decision based on state
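A conditional edge is just a function of the state that returns the next node's name (node and field names here are illustrative, continuing the graph fragments above):

```python
def route(state: AgentState) -> str:
    # Inspect the state and return the name of the next node
    return "answer" if state.get("final_answer") else "search"

graph.add_conditional_edges("agent", route)
```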
```python
from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
app = graph.compile(checkpointer=checkpointer)
```

Enables conversation memory and state persistence.
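Conversations are then keyed by a thread ID passed in the config (the thread name and messages are examples):

```python
config = {"configurable": {"thread_id": "user-42"}}
app.invoke({"messages": [("user", "Hi, I'm Alex")]}, config)
app.invoke({"messages": [("user", "What's my name?")]}, config)  # same thread, so earlier turns are remembered
```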
```python
for chunk in app.stream(state):
    print(chunk)  # process real-time updates as each node finishes
```

Show users progress as the agent works.
- Tool binding: `llm.bind_tools(tools)` for function calling
- Temperature: 0 for consistency, 0.7+ for creativity
- Prompts: System messages guide agent behavior
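For example, with Ollama (the `add` tool is a placeholder):

```python
from langchain_core.tools import tool
from langchain_ollama import ChatOllama


@tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


llm = ChatOllama(model="qwen2.5:7b", temperature=0)  # temperature 0 for consistent output
llm_with_tools = llm.bind_tools([add])               # the model can now emit add(...) tool calls
```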
- Run all lessons to see each pattern in action
- Modify prompts to see how agent behavior changes
- Combine patterns to build more sophisticated systems
- Build your own agent using these patterns as templates
- LangGraph Docs: https://langchain-ai.github.io/langgraph/
- LangChain Docs: https://python.langchain.com/
- Ollama: https://ollama.ai/
```bash
ollama serve
```

Use qwen2.5:7b or mistral instead of llama3.1:8b.
Use smaller quantized models or close other applications
Add iteration limits in routing logic:
```python
if step_count >= 5:
    return "end"
```

Happy agent building! 🚀