LangGraph Tutorial: 6 Core Agent Patterns

A comprehensive guide to building AI agents with LangGraph, from basic primitives to advanced multi-agent systems.


Overview

This tutorial teaches you how to build production-ready AI agents using LangGraph, a framework for stateful, multi-actor applications with LLMs.

What you'll learn:

  • Core LangGraph concepts (graphs, nodes, edges, state)
  • 5 essential agent patterns used in production
  • When to apply each pattern
  • How to combine patterns for complex workflows

Tech Stack:

  • LangGraph - Agent orchestration framework
  • Ollama - Local LLM serving (Qwen2.5:7b recommended)
  • LangChain - LLM abstraction layer

Setup

1. Install Ollama

brew install ollama  # macOS (Homebrew)

2. Pull a model (Qwen2.5 recommended for tool calling)

ollama pull qwen2.5:7b

3. Start Ollama server

ollama serve

4. Install Python dependencies

# From any lesson directory
pip install -r requirements.txt

5. Run a lesson

cd lesson_1_react_agent
python calculator_agent.py

Lessons

Lesson 0: Primitives

File: lesson_0_primitives/basic_graph.py

What it teaches:

  • Nodes (functions that process state)
  • Edges (connections between nodes)
  • State (data flowing through the graph)
  • Conditional routing
  • Checkpointing (conversation memory)

Key takeaway:

Everything in LangGraph is a directed graph. Nodes compute, edges route, state flows.

Example:

from langgraph.graph import StateGraph, START, END

graph = StateGraph(State)
graph.add_node("process", process_node)
graph.add_edge(START, "process")
graph.add_edge("process", END)
app = graph.compile()
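
The lesson also covers conditional routing and checkpointing; here is a self-contained sketch of both (node, field, and thread names are illustrative, not the lesson's exact code):

from typing import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    text: str
    done: bool

def process_node(state: State) -> dict:
    # Nodes return partial state updates; LangGraph merges them into the state
    return {"text": state["text"].upper(), "done": True}

def route(state: State) -> str:
    # Conditional routing: the returned key decides which node runs next
    return "finish" if state["done"] else "process"

graph = StateGraph(State)
graph.add_node("process", process_node)
graph.add_edge(START, "process")
graph.add_conditional_edges("process", route, {"process": "process", "finish": END})

# Checkpointing: MemorySaver persists state per thread_id, giving conversation memory
app = graph.compile(checkpointer=MemorySaver())
result = app.invoke(
    {"text": "hello", "done": False},
    config={"configurable": {"thread_id": "demo"}},
)
print(result["text"])  # -> "HELLO"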

Lesson 1: ReAct Agent

File: lesson_1_react_agent/calculator_agent.py

Pattern: Reasoning + Acting in a loop

What it teaches:

  • Tool calling with LLMs
  • ReAct loop: Think → Act → Observe → Think...
  • Tool binding and execution
  • Message-based state

Flow:

User: "What's 5 + 3?"
→ Agent: [calls add(5, 3)]
→ Tool: 8
→ Agent: "The answer is 8"

Key takeaway:

ReAct agents decide which tools to call based on reasoning, then observe the results before deciding on the next action.

Best for:

  • Interactive tasks requiring tools
  • When the agent needs to adapt based on tool results
  • Conversational interfaces
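
A minimal sketch of the pattern using LangGraph's prebuilt ReAct helper (the tool and prompt are illustrative, not the lesson's exact code):

from langchain_core.tools import tool
from langchain_ollama import ChatOllama
from langgraph.prebuilt import create_react_agent

@tool
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

# Qwen2.5 supports tool calling through Ollama
llm = ChatOllama(model="qwen2.5:7b", temperature=0)

# create_react_agent wires up the Think -> Act -> Observe loop and message state
agent = create_react_agent(llm, [add])

result = agent.invoke({"messages": [("user", "What's 5 + 3?")]})
print(result["messages"][-1].content)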

Lesson 2: Multi-Step Reasoning

File: lesson_2_multi_step/research_agent.py

Pattern: State-based orchestration with streaming

What it teaches:

  • Rich state (beyond messages)
  • State accumulation with reducers (operator.add)
  • Streaming for real-time updates
  • State-in-prompt pattern (vs message history)
  • Multiple decision points in workflow

Flow:

Agent (no data) → Search → Agent (has data) → Analyze → Agent (has analysis) → Answer

Key differences from Lesson 1:

  • State has structured fields: search_results, analysis, final_answer
  • Prompts change based on state (staged workflow)
  • Uses .stream() to show progress in real-time

Key takeaway:

Rich state + staged prompts = more controlled multi-step workflows.

Best for:

  • Research and analysis tasks
  • When you need to accumulate information across steps
  • Showing users progress in real-time
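
A sketch of the rich-state and streaming ideas (field names follow the lesson's description; `app` is assumed to be a compiled graph over this state):

import operator
from typing import Annotated, TypedDict

class ResearchState(TypedDict):
    query: str
    search_results: Annotated[list[str], operator.add]  # accumulated across steps
    analysis: str                                        # overwritten (default reducer)
    final_answer: str

# .stream() yields each node's update as it finishes, so progress can be shown live
initial = {"query": "LangGraph streaming", "search_results": [], "analysis": "", "final_answer": ""}
for chunk in app.stream(initial, stream_mode="updates"):
    for node_name, update in chunk.items():
        print(f"[{node_name}] {update}")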

Lesson 3: Planning Agent

File: lesson_3_planning/planning_agent.py

Pattern: Plan-then-Execute

What it teaches:

  • Separating planning from execution
  • LLM generates structured plans (JSON)
  • Python executes plan steps (not LLM tool calls)
  • Sequential step execution
  • Plan as state

Flow:

1. Planner: Creates 4-step plan
2. Executor: Executes step 1
3. Executor: Executes step 2
4. Executor: Executes step 3
5. Executor: Executes step 4
6. Finalizer: Summarizes results

Key difference from ReAct:

ReAct:        Think → Act → Think → Act → Think...
Plan-Execute: Plan all steps → Execute → Execute → Execute...

Key takeaway:

Planning upfront is more predictable than reactive decision-making. Better for deterministic workflows.

Best for:

  • Tasks with clear sequences
  • When you want to show users the plan before execution
  • Repeatable workflows
  • When execution should be deterministic
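
A sketch of plan-as-state with Python-driven execution (the hard-coded plan stands in for the planner LLM's JSON output; names are illustrative):

import json
import operator
from typing import Annotated, TypedDict

class PlanState(TypedDict):
    task: str
    plan: list[str]                              # produced once by the planner
    results: Annotated[list[str], operator.add]  # one entry per executed step
    current_step: int

def planner(state: PlanState) -> dict:
    # In the lesson, an LLM returns the plan as JSON; hard-coded here for brevity
    plan = json.loads('["gather inputs", "transform data", "validate output", "write report"]')
    return {"plan": plan, "current_step": 0}

def executor(state: PlanState) -> dict:
    step = state["plan"][state["current_step"]]
    return {"results": [f"completed: {step}"], "current_step": state["current_step"] + 1}

def route(state: PlanState) -> str:
    # Python, not the LLM, decides whether to keep executing or finish
    return "executor" if state["current_step"] < len(state["plan"]) else "finalizer"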

Lesson 4: Supervisor Multi-Agent

File: lesson_4_supervisor/supervisor_agent.py

Pattern: Specialized agents coordinated by supervisor

What it teaches:

  • Multiple specialized agents (Research, Code, Writer)
  • Dynamic routing based on task
  • Agent collaboration
  • Result synthesis
  • Conditional agent selection

Flow:

Supervisor: "This needs CODE agent"
  ↓
Code Agent: Writes implementation
  ↓
Supervisor: "Now needs WRITER for docs"
  ↓
Writer Agent: Creates documentation
  ↓
Aggregator: Combines code + docs

Key takeaway:

Specialization > monolithic agents. Each agent masters one domain.

Best for:

  • Complex tasks requiring different expertise
  • When task composition varies by input
  • Building modular, maintainable systems
  • Scaling agent capabilities
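
A sketch of supervisor-style routing (a keyword check stands in for the supervisor LLM's decision; agent names are illustrative):

from typing import Literal, TypedDict

class SupervisorState(TypedDict):
    task: str
    next_agent: str

def supervisor(state: SupervisorState) -> dict:
    # In the lesson an LLM picks the specialist; a keyword check stands in here
    if "implement" in state["task"].lower():
        return {"next_agent": "code_agent"}
    return {"next_agent": "writer_agent"}

def route_to_agent(state: SupervisorState) -> Literal["code_agent", "writer_agent"]:
    return state["next_agent"]

# Wiring (assumes code_agent / writer_agent nodes are already added to the graph):
# graph.add_conditional_edges("supervisor", route_to_agent,
#                             {"code_agent": "code_agent", "writer_agent": "writer_agent"})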

Lesson 5: Reflection Agent

File: lesson_5_reflection/reflection_agent.py

Pattern: Self-critique and iterative improvement

What it teaches:

  • Self-evaluation loop
  • Critique-based revision
  • Quality criteria enforcement
  • Iteration limits (cost control)
  • Autonomous quality improvement

Flow:

1. Generator: Creates draft
2. Critic: "NEEDS IMPROVEMENT - lacks examples"
3. Generator: Revises with examples
4. Critic: "SATISFACTORY"
5. END

Key takeaway:

Agents can improve their own output through self-critique, with no human feedback needed.

Best for:

  • Writing and creative tasks
  • Code review and improvement
  • When quality is subjective
  • Autonomous quality assurance

Not ideal for:

  • Objective right/wrong answers
  • Time-critical operations
  • When costs must be minimized
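
A sketch of the critique loop with an iteration cap (the criteria string and limit are illustrative):

from typing import TypedDict

class ReflectionState(TypedDict):
    draft: str
    critique: str
    iterations: int

MAX_ITERATIONS = 3  # cost control: cap the number of revision passes

def should_continue(state: ReflectionState) -> str:
    # Stop when the critic is satisfied or the iteration budget is spent
    if "SATISFACTORY" in state["critique"] or state["iterations"] >= MAX_ITERATIONS:
        return "end"
    return "generator"

# graph.add_conditional_edges("critic", should_continue,
#                             {"generator": "generator", "end": END})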

Pattern Comparison

Pattern      Complexity  LLM Calls           Predictability  Best For
ReAct        Low         2-5 per task        Medium          Tool-using agents, chat
Multi-Step   Medium      3-6 per task        Medium-High     Research, data gathering
Planning     Medium      2 (plan + summary)  High            Workflows, automation
Supervisor   High        3-8 per task        Medium          Complex multi-domain tasks
Reflection   Medium      4-8 per task        Low-Medium      Writing, creative work

When to Use Each Pattern

Use ReAct when:

  • ✅ Agent needs to use tools dynamically
  • ✅ Decisions depend on tool results
  • ✅ Conversational interaction
  • ✅ Simple tool-calling workflows

Example: Customer support bot, data query assistant


Use Multi-Step Reasoning when:

  • ✅ Need to accumulate information across steps
  • ✅ Want to show users progress
  • ✅ Workflow has clear stages (search → analyze → respond)
  • ✅ State is more complex than just messages

Example: Research assistant, data analysis pipeline


Use Planning when:

  • ✅ Task has predictable sequence
  • ✅ Want to show plan to user before execution
  • ✅ Execution should be deterministic
  • ✅ You control execution logic (not LLM)

Example: Task automation, workflow orchestration, ETL pipelines


Use Supervisor Multi-Agent when:

  • ✅ Task requires different types of expertise
  • ✅ Want modular, maintainable architecture
  • ✅ Task composition varies by input
  • ✅ Need to scale capabilities independently

Example: Software development assistant (research + code + docs), content creation platform


Use Reflection when:

  • ✅ Quality is subjective and iterative
  • ✅ Output benefits from self-critique
  • ✅ Can afford multiple LLM calls
  • ✅ No external feedback available

Example: Essay writing, code optimization, creative content


Combining Patterns

Patterns can be composed for powerful hybrid systems:

ReAct + Reflection

Agent uses tools AND critiques its own responses.

Think → Act → Observe → Critique → Revise → Answer

Planning + Supervisor

Supervisor creates plan, then routes each step to specialist agents.

Supervisor plans → Research agent (step 1) → Code agent (step 2) → Writer agent (step 3)

Multi-Step + Reflection

Research workflow with quality assurance.

Search → Analyze → Draft → Critique → Revise → Finalize

Key Concepts Summary

State

  • Message-based: List of messages (Lesson 1)
  • Rich state: Structured fields with reducers (Lesson 2-5)
  • State flow: Data flows through nodes, gets merged

Reducers

Annotated[list[str], operator.add]  # Accumulates items

Tells LangGraph how to merge state updates.
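
For example, two nodes that each return a `notes` update get concatenated rather than overwritten (illustrative field names):

import operator
from typing import Annotated, TypedDict

class State(TypedDict):
    notes: Annotated[list[str], operator.add]  # merged by list concatenation
    answer: str                                # merged by overwrite (default)

# Node A returns {"notes": ["found source"]}, node B returns {"notes": ["checked math"]}
# -> merged state: {"notes": ["found source", "checked math"], ...}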

Routing

  • Static edges: Always go A → B
  • Conditional edges: Function decides next node
  • Dynamic routing: Decision based on state
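
Side by side (assumes an existing StateGraph `graph` with search, analyze, and answer nodes, and an `analysis` state field):

# Static edge: always search -> analyze
graph.add_edge("search", "analyze")

# Conditional edge: a function inspects state and names the next node
def route(state: State) -> str:
    return "answer" if state["analysis"] else "search"

graph.add_conditional_edges("analyze", route, {"search": "search", "answer": "answer"})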

Checkpointing

from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
app = graph.compile(checkpointer=checkpointer)

Enables conversation memory and state persistence.
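
With a checkpointer, each thread_id keeps its own history (sketch; assumes a compiled message-based graph `app`):

config = {"configurable": {"thread_id": "user-42"}}
app.invoke({"messages": [("user", "Hi, I'm Alice")]}, config=config)
app.invoke({"messages": [("user", "What's my name?")]}, config=config)  # remembers Alice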

Streaming

for chunk in app.stream(state):
    print(chunk)  # each chunk is one node's real-time update

Show users progress as agent works.

LLM Integration

  • Tool binding: llm.bind_tools(tools) for function calling
  • Temperature: 0 for consistency, 0.7+ for creativity
  • Prompts: System messages guide agent behavior
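
A sketch of tool binding with Ollama (the `add` tool is assumed to be a `@tool` function like the one in Lesson 1):

from langchain_ollama import ChatOllama

llm = ChatOllama(model="qwen2.5:7b", temperature=0)  # 0 = favor consistent answers
llm_with_tools = llm.bind_tools([add])

response = llm_with_tools.invoke("What is 5 + 3?")
print(response.tool_calls)  # e.g. [{"name": "add", "args": {"a": 5, "b": 3}, ...}]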

Next Steps

  1. Run all lessons to see each pattern in action
  2. Modify prompts to see how agent behavior changes
  3. Combine patterns to build more sophisticated systems
  4. Build your own agent using these patterns as templates


Troubleshooting

"Ollama not responding"

ollama serve

"Model doesn't support tool calling"

Use qwen2.5:7b or mistral instead of llama3.1:8b

"Out of memory"

Use smaller quantized models or close other applications

"Agent loops infinitely"

Add iteration limits in routing logic:

def route(state) -> str:
    if state["step_count"] >= 5:
        return "end"
    return "continue"

Happy agent building! 🚀
