This template showcases a ReAct agent implemented with LangGraph that works seamlessly with LangGraph Studio. ReAct agents are simple, prototypical agents that can be flexibly extended with many tools.
The core logic, defined in `src/react_agent/graph.py`, demonstrates a flexible ReAct agent that iteratively reasons about user queries and executes actions. The template features a modular architecture with shared components in `src/common/`, MCP integration for external documentation sources, and a comprehensive test suite.
⭐ Star this repo if you find it helpful! Visit our webinar series for tutorials and advanced LangGraph development techniques.
- Qwen Models: Complete Qwen series support via the `langchain-qwq` package, including Qwen-Plus, Qwen-Turbo, QwQ-32B, and QvQ-72B
- OpenAI: GPT-4o, GPT-4o-mini, etc.
- OpenAI-Compatible: Any provider supporting OpenAI API format via custom API key and base URL
- Anthropic: Claude 4 Sonnet, Claude 3.5 Haiku, etc.
- Model Context Protocol (MCP): Dynamic external tool loading at runtime
- DeepWiki MCP Server: Optional MCP tools for GitHub repository documentation access and Q&A capabilities
- Web Search: Built-in traditional LangChain tools (Tavily) for internet information retrieval
> **Note**
> New in LangGraph v0.6: LangGraph Context replaces the traditional `config['configurable']` pattern. Runtime context is now passed to the `context` argument of `invoke`/`stream`, providing a cleaner and more intuitive way to configure your agents.
- Context-Driven Configuration: Runtime context passed via the `context` parameter instead of `config['configurable']`
- Simplified API: Cleaner interface for passing runtime configuration to your agents
- Backward Compatibility: Gradual migration path from the old configuration pattern
- Local Development Server: Complete LangGraph Platform development environment
- 70+ Test Cases: Unit, integration, and end-to-end coverage, including complete DeepWiki tool loading and execution tests
- ReAct Loop Validation: Ensures proper tool-model interactions
The ReAct agent:
1. Takes a user query as input
2. Reasons about the query and decides on an action
3. Executes the chosen action using available tools
4. Observes the result of the action
5. Repeats steps 2-4 until it can provide a final answer
The agent comes with web search capabilities and optional DeepWiki MCP documentation tools, but can be easily extended with custom tools to suit various use cases.
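To make the loop concrete, here is a minimal, hedged sketch of how such a graph can be wired with LangGraph's prebuilt helpers. It is an illustration, not this template's exact code in `src/react_agent/graph.py`; the model string and empty tool list are placeholders:

```python
from langchain.chat_models import init_chat_model
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

model = init_chat_model("openai:gpt-4o-mini")  # placeholder provider:model string
tools = []  # e.g. the Tavily search tool would go here

def call_model(state: MessagesState):
    # Reason: the model decides whether to answer or request a tool call
    return {"messages": [model.bind_tools(tools).invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("call_model", call_model)
builder.add_node("tools", ToolNode(tools))
builder.add_edge(START, "call_model")
# Act: route to the tool node when a tool was requested, otherwise finish
builder.add_conditional_edges("call_model", tools_condition)
builder.add_edge("tools", "call_model")  # Observe the result, then reason again
graph = builder.compile()
```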
See these LangSmith traces to understand how the agent works in practice:
- DeepWiki Documentation Query - Shows agent using DeepWiki MCP tools to query GitHub repository documentation
- Web Search Query - Demonstrates Tavily web search integration and reasoning loop
- Install uv (if not already installed):

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

- Clone the repository:

```bash
git clone https://github.com/webup/langgraph-up-react.git
cd langgraph-up-react
```

- Install dependencies (including dev dependencies):

```bash
uv sync --dev
```

- Copy the example environment file:

```bash
cp .env.example .env
```
- Edit the `.env` file and define the required API keys:

```bash
# Required: Web search functionality
TAVILY_API_KEY=your-tavily-api-key

# Required: If using Qwen models (default)
DASHSCOPE_API_KEY=your-dashscope-api-key

# Optional: OpenAI model service platform keys
OPENAI_API_KEY=your-openai-api-key

# Optional: If using OpenAI-compatible service platforms
OPENAI_API_BASE=your-openai-base-url

# Optional: If using Anthropic models
ANTHROPIC_API_KEY=your-anthropic-api-key

# Optional: Regional API support for Qwen models
REGION=international # or 'prc' for China mainland (default)

# Optional: Always enable DeepWiki documentation tools
ENABLE_DEEPWIKI=true
```
The primary search tool uses Tavily; create an API key on the Tavily website.
The template uses `qwen:qwen-flash` as the default model, defined in `src/common/context.py`. You can configure different models in three ways:
- Runtime Context (recommended for programmatic usage)
- Environment Variables
- LangGraph Studio Assistant Configuration
```bash
OPENAI_API_KEY=your-openai-api-key
```

Get your API key: OpenAI Platform

```bash
ANTHROPIC_API_KEY=your-anthropic-api-key
```

Get your API key: Anthropic Console

```bash
DASHSCOPE_API_KEY=your-dashscope-api-key
REGION=international # or 'prc' for China mainland
```

Get your API key: DashScope Console

```bash
OPENAI_API_KEY=your-provider-api-key
OPENAI_API_BASE=https://your-provider-api-base-url/v1
```

Supports SiliconFlow, Together AI, Groq, and other OpenAI-compatible APIs.
Extend the agent's capabilities by adding tools in `src/common/tools.py`:

```python
async def my_custom_tool(input: str) -> str:
    """Your custom tool implementation."""
    return "Tool result"

# Add to the tools list in get_tools()
```
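If you want the tool to carry a schema and description for the model, LangChain's `@tool` decorator can wrap the same function; a minimal sketch (the function name is the placeholder from above):

```python
from langchain_core.tools import tool

@tool
async def my_custom_tool(input: str) -> str:
    """Describe what the tool does so the model knows when to call it."""
    return "Tool result"
```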
Integrate external MCP servers for additional capabilities:
- Configure MCP Server in `src/common/mcp.py`:

```python
MCP_SERVERS = {
    "deepwiki": {
        "url": "https://mcp.deepwiki.com/mcp",
        "transport": "streamable_http",
    },
    # Example: Context7 for library documentation
    "context7": {
        "url": "https://mcp.context7.com/sse",
        "transport": "sse",
    },
}
```
- Add Server Function:

```python
from typing import Any, Callable, List

async def get_context7_tools() -> List[Callable[..., Any]]:
    """Get Context7 documentation tools."""
    return await get_mcp_tools("context7")
```
- Enable in Context: add a context flag and load the tools in the `get_tools()` function:

```python
# In src/common/tools.py
if context.enable_context7:
    tools.extend(await get_context7_tools())
```
> **Tip**
> Context7 Example: The MCP configuration already includes a commented Context7 server setup. Context7 provides up-to-date library documentation and examples; simply uncomment the configuration and add the context flag to enable it.
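For reference, a helper like `get_mcp_tools` can be built on the `langchain-mcp-adapters` package; this is a hedged sketch under that assumption, and the actual implementation in `src/common/mcp.py` may differ:

```python
from langchain_mcp_adapters.client import MultiServerMCPClient

async def get_mcp_tools(server_name: str):
    # Connect only to the requested server from the MCP_SERVERS registry
    client = MultiServerMCPClient({server_name: MCP_SERVERS[server_name]})
    return await client.get_tools()
```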
Use the new LangGraph v0.6 context parameter to configure models at runtime:
```python
from common.context import Context
from react_agent import graph

# Configure model via context
result = await graph.ainvoke(
    {"messages": [("user", "Your question here")]},
    context=Context(model="openai:gpt-4o-mini"),
)
```
Set the `MODEL` environment variable in your `.env` file:

```bash
MODEL=anthropic:claude-3.5-haiku
```
In LangGraph Studio, configure models through Assistant management. Create or update assistants with different model configurations for easy switching between setups.
Model String Format: `provider:model-name` (follows the LangChain `init_chat_model` naming convention)

```python
# OpenAI models
"openai:gpt-4o-mini"
"openai:gpt-4o"

# Qwen models (with regional support)
"qwen:qwen-flash"       # Default model
"qwen:qwen-plus"        # Balanced performance
"qwen:qwq-32b-preview"  # Reasoning model
"qwen:qvq-72b-preview"  # Multimodal reasoning

# Anthropic models
"anthropic:claude-4-sonnet"
"anthropic:claude-3.5-haiku"
```
Update the system prompt in `src/common/prompts.py` or via the LangGraph Studio interface.
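As an illustration only (the constant name and placeholder are assumptions, not necessarily what `src/common/prompts.py` defines), a system prompt is typically a plain string template:

```python
# Hypothetical example; check src/common/prompts.py for the real prompt
SYSTEM_PROMPT = """You are a helpful research assistant.
Use the available tools to answer the user's question.
System time: {system_time}"""
```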
Adjust the ReAct loop in `src/react_agent/graph.py`:
- Add new graph nodes
- Modify conditional routing logic
- Add interrupts or human-in-the-loop interactions (see the sketch below)
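For instance, pausing for human review before each tool call can be a one-line change at compile time; a sketch, assuming the graph is compiled from a builder with a node named `"tools"`:

```python
# Pause execution for human approval before every tool call (sketch)
graph = builder.compile(interrupt_before=["tools"])
```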
Runtime configuration is managed in `src/common/context.py` (see the sketch after this list):
- Model selection
- Search result limits
- Tool toggles
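As an illustration (field names and defaults are assumptions; see `src/common/context.py` for the real definitions), such a context object might look like:

```python
from dataclasses import dataclass

@dataclass
class Context:
    # Illustrative fields only
    model: str = "qwen:qwen-flash"  # provider:model-name string
    max_search_results: int = 5     # cap on web search results
    enable_deepwiki: bool = False   # toggle DeepWiki MCP tools
```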
```bash
make dev               # Start LangGraph development server (uv run langgraph dev --no-browser)
make dev_ui            # Start with LangGraph Studio Web UI in browser

make test              # Run unit and integration tests (default)
make test_unit         # Run unit tests only
make test_integration  # Run integration tests
make test_e2e          # Run end-to-end tests (requires running server)
make test_all          # Run all test suites

make lint              # Run linters (ruff + mypy)
make format            # Auto-format code
make lint_tests        # Lint test files only
```
- Hot Reload: Local changes automatically applied
- State Editing: Edit past state and rerun from specific points
- Thread Management: Create new threads or continue existing conversations
- LangSmith Integration: Detailed tracing and collaboration
The template uses a modular architecture:
- `src/react_agent/`: Core agent graph and state management
- `src/common/`: Shared components (context, models, tools, prompts, MCP integration)
- `tests/`: Comprehensive test suite with fixtures and MCP integration coverage
- `langgraph.json`: Basic LangGraph agent configuration settings
Key components:
- `src/common/mcp.py`: MCP client management for external documentation sources
- Dynamic tool loading: Runtime tool selection based on context configuration
- Context system: Centralized configuration with environment variable support
This structure supports multiple agents and easy component reuse across different implementations.
- ROADMAP.md - Current milestones and future plans
- Issues & PRs Welcome - Help us improve by raising issues or submitting pull requests
- Built with Claude Code - This template is actively developed using Claude Code
We encourage community contributions! Whether it's:
- Reporting bugs or suggesting features
- Adding new tools or model integrations
- Improving documentation
- Sharing your use cases and templates
Check out our roadmap to see what we're working on next and how you can contribute.
- LangGraph Documentation - Framework guides and examples
- LangSmith - Tracing and collaboration platform
- ReAct Paper - Original research on reasoning and acting
- Claude Code - AI-powered development environment