A production-ready template for building enterprise-grade FastAPI applications with LangGraph integration, featuring streaming conversational AI agents with thread-based persistence and conversation management.
- Thread-Based Conversations: Ready-to-use persistent conversation threads with unique identifiers
- Token-Level Streaming: Real-time response streaming with WebSocket-like experience
- Enterprise-Grade Error Handling: Comprehensive error handling and logging infrastructure
- Thread Management: Complete thread lifecycle management (create, retrieve, clear, archive)
- RESTful API: Well-designed API structure following enterprise patterns
- Type Safety: Full type annotations with Pydantic models throughout
This template implements thread-based persistence using LangGraph's MemorySaver pattern:
- Thread Checkpoints: Each conversation thread maintains its state
- Conversation Continuity: Threads can be resumed exactly where they left off
- Message History: Complete conversation history retrieval for any thread
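The persistence model above can be illustrated with a minimal, framework-free sketch (plain Python with hypothetical names; in the template, LangGraph's `MemorySaver` plays the role of this store):

```python
import uuid

class ThreadStore:
    """Toy stand-in for a checkpointer: conversation state keyed by thread_id."""

    def __init__(self):
        self._threads = {}  # thread_id -> list of messages

    def append(self, thread_id, role, content):
        # The first write creates the thread's checkpoint
        self._threads.setdefault(thread_id, []).append(
            {"role": role, "content": content}
        )

    def history(self, thread_id):
        # Resuming a thread reloads its complete message history
        return list(self._threads.get(thread_id, []))

store = ThreadStore()
tid = str(uuid.uuid4())  # unique thread identifier
store.append(tid, "user", "Hello")
store.append(tid, "assistant", "Hi! How can I help?")
```

Because state is keyed by `thread_id`, any later request carrying the same identifier resumes exactly where the conversation left off.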
```mermaid
graph TD
    A[User Request] --> B[Thread Manager]
    B --> C[Load Thread State]
    C --> D[Agent Processing]
    D --> E[Generate Response]
    E --> F[Save Thread State]
    F --> G[Stream Response]
```
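The flow above can be sketched end to end with a toy generator (illustrative only; the template implements these stages with FastAPI and LangGraph, and the stub model here is not real agent logic):

```python
def stream_chat(threads, thread_id, user_input):
    """Toy version of the pipeline: load state, process, save, stream."""
    history = threads.setdefault(thread_id, [])   # Thread Manager / Load Thread State
    history.append(("user", user_input))          # Agent Processing
    reply = f"Echo: {user_input}"                 # Generate Response (stub model)
    history.append(("assistant", reply))          # Save Thread State
    yield from reply.split()                      # Stream Response, token by token

threads = {}
tokens = list(stream_chat(threads, "thread-1", "hello world"))
# tokens == ["Echo:", "hello", "world"]
```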
```bash
# Create a new repository from this template
# Click "Use this template" on GitHub, or clone directly:
git clone https://github.com/posgnu/fastapi-langraph.git your-project-name
cd your-project-name

# Rename the project
# Update pyproject.toml, README.md, and other references as needed

# Create environment file
cp .env.example .env
# Add your OPENAI_API_KEY and customize other settings

# Install dependencies
poetry install
```

Edit `fastapi_langraph/agent/` to implement your specific agent logic:
```python
# Example: Customize the agent behavior
# In fastapi_langraph/agent/tools/
# Add your custom tools and capabilities
```

Start the development server:

```bash
poetry run uvicorn fastapi_langraph.main:app --reload
```

The server will start at http://localhost:8000 with:
- API Documentation: http://localhost:8000/docs
- Service Info: http://localhost:8000/info
```bash
# Use the included chat client to test your agent
poetry run python scripts/chat.py
```

This template provides a complete API structure that you can build upon:
`POST /chat/stream` - Stream conversational responses with thread persistence.
Request Format:

```json
{
  "input": "Your user message here",
  "thread_id": "optional-thread-id",
  "session_metadata": {
    "client": "your_app",
    "timestamp": "2024-01-01T00:00:00Z"
  }
}
```

Response Stream Format:

```json
{"type": "metadata", "thread_id": "abc-123", "metadata": {"thread_created": true}}
{"type": "token", "content": "Response", "thread_id": "abc-123"}
{"type": "metadata", "thread_id": "abc-123", "metadata": {"status": "completed"}}
```

- `GET /threads/{thread_id}/history` - Retrieve conversation history
- `DELETE /threads/{thread_id}` - Delete thread and history
- `PUT /threads/{thread_id}/clear` - Clear thread messages
- `PUT /threads/{thread_id}/archive` - Archive thread (customize as needed)
- `GET /info` - Service information and capabilities
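Each line of the response stream is a standalone JSON object, so clients can parse it line by line. A minimal consumer sketch (stdlib only; a real client would iterate over the HTTP response body rather than this hard-coded list):

```python
import json

# Sample stream lines, shaped like the Response Stream Format above
raw_stream = [
    '{"type": "metadata", "thread_id": "abc-123", "metadata": {"thread_created": true}}',
    '{"type": "token", "content": "Hello", "thread_id": "abc-123"}',
    '{"type": "token", "content": " there", "thread_id": "abc-123"}',
    '{"type": "metadata", "thread_id": "abc-123", "metadata": {"status": "completed"}}',
]

text = ""
for line in raw_stream:
    event = json.loads(line)
    if event["type"] == "token":
        text += event["content"]  # accumulate streamed tokens in order
# text == "Hello there"
```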
Replace the default agent implementation in `fastapi_langraph/agent/`:

```python
# Example: Custom agent with your tools
# (create_react_agent is LangGraph's prebuilt agent constructor;
#  your_tools is a placeholder for your own module)
from langgraph.prebuilt import create_react_agent

from your_tools import CustomTool1, CustomTool2

def create_your_agent():
    # Wire your tools into a prebuilt agent; model strings like this
    # require a recent langgraph/langchain release
    agent = create_react_agent(
        "openai:gpt-4o-mini",
        tools=[CustomTool1(), CustomTool2()],
    )
    return agent
```

Extend the tools directory:

```bash
# Add your tools in fastapi_langraph/agent/tools/
touch fastapi_langraph/agent/tools/your_custom_tool.py
```

Replace the in-memory storage with your preferred database:
```python
# For SQLite (requires langgraph-checkpoint-sqlite)
import sqlite3

from langgraph.checkpoint.sqlite import SqliteSaver

checkpointer = SqliteSaver(sqlite3.connect("./your_app.db", check_same_thread=False))

# For PostgreSQL (requires langgraph-checkpoint-postgres);
# from_conn_string is a context manager in recent releases
from langgraph.checkpoint.postgres import PostgresSaver

with PostgresSaver.from_conn_string("postgresql://...") as checkpointer:
    checkpointer.setup()
```

Add your middleware in `fastapi_langraph/middleware/`:
```python
# Example: Authentication, rate limiting, etc.
from fastapi import Request

@app.middleware("http")
async def your_custom_middleware(request: Request, call_next):
    # Runs before and after every request
    response = await call_next(request)
    return response
```

Extend or modify the API routers in `fastapi_langraph/api/routers/`:
```python
# Add your custom endpoints
@router.post("/your-custom-endpoint")
async def your_endpoint():
    # Your endpoint logic
    return {"status": "ok"}
```

Update `fastapi_langraph/core/config.py`:

```python
from pydantic_settings import BaseSettings  # pydantic v2

class Settings(BaseSettings):
    project_name: str = "Your Project Name"
    description: str = "Your project description"
    # Add your custom settings
```

Production-ready persistence options:

- SQLite for small-scale deployments
- PostgreSQL for enterprise scale
- Redis for caching

Add your monitoring solution: Prometheus, DataDog, New Relic, etc.

```bash
# Add your tests in tests/
poetry run pytest tests/
```

```bash
# Test your customized endpoints
curl -X POST "http://localhost:8000/chat/stream" \
  -H "Content-Type: application/json" \
  -d '{"input": "Test your agent", "thread_id": "test-thread"}' \
  --no-buffer
```

```bash
# Use the included interactive client
poetry run python scripts/chat.py
```

```
your-project/
├── fastapi_langraph/      # Main application package
│   ├── agent/             # Agent implementation (customize this)
│   │   └── tools/         # Agent tools (add your tools here)
│   ├── api/               # API layer
│   │   └── routers/       # API routers (extend as needed)
│   ├── core/              # Core configuration and utilities
│   └── middleware/        # Custom middleware
├── scripts/               # Utility scripts
├── tests/                 # Test suite (add your tests)
├── pyproject.toml         # Dependencies (update for your project)
└── README.md              # This file (customize for your project)
```
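The environment variables below drive configuration at startup. A hedged sketch of the loading pattern (stdlib only, hypothetical helper names; the template itself uses Pydantic's `BaseSettings` for validation):

```python
import os
from dataclasses import dataclass, field

def _env(name, default):
    # Defer the lookup so each Settings() instance reads the current environment
    return lambda: os.getenv(name, default)

@dataclass
class Settings:
    # Defaults mirror the sample .env; real env vars override them
    project_name: str = field(default_factory=_env("PROJECT_NAME", "Your-Project-Name"))
    log_level: str = field(default_factory=_env("LOG_LEVEL", "INFO"))
    max_conversation_length: int = 20

    def __post_init__(self):
        self.max_conversation_length = int(
            os.getenv("MAX_CONVERSATION_LENGTH", self.max_conversation_length)
        )

os.environ["LOG_LEVEL"] = "DEBUG"  # simulate a deployment override
settings = Settings()
```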
```bash
# Required
OPENAI_API_KEY=your-openai-api-key

# Customize these for your project
PROJECT_NAME=Your-Project-Name
DESCRIPTION=Your project description
LOG_LEVEL=INFO
MAX_CONVERSATION_LENGTH=20
```

Customize the agent in `fastapi_langraph/core/config.py`:

```python
AGENT_CONFIG = {
    "model": "gpt-4o-mini",  # Choose your model
    "temperature": 0.1,  # Adjust for your use case
    "max_context_messages": 20,
    "streaming": True,
}
```

- Clone/Fork this template
- Customize the agent logic for your use case
- Add your specific tools and capabilities
- Configure your persistence backend
- Deploy to your preferred platform
- Monitor and iterate
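One practical consequence of the `max_context_messages` / `MAX_CONVERSATION_LENGTH` settings above is trimming the history before each model call; a possible sketch (hypothetical helper, not the template's actual code):

```python
def trim_context(messages, max_context_messages=20):
    """Keep the system prompt (if any) plus the most recent messages."""
    if messages and messages[0]["role"] == "system":
        head, tail = messages[:1], messages[1:]
    else:
        head, tail = [], messages
    keep = max(max_context_messages - len(head), 0)
    return head + (tail[-keep:] if keep else [])

msgs = [{"role": "system", "content": "You are helpful."}] + [
    {"role": "user", "content": str(i)} for i in range(30)
]
trimmed = trim_context(msgs, max_context_messages=5)
# trimmed keeps the system prompt plus the 4 most recent user messages
```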
Improvements to this template are welcome:
- Fork the template repository
- Create a feature branch
- Add comprehensive tests
- Follow the existing code style
- Submit a pull request
MIT License - see LICENSE file for details.
- Issues: Submit GitHub issues for template bugs and improvements
- Discussions: Use GitHub Discussions for questions about using this template
⭐ Star this template if it helps you build amazing AI applications!

Built with ❤️ using FastAPI, LangGraph, and OpenAI