# Intelligent FastAPI Documentation Search via Model Context Protocol (MCP)
A standalone MCP server that provides semantic search, topic exploration, and code example extraction for FastAPI documentation. Perfect for AI assistants like Claude, Copilot, and other LLM-powered tools.

## Documentation

- Complete Documentation - Full documentation with table of contents

### Getting Started
- Installation - Set up your environment
- Initial Setup - Populate the cache
- Quick Start - Run your first queries

### Integration Guides
- LLM Quick Start - Integrate with AI assistants
- Claude Desktop - Claude setup guide
- VS Code Setup - VS Code configuration

### Development
- Testing Guide - Run and write tests
- Architecture - Understand the codebase
- Contributing - How to contribute

## Features
- **Semantic Search** - Natural language queries powered by sentence transformers
- **Topic Listing** - Browse all 104 FastAPI documentation topics
- **Topic Retrieval** - Get full content for any specific topic
- **Code Examples** - Extract code snippets with language and syntax highlighting
- **Version Comparison** - Compare features across different FastAPI versions
- **Fast & Local** - Runs entirely on your machine with ChromaDB vector storage
- **Zero Config** - Works out of the box with sensible defaults

## Prerequisites

- Python 3.10 or higher
- uv (recommended) or pip

## Installation

```bash
# Clone the repository
git clone https://github.com/yourusername/mcp-fastapi-docs.git
cd mcp-fastapi-docs

# Install dependencies with uv (recommended)
uv sync

# Or with pip
pip install -e .
```

## Initial Setup

Populate the cache with FastAPI documentation:

```bash
uv run python scripts/populate_cache.py
```

This downloads the FastAPI docs and builds the vector database (takes 2-3 minutes).
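Once the cache is populated, you can optionally confirm that topics were indexed. A minimal sketch, assuming the async `list_topics` helper shown in the Usage section below returns a sequence of topic names:

```python
# Minimal sanity check (sketch): confirm the cache is populated
import asyncio

from mcp_fastapi_docs.server.tools import list_topics


async def main():
    # Assumes list_topics() returns a sequence of topic names
    topics = await list_topics(version="latest")
    print(f"Cached topics: {len(topics)}")  # roughly 104 topics for the latest docs


asyncio.run(main())
```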

## Configuration

Configure your MCP client (Claude Desktop, VS Code, etc.):

```json
{
  "mcpServers": {
    "fastapi-docs": {
      "command": "uv",
      "args": ["run", "mcp-fastapi-docs"],
      "env": {
        "CACHE_DIR": "/absolute/path/to/cache",
        "LOG_LEVEL": "INFO"
      }
    }
  }
}
```

See LLM_QUICK_START.md for detailed integration guides.
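If you want to verify the entry point before wiring up a client, you can launch the server directly with the same command the config uses; with a stdio-based MCP setup it will simply wait for a client to connect, so press Ctrl+C to exit (a sketch, not required for normal use):

```bash
# Launch the server standalone (sketch); expects the cache to be populated first
CACHE_DIR="/absolute/path/to/cache" uv run mcp-fastapi-docs
```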

## Usage

```python
import asyncio

from mcp_fastapi_docs.server.tools import search_docs, list_topics, get_topic


async def main():
    # Search documentation
    results = await search_docs("how to use path parameters", version="latest")

    # List all topics
    topics = await list_topics(version="latest")

    # Get a specific topic
    content = await get_topic("Path Parameters", version="latest")


asyncio.run(main())
```

### Manual Testing

```bash
# Manual testing
uv run python scripts/test_mcp_manual.py

# Interactive CLI
uv run python scripts/test_interactive.py
```

## Available Tools

The MCP server exposes 6 powerful tools:
| Tool | Description | Parameters |
|---|---|---|
| `search_docs` | Semantic search across documentation | `query`, `version`, `max_results` |
| `list_versions` | List all available documentation versions | None |
| `list_topics` | List all available topics | `version` |
| `get_topic` | Get full content for a topic | `topic`, `version` |
| `get_example` | Extract code examples for a feature | `feature`, `version` |
| `compare_versions` | Compare a feature across versions | `feature`, `versions` |

All tools support `version="latest"`, which automatically resolves to the newest available version.
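For instance, the version-aware tools can be combined like this. This is a sketch that assumes `get_example` and `compare_versions` are importable from the same module and follow the same async call style as the usage example above; the exact return types depend on the tool implementations:

```python
import asyncio

# Assumption: these tools live alongside search_docs/list_topics/get_topic
from mcp_fastapi_docs.server.tools import get_example, compare_versions


async def main():
    # Pull code examples for a feature from the newest cached docs
    example = await get_example("dependency injection", version="latest")
    print(example)

    # Compare a feature across versions; "0.100.0" is illustrative only -
    # use list_versions to see which versions are actually cached
    diff = await compare_versions("path parameters", versions=["0.100.0", "latest"])
    print(diff)


asyncio.run(main())
```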

## Project Structure

```
mcp-fastapi-docs/
├── src/mcp_fastapi_docs/
│   ├── core/      # Cache, fetcher, processor
│   ├── models/    # Pydantic models & config
│   ├── search/    # Semantic search engine
│   ├── server/    # MCP server & tool registry
│   └── utils/     # Logging utilities
├── scripts/       # Setup & testing scripts
├── cache/         # Local documentation & embeddings
├── docs/          # Documentation guides
├── tests/         # Pytest test suite
└── examples/      # Usage examples
```

## Testing

```bash
# Run all tests
uv run pytest

# Run with coverage
uv run pytest --cov=src/mcp_fastapi_docs --cov-report=html

# Test specific functionality
uv run python scripts/test_search.py
```

See TESTING.md for a comprehensive testing guide.
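To illustrate what a new test might look like, here is a minimal sketch. It assumes pytest-asyncio (or an equivalent async test setup) is configured for the suite and reuses the `search_docs` signature from the usage example; adapt it to the project's actual fixtures:

```python
import pytest

from mcp_fastapi_docs.server.tools import search_docs


# Sketch only: assumes pytest-asyncio is available for async test support
@pytest.mark.asyncio
async def test_search_docs_returns_results():
    # Assumes the cache has been populated via scripts/populate_cache.py
    results = await search_docs("path parameters", version="latest", max_results=3)
    assert results  # expect at least one hit for a core FastAPI topic
```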

## VS Code Integration

This project includes VS Code MCP server configuration:

- Open the workspace in VS Code
- The MCP server auto-registers via `.vscode/mcp.json`
- Use Copilot or other MCP extensions to query FastAPI docs

See .vscode/README_MCP.md for details.
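As a rough sketch of what that file can contain (the exact schema depends on your VS Code MCP setup, so treat the keys below as an assumption and follow .vscode/README_MCP.md for the real configuration):

```jsonc
// Sketch only - key names may differ; see .vscode/README_MCP.md
{
  "servers": {
    "fastapi-docs": {
      "command": "uv",
      "args": ["run", "mcp-fastapi-docs"]
    }
  }
}
```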

## Learning Template

This project is designed as a learning template for building MCP servers:

- **Self-contained** - All components in one repo
- **Well-documented** - Extensive inline comments & guides
- **Modular design** - Easy to adapt for other frameworks
- **Best practices** - Type hints, async/await, proper error handling

Want to build similar servers for Flask or Django? This codebase serves as a complete reference implementation.

## Additional Resources

- LLM Quick Start - Get started with LLM integration
- LLM Integration Guide - Detailed integration patterns
- Quick Test Guide - Testing walkthrough
- Testing Guide - Comprehensive test documentation
- VS Code Setup - VS Code configuration

## Contributing

Contributions are welcome! This project follows standard open-source practices:

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request

## Troubleshooting

**Issue:** No FastAPI documentation versions available
- **Solution:** Run `uv run python scripts/populate_cache.py`

**Issue:** Import errors or missing dependencies
- **Solution:** Run `uv sync` or `pip install -e .`

**Issue:** MCP server not responding
- **Solution:** Check logs in `cache/logs/mcp-fastapi-docs.log`
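When the server appears unresponsive, tailing the log file mentioned above while reproducing the problem is usually the fastest diagnostic. The commands below are a sketch that assumes the default `cache/` location and standard Python logging level names; adjust the path if you set `CACHE_DIR`:

```bash
# Follow the server log while reproducing the issue (path assumes the default cache dir)
tail -f cache/logs/mcp-fastapi-docs.log

# Optionally raise verbosity via the LOG_LEVEL variable shown in the MCP config
# (assumes standard Python logging level names)
LOG_LEVEL=DEBUG uv run mcp-fastapi-docs
```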

## License

MIT License - see LICENSE for details.

## Acknowledgments

- FastAPI - The amazing framework this serves
- Model Context Protocol - The protocol powering AI integration
- ChromaDB - Vector database for semantic search
- Sentence Transformers - Embedding models

## Roadmap

- Support for multiple FastAPI versions
- Incremental cache updates
- Full-text search alongside semantic search
- Web UI for documentation browsing
- API rate limiting and caching strategies

Built with ❤️ for the FastAPI and AI community