A Model Context Protocol (MCP) server for managing a local SQLite database using FastMCP with a streamable HTTP transport, plus an LLM-powered MCP client that can reason about user questions and automatically decide which database tools to call.
This project demonstrates:
- ✅ An MCP-compliant database server
- ✅ SQLite-backed CRUD operations exposed as MCP tools
- ✅ HTTP (streamable) MCP transport
- ✅ A Python client using LangChain + Ollama
- ✅ Real LLM-driven decision-making over MCP tools
Install dependencies:

```bash
pip install -r requirements.txt
```

Start the server:

```bash
python main_app.py
```

The server exposes:

- MCP Endpoint: http://127.0.0.1:8000/mcp
- Health Check: http://127.0.0.1:8000/health
- FastAPI Docs: http://127.0.0.1:8000/docs
- ReDoc: http://127.0.0.1:8000/redoc
⚠️ MCP must be mounted at the root path using `mcp.streamable_http_app()`.
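A minimal sketch of how `main_app.py` might wire this up, assuming the official MCP Python SDK; the server name, health route, and lifespan wiring are illustrative, not necessarily the exact code in this repo:

```python
# Sketch of main_app.py (names and wiring are assumptions).
import uvicorn
from fastapi import FastAPI
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sqlite-db")  # hypothetical server name

# streamable_http_app() returns an ASGI app that serves the MCP
# endpoint under /mcp; it must be mounted at "/", not a sub-prefix.
mcp_app = mcp.streamable_http_app()

# Reuse the MCP app's lifespan so its session manager actually starts
# when it is mounted inside FastAPI.
app = FastAPI(lifespan=mcp_app.router.lifespan_context)

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

app.mount("/", mcp_app)

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8000)
```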
- 🔌 Model Context Protocol (MCP) compliant
- ⚡ FastMCP with streamable HTTP transport
- 💾 SQLite local database
- 🛠️ 8 structured database tools
- 📊 JSON-formatted responses
- 🚀 Simple local deployment
- 🌐 Streamable HTTP MCP client
- 🔍 Automatic tool discovery
- 🔗 LangChain `StructuredTool` integration
- 🤖 Ollama-powered local LLM
- 🧠 Multi-step tool execution and reasoning loop
Configure the server through environment variables:

```
DB_FILE=data.db
HOST=127.0.0.1
PORT=8000
```

On Windows (PowerShell):

```powershell
$env:DB_FILE="C:\path\to\data.db"
$env:PORT="8000"
python main_app.py
```
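How the server consumes these variables is a detail of `main_app.py`; a plausible sketch (the defaults shown are assumptions):

```python
# Sketch: reading the configuration above. Variable names come from
# this README; the fallback defaults are assumptions.
import os

DB_FILE = os.getenv("DB_FILE", "data.db")
HOST = os.getenv("HOST", "127.0.0.1")
PORT = int(os.getenv("PORT", "8000"))
```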
The server auto-creates the following tables in `data.db`:

```sql
CREATE TABLE users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    email TEXT NOT NULL UNIQUE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE products (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    price REAL NOT NULL,
    stock INTEGER DEFAULT 0,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```

| Tool Name | Description |
|---|---|
| `execute_query` | Execute SQL SELECT queries |
| `insert_user` | Insert a new user |
| `insert_product` | Insert a new product |
| `update_user` | Update user name or email |
| `delete_user` | Delete a user by ID |
| `get_all_users` | Retrieve all users |
| `get_all_products` | Retrieve all products |
| `get_database_info` | View database schema information |
All tools return formatted JSON strings.
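As an illustration, a tool like `insert_user` could be defined along these lines; this is a sketch, not necessarily the exact code in `database_tools.py`:

```python
# Sketch of one FastMCP tool definition; the real database_tools.py
# may differ in structure and error handling.
import json
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sqlite-db")

@mcp.tool()
def insert_user(name: str, email: str) -> str:
    """Insert a new user and return the created row as a JSON string."""
    with sqlite3.connect("data.db") as conn:
        cur = conn.execute(
            "INSERT INTO users (name, email) VALUES (?, ?)",
            (name, email),
        )
        return json.dumps({"id": cur.lastrowid, "name": name, "email": email})
```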
The client (`client.py`) connects to the MCP server and allows an LLM to:
- 🔍 Discover available MCP tools
- 💬 Analyze natural-language questions
- 🎯 Decide which tools to call
- ⚙️ Execute tools automatically
- ✅ Produce a final response
Install Ollama:
- Download from ollama.ai
- Pull a model:
```bash
ollama pull llama3.2:1b
# or for better tool calling:
ollama pull llama3.2:3b
```

Install Python dependencies:

```bash
pip install langchain-ollama langchain-core mcp
```

Start the MCP server first:

```bash
python main_app.py
```

Then run the client:

```bash
python client.py
```

The LLM dynamically selects and executes the correct MCP tools:
- ✅ "Show me all users in the database"
- ✅ "Add a new user named Charlie Brown with email charlie@peanuts.com"
- ✅ "What's the structure of the database?"
- ✅ "List all products available"
- ✅ "Insert a product called Laptop with price 999.99"
---
```
.
├── main_app.py          # MCP server (FastMCP)
├── database_tools.py    # Database CRUD operations
├── client.py            # LLM-powered MCP client
├── requirements.txt     # Python dependencies
├── data.db              # SQLite database (auto-created)
└── README.md            # This file
```
Test the MCP server with:
- Claude Desktop (MCP integration)
- Custom Python client (`client.py`)
- Any MCP-compatible client

```bash
python test_fastmcp.py
```

⚠️ The server currently disables DNS rebinding protection via `TransportSecuritySettings(enable_dns_rebinding_protection=False)`. Enable this protection for production deployments.
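For a hardened deployment, the settings object could be constructed with protection enabled; a sketch (how it is passed to the server depends on the MCP SDK version in use, and the host/origin values below are assumptions):

```python
# Sketch: production-leaning transport security settings.
from mcp.server.transport_security import TransportSecuritySettings

security = TransportSecuritySettings(
    enable_dns_rebinding_protection=True,
    allowed_hosts=["127.0.0.1:8000"],            # assumption: adjust to your host
    allowed_origins=["http://127.0.0.1:8000"],   # assumption: adjust to your origin
)
```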
`requirements.txt`:

```
fastapi
uvicorn[standard]
mcp
langchain-ollama
langchain-core
```

Request/response flow:

```
User Query → MCP Client → HTTP Request → FastMCP Server → Database Tools → SQLite
                                                                              ↓
User Answer ← LLM Processing ← Tool Results ← JSON Response ← Database Query
```
1. Connect to MCP server via streamable HTTP
2. Discover available tools
3. Convert MCP tools to LangChain tools
4. Bind tools to LLM (Ollama)
5. User asks question in natural language
6. LLM analyzes question and decides which tools to call
7. Execute tools via MCP
8. LLM formulates final answer
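Steps 1, 2, and 7 in code form: a minimal sketch, assuming the server from this README is running on 127.0.0.1:8000 (error handling omitted; the LLM loop in `client.py` builds on top of this):

```python
# Sketch: connect, discover tools, and call one directly (steps 1, 2, 7).
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main():
    async with streamablehttp_client("http://127.0.0.1:8000/mcp") as (
        read, write, _get_session_id,
    ):
        async with ClientSession(read, write) as session:
            await session.initialize()              # step 1: connect
            tools = await session.list_tools()      # step 2: discover
            print("tools:", [t.name for t in tools.tools])
            result = await session.call_tool("get_all_users", {})  # step 7
            print(result.content)

asyncio.run(main())
```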
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
If you encounter issues:
- Check that Ollama is running: `ollama list`
- Verify the server is running: `curl http://127.0.0.1:8000/health`
- Check server logs for errors
- Try a larger model: `ollama pull llama3.2:3b`