Query and manage databases through the Model Context Protocol.
The Database MCP Server provides AI-accessible database operations for PostgreSQL and MongoDB. It enables:
- PostgreSQL queries and data management
- MongoDB document operations
- Automatic schema discovery
- Network-wide database access through MCP Discovery Hub
- Zero-configuration deployment with automatic broadcasting
Perfect for building AI applications that need to interact with databases safely and efficiently.
- Get server version and database info
- List tables in any schema
- Query data with configurable limits
- Insert new records
- SQL validation and safety checks
- List collections
- Find documents with filters
- Insert documents
- ObjectId handling and JSON serialization
- Automatic multicast broadcasting for discovery
- Multi-transport support (HTTP and streamable-http)
- Compatible with MCP Discovery Hub
- Zero-configuration networking
- Python 3.10+
- PostgreSQL server (or MongoDB, or both)
- `uv` package manager (or `pip`)
# Clone or navigate to project
cd database-mcp-server
# Install dependencies
uv sync
# Or with pip:
pip install -r requirements.txt
# Transport mode
MCP_TRANSPORT=http # http, streamable-http, or stdio (default)
# Server settings
MCP_HOST=0.0.0.0 # Binding host
MCP_PORT=3002 # Server port
MCP_SERVER_NAME=Database MCP Server # Display name
# PostgreSQL
DATABASE_URL=postgresql://user:pass@localhost:5432/dbname
# Or individual settings:
POSTGRES_USER=postgres
POSTGRES_PASSWORD=
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_DB=postgres
# MongoDB
MONGODB_URL=mongodb://localhost:27017
MONGODB_DB=test
# Broadcasting (for MCP Discovery Hub)
MCP_ENABLE_BROADCAST=true # Enable/disable broadcasting
MCP_BROADCAST_INTERVAL=30 # Seconds between announcements
Create a `.env` file in the project root:
# Database Connections
DATABASE_URL=postgresql://postgres:password@localhost:5432/mydb
MONGODB_URL=mongodb://localhost:27017
MONGODB_DB=mydb
# MCP Server
MCP_TRANSPORT=http
MCP_PORT=3002
MCP_SERVER_NAME=Database MCP Server
MCP_ENABLE_BROADCAST=true
MCP_BROADCAST_INTERVAL=30
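How `main.py` actually consumes these variables isn't shown here; as a sketch, the settings above can be read with plain `os.environ` lookups (the project lists `python-dotenv` in its stack for loading the `.env` file). The `Settings` dataclass and its defaults below are illustrative, not the server's real code:

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    transport: str
    host: str
    port: int
    database_url: str
    mongodb_url: str
    broadcast: bool
    broadcast_interval: int


def load_settings(env=os.environ) -> Settings:
    """Read configuration from environment variables, falling back to
    the defaults documented in this README."""
    return Settings(
        transport=env.get("MCP_TRANSPORT", "stdio"),
        host=env.get("MCP_HOST", "0.0.0.0"),
        port=int(env.get("MCP_PORT", "3002")),
        database_url=env.get(
            "DATABASE_URL", "postgresql://postgres@localhost:5432/postgres"
        ),
        mongodb_url=env.get("MONGODB_URL", "mongodb://localhost:27017"),
        broadcast=env.get("MCP_ENABLE_BROADCAST", "true").lower() == "true",
        broadcast_interval=int(env.get("MCP_BROADCAST_INTERVAL", "30")),
    )
```

Passing a plain dict makes the loader easy to unit-test without touching the real environment.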
# With PostgreSQL in Docker
docker run -d \
-e POSTGRES_PASSWORD=mypassword \
-p 5432:5432 \
postgres:15
# With MongoDB in Docker
docker run -d \
-p 27017:27017 \
mongo:latest
# Start MCP server
MCP_TRANSPORT=http MCP_PORT=3002 uv run main.py
# Using environment variables
MCP_TRANSPORT=http MCP_PORT=3002 uv run main.py
# Or with .env file
uv run main.py
MCP_TRANSPORT=streamable-http MCP_PORT=3002 uv run main.py
# Default mode, works with Claude Desktop
uv run main.py
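For stdio mode, Claude Desktop typically launches the server itself via an entry in `claude_desktop_config.json`. The snippet below is a hedged example (the directory path is a placeholder; adjust the command if you installed with `pip` instead of `uv`):

```json
{
  "mcpServers": {
    "database": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/database-mcp-server", "main.py"]
    }
  }
}
```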
pg_version()
Retrieve PostgreSQL server version information
pg_list_tables(schema: str = "public")
List all tables in a schema
Example:
{
"method": "tools/call",
"params": {
"name": "pg_list_tables",
"arguments": { "schema": "public" }
}
}
pg_list_rows(table: str, limit: int = 100)
Query data from a table with limit
Example:
{
"method": "tools/call",
"params": {
"name": "pg_list_rows",
"arguments": { "table": "users", "limit": 50 }
}
}
pg_insert_row(table: str, data: dict)
Insert a new record and return the inserted ID
Example:
{
"method": "tools/call",
"params": {
"name": "pg_insert_row",
"arguments": {
"table": "users",
"data": { "name": "John", "email": "john@example.com" }
}
}
}
mongo_list_collections()
Get all collection names in the database
mongo_find(collection: str, query: dict = {}, limit: int = 10)
Query documents from a collection
Example:
{
"method": "tools/call",
"params": {
"name": "mongo_find",
"arguments": {
"collection": "users",
"query": { "status": "active" },
"limit": 20
}
}
}
mongo_insert(collection: str, doc: dict)
Insert a document into a collection
Example:
{
"method": "tools/call",
"params": {
"name": "mongo_insert",
"arguments": {
"collection": "logs",
"doc": {
"timestamp": "2024-10-17T10:00:00Z",
"level": "info",
"message": "Server started"
}
}
}
}
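The feature list above mentions ObjectId handling and JSON serialization. The server's exact approach isn't shown in this README; one common pattern is to recursively stringify anything the `json` module can't encode natively (ObjectId, datetime, and so on), as in this illustrative helper:

```python
import json
from datetime import datetime


def to_jsonable(value):
    """Recursively convert a Mongo document into JSON-safe values.
    Containers are walked; JSON-native scalars pass through; anything
    else (e.g. an ObjectId in the '_id' field) is stringified."""
    if isinstance(value, dict):
        return {k: to_jsonable(v) for k, v in value.items()}
    if isinstance(value, list):
        return [to_jsonable(v) for v in value]
    if isinstance(value, (str, int, float, bool)) or value is None:
        return value
    return str(value)  # ObjectId("65a...") -> "65a...", datetime -> ISO-ish text
```

After this pass, `json.dumps(to_jsonable(doc))` succeeds even for documents containing driver-specific types.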
When broadcasting is enabled, the database server registers itself with the discovery hub automatically:
- Server broadcasts: Every 30 seconds on `239.255.255.250:5353`
- Hub discovers: Discovery hub receives and probes the server
- Tools registered: All 7 database tools become available network-wide
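The wire format the hub expects isn't documented in this README, so the broadcast step can only be sketched: a small JSON payload sent as a UDP multicast datagram. The multicast group and port come from the docs above; the payload fields are assumptions.

```python
import json
import socket

MCAST_GROUP = "239.255.255.250"  # multicast group from this README
MCAST_PORT = 5353


def build_announcement(name: str, port: int) -> bytes:
    """Encode an announcement payload. The fields here are illustrative;
    the hub's real schema may differ."""
    return json.dumps({"name": name, "port": port, "protocol": "mcp"}).encode()


def announce_once(payload: bytes) -> None:
    """Send one UDP datagram to the multicast group."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
        sock.sendto(payload, (MCAST_GROUP, MCAST_PORT))


if __name__ == "__main__":
    try:
        announce_once(build_announcement("Database MCP Server", 3002))
    except OSError as exc:  # e.g. no multicast route on this host
        print(f"broadcast skipped: {exc}")
```

The real server repeats this on a timer (every `MCP_BROADCAST_INTERVAL` seconds).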
Deploy multiple database servers for different purposes:
Database Server 1 (PostgreSQL, port 3002) ──┐
Database Server 2 (MongoDB, port 3003) ─────┤
Database Server 3 (Mixed, port 3004) ───────┘
                    ↓
      MCP Discovery Hub (port 8000)
                    ↓
         AI Tool (Claude, etc.)
All servers discovered and available to AI automatically.
Server information
curl http://localhost:3002/
MCP protocol endpoint: handles all MCP communication (initialize, tools/list, tools/call)
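As a sketch, a tools/call request can be issued with nothing but the Python standard library. The `/mcp` path below is an assumption, not confirmed by this README; check the server's startup logs for the actual endpoint.

```python
import json
import urllib.request


def jsonrpc_request(method: str, params: dict, req_id: int = 1) -> bytes:
    """Build a JSON-RPC 2.0 request body, the framing MCP uses over HTTP."""
    return json.dumps(
        {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    ).encode()


def call_tool(url: str, name: str, arguments: dict) -> dict:
    """POST a tools/call request to a running server and decode the reply."""
    body = jsonrpc_request("tools/call", {"name": name, "arguments": arguments})
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    try:
        # Assumes the server from this README is running locally.
        print(call_tool("http://localhost:3002/mcp", "pg_list_tables",
                        {"schema": "public"}))
    except OSError as exc:  # server not running / endpoint path differs
        print(f"server unreachable: {exc}")
```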
AI-powered analysis of your database:
"User: Summarize user activity from the last month"
AI: I'll query the activity logs for you...
→ calls pg_list_rows(table="activity_logs", limit=1000)
→ analyzes and summarizes results
Generate reports from database data:
"User: Create a report of orders by region"
AI: Let me fetch the order data...
→ calls pg_list_rows(table="orders", limit=10000)
→ groups and aggregates by region
→ generates report
AI-assisted data entry:
"User: Add a new customer with this information"
AI: I'll add them to the database...
→ calls pg_insert_row(table="customers", data={...})
MongoDB document management:
"User: Find all documents with status pending"
AI: Searching for pending documents...
→ calls mongo_find(collection="tasks", query={"status": "pending"})
Database health and activity monitoring:
"User: Check if there are any slow queries"
AI: Let me check the query logs...
→ calls pg_list_rows(table="query_logs")
→ identifies slow queries
- Table and column names validated against regex
- SQL injection prevention through parameterized queries
- Data type validation for inserts
- Database connection errors caught and reported
- Timeout protection (30 seconds default)
- Clear error messages for debugging
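To illustrate the first two bullets (identifier whitelisting plus parameterized values), here is a minimal sketch; the server's actual regex and SQL driver are not shown in this README. The key point is that identifiers cannot be bound as query parameters, so they must be validated, while values are always bound:

```python
import re

# Conservative whitelist: letters, digits, underscores, not starting with
# a digit. The server's real pattern may differ.
IDENTIFIER_RE = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")


def safe_identifier(name: str) -> str:
    """Validate a table/column name before interpolating it into SQL."""
    if not IDENTIFIER_RE.fullmatch(name):
        raise ValueError(f"invalid identifier: {name!r}")
    return name


def build_insert(table: str, data: dict) -> tuple[str, list]:
    """Build a parameterized INSERT: identifiers are whitelisted,
    values are left to the driver as bound parameters."""
    cols = [safe_identifier(c) for c in data]
    placeholders = ", ".join(["%s"] * len(cols))
    sql = (
        f"INSERT INTO {safe_identifier(table)} "
        f"({', '.join(cols)}) VALUES ({placeholders})"
    )
    return sql, list(data.values())
```

A malicious value like `"x'); DROP TABLE users; --"` is harmless here because it only ever travels as a bound parameter, never as SQL text.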
- Read-only operations first: Start with queries before modifying data
- Use limits: Always set reasonable limits on queries
- Monitor logs: Check `database_mcp.log` for issues
- Backup data: Ensure backups before AI access to production
- Audit trail: Log all database modifications from MCP
- Query performance: Depends on query complexity and data size
- Connection pooling: PostgreSQL pool_size=5 for concurrency
- Broadcasting overhead: Minimal (30-byte UDP packets)
- Timeout protection: 30-second limit on operations
- Use the `limit` parameter to reduce data transfer
- Filter documents with the `query` parameter in MongoDB
- Create appropriate database indexes for common queries
- Use the `schema` parameter to narrow PostgreSQL searches
Server logs are written to `database_mcp.log`:
# View logs
tail -f database_mcp.log
# Check for errors
grep ERROR database_mcp.log
# Monitor database operations
grep "Listing tables\|Inserting\|Finding" database_mcp.log
# Check PostgreSQL is running
psql postgresql://user:pass@localhost:5432/db
# Verify credentials in .env
echo $DATABASE_URL
# Check MongoDB is running
mongosh --eval "db.version()"
# Verify connection string
echo $MONGODB_URL
# Verify multicast is enabled
ip route show | grep 239.255.255.250
# Check firewall settings
sudo firewall-cmd --list-all
# Use different port
MCP_PORT=3003 uv run main.py
Typical response times:
- Simple SELECT: 10-50ms
- Database info queries: 5-20ms
- MongoDB find operations: 20-100ms
- Insert operations: 30-200ms (depending on triggers)
Network overhead (with broadcasting):
- Broadcasting: 0.01% overhead
- Discovery: One-time cost per server
- Python 3.10+
- FastAPI
- SQLAlchemy
- PyMongo
- FastMCP
- python-dotenv
Improvements welcome! Potential enhancements:
- Additional database support (MySQL, SQLite)
- Stored procedure execution
- Transaction support
- Advanced query builder
- Connection pooling configuration
- Database replication support
MIT License - See LICENSE file for details
- Issues: Report on GitHub
- Documentation: See MCP Discovery Hub wiki
- Examples: Check examples/ directory
- Database docs: PostgreSQL and MongoDB official documentation