Production-ready PostgreSQL MCP server with global connection pooling, PgBouncer integration, and automatic password rotation for AWS RDS.
```bash
# 1. Set AWS credentials
export AWS_ACCESS_KEY_ID=your_key
export AWS_SECRET_ACCESS_KEY=your_secret
export AWS_REGION=us-west-2

# 2. Deploy (fetches DB credentials from AWS)
python3 deploy_with_aws_secrets.py

# 3. Restart Claude Code to connect
# The deploy script prints the URL: http://localhost:3000/sse
```
That's it! The MCP server is now running and Claude Code will auto-detect it.
Optional: Set custom AWS secret names:
```bash
export AWS_SECRET_NAME=your/secret/name
export AWS_PASSWORD_SECRET_NAME=rds!your-password-id
python3 deploy_with_aws_secrets.py
```
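Under the hood, the deploy script pulls these credentials from AWS Secrets Manager before starting the container. A minimal sketch of that fetch using `boto3`; the secret layout is an assumption (RDS-managed secrets store a JSON document with `username` and `password` keys):

```python
# Sketch of the credential fetch the deploy script performs; the exact
# secret layout is an assumption (RDS-managed secrets store JSON with
# "username" and "password" keys).
import json
import os

import boto3

sm = boto3.client("secretsmanager", region_name=os.environ["AWS_REGION"])
resp = sm.get_secret_value(
    SecretId=os.environ.get("AWS_PASSWORD_SECRET_NAME", "rds!your-password-id")
)
creds = json.loads(resp["SecretString"])
print(f"Fetched credentials for {creds['username']}")
```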
```bash
# 1. Copy .env.example and fill in your database credentials
cd /path/to/postgres_mcp_allaccess
cp .env.example .env
# Edit .env and fill in the REQUIRED fields (first 5 lines)

# 2. Deploy (docker-compose reads .env automatically)
docker-compose up -d

# 3. Restart Claude Code to connect
# MCP server is at http://localhost:3000/sse
```
✅ **Global connection pooling** - 15 max connections to PostgreSQL (configurable)
✅ **Automatic password rotation** - Zero downtime when AWS rotates passwords
✅ **Transaction-level pooling** - Efficient connection reuse across all queries
✅ **Production ready** - Health checks, logging, monitoring built-in
```
Multiple Claude Sessions  →  MCP Server  →  PgBouncer  →     PostgreSQL
      (90+ clients)          (max 15)      (max 15)    (sees ≤15 connections)
```
- PgBouncer enforces max 15 connections to your PostgreSQL database
- Automatic queueing - Extra requests wait, then process when connections free up
- Zero config needed - Default settings work for most use cases
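For reference, the pooling layer behaves like a standard client-side pool pointed at PgBouncer. A minimal sketch, assuming `psycopg_pool`; the actual implementation lives in `src/postgres_mcp_allaccess/database.py` and may differ:

```python
# Sketch only: a global pool in front of PgBouncer, sized from the
# documented environment variables. Not the repo's actual code.
import os

from psycopg_pool import ConnectionPool

pool = ConnectionPool(
    "host=localhost port=6432 dbname=your_db user=your_user",  # PgBouncer, not PostgreSQL
    min_size=int(os.environ.get("POSTGRES_MIN_CONNECTIONS", "2")),
    max_size=int(os.environ.get("POSTGRES_MAX_CONNECTIONS", "15")),
    timeout=float(os.environ.get("POSTGRES_POOL_TIMEOUT", "30")),  # queue wait
)

def run(sql: str):
    # Callers beyond max_size queue here, and raise PoolTimeout after
    # `timeout` seconds if no connection frees up.
    with pool.connection() as conn:
        return conn.execute(sql).fetchall()
```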
```bash
curl http://localhost:3000/health
# Should show: "OK - Pool: 2/15 connections"
```
```bash
docker-compose logs -f   # View logs
docker-compose down      # Stop the server
```
Edit `pgbouncer/pgbouncer.ini`:

```ini
default_pool_size = 10   # Max connections to PostgreSQL
```

Then redeploy: `docker-compose up -d --build`
**Query Execution:**
- `list_tables` - List tables in a schema
- `execute_query` - Run any SQL query
- `describe_table` - See table structure
- `execute_file` - Run SQL from a file

**Schema Operations:**
- `list_schemas` - List all schemas
- `search_tables` - Find tables by name pattern
- `get_database_context` - Overview of database structure

**Analysis:**
- `explain_query` - Get query execution plan
- `analyze_query` - Performance analysis
- `suggest_indexes` - Index recommendations

**Session:**
- `get_session_info` - Current session status
- `get_query_history` - View query history
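Claude Code invokes these tools automatically, but you can exercise them yourself over the SSE endpoint with the MCP Python SDK. A minimal sketch; the tool argument names (e.g. `"schema"`) are assumptions about this server's interface:

```python
# Minimal MCP client sketch; tool argument names are assumptions.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    async with sse_client("http://localhost:3000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("list_tables", {"schema": "public"})
            print(result.content)

asyncio.run(main())
```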
**Claude Code doesn't detect the server:** Restart Claude Code after deploying the MCP server.
**Pool not initialized:** Run any query first - the pool initializes on first use.
```bash
# Check what went wrong
docker-compose logs

# Common fixes:
# 1. Port 3000 in use: change MCP_SSE_PORT in .env
# 2. Bad credentials: check .env file or AWS secrets
# 3. Build cache issue: docker-compose up -d --build --force-recreate
```
Increase pool timeout (default 30 seconds):
```bash
export POSTGRES_POOL_TIMEOUT=60
python3 deploy_with_aws_secrets.py  # or docker-compose up -d
```
Check your limit:
```bash
docker exec -e PGPASSWORD='your_pass' postgres-mcp-allaccess \
  psql -h localhost -p 6432 -U your_user -d your_db \
  -c "SELECT count(*) FROM pg_stat_activity WHERE usename='your_user'"
```

This should show 2-15 connections (never more than `default_pool_size`).
Environment variables (set before deploying):

```bash
POSTGRES_MAX_CONNECTIONS=15   # MCP pool size
POSTGRES_MIN_CONNECTIONS=2    # Warm connections
POSTGRES_POOL_TIMEOUT=30      # Queue wait time (seconds)
```

PgBouncer limits (edit `pgbouncer/pgbouncer.ini`):

```ini
default_pool_size = 15        # MAX connections to PostgreSQL
min_pool_size = 2             # MIN kept alive
pool_mode = transaction       # Release after each transaction
```
Question: If 90 Claude sessions query at once, does PostgreSQL see 90 connections?
Answer: No! PostgreSQL sees at most 15 connections (or whatever you set in `default_pool_size`).
- Connections auto-release after each query
- Extra requests queue and process when connections free up
- No session management needed - everything is automatic
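You can verify the cap yourself. A throwaway sketch, assuming `psycopg` is installed and using the placeholder credentials from elsewhere in this README: open far more client connections than the pool allows, then count what PostgreSQL actually sees.

```python
# Demo sketch (not part of the repo): 30 clients through PgBouncer,
# yet pg_stat_activity never shows more than default_pool_size (15),
# because pool_mode=transaction shares server connections.
import psycopg

CONNINFO = "host=localhost port=6432 dbname=your_db user=your_user password=your_pass"

clients = [psycopg.connect(CONNINFO) for _ in range(30)]  # 30 client connections

with psycopg.connect(CONNINFO) as conn:
    row = conn.execute(
        "SELECT count(*) FROM pg_stat_activity WHERE usename = %s",
        ("your_user",),
    ).fetchone()
    print(f"PostgreSQL-side connections: {row[0]}")  # expected: <= 15

for c in clients:
    c.close()
```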
When AWS rotates your password:
- Next query fails with auth error
- MCP fetches new password from AWS Secrets Manager
- Updates both MCP pool and PgBouncer
- Retries query - succeeds!
Zero downtime. No restart needed.
Requirements:
- Deploy with `deploy_with_aws_secrets.py` (AWS credentials available)
- AWS secret names configured (done automatically)
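The recovery path is a fetch-and-retry around the failing query. A minimal sketch, assuming `psycopg` and `boto3`; the real logic lives in `src/postgres_mcp_allaccess/database.py` and also pushes the new password into PgBouncer:

```python
# Sketch of the rotation-recovery path; the repo's version also
# refreshes PgBouncer's credentials.
import json
import os

import boto3
import psycopg

def fetch_rotated_password() -> str:
    sm = boto3.client("secretsmanager")
    resp = sm.get_secret_value(SecretId=os.environ["AWS_PASSWORD_SECRET_NAME"])
    return json.loads(resp["SecretString"])["password"]

def execute_with_rotation_retry(conninfo: str, password: str, sql: str):
    try:
        with psycopg.connect(conninfo, password=password) as conn:
            return conn.execute(sql).fetchall()
    except psycopg.OperationalError:
        # Likely an auth failure after rotation: refresh and retry once.
        with psycopg.connect(conninfo, password=fetch_rotated_password()) as conn:
            return conn.execute(sql).fetchall()
```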
✅ Never commit:
- `.env` files
- `*.log` files
- `config/postgres_config.ini`
✅ Safe to commit:
- AWS secret names (e.g., "postgres/dev/db")
- Database hostnames (e.g., "mydb.rds.amazonaws.com")
All sensitive data is gitignored automatically.
```
postgres_mcp_allaccess/
├── deploy_with_aws_secrets.py       # AWS deployment script
├── docker-compose.yml               # Container orchestration
├── Dockerfile                       # Single container (MCP + PgBouncer)
├── docker-entrypoint.sh             # Startup script
├── pgbouncer/pgbouncer.ini          # Connection pool config
└── src/postgres_mcp_allaccess/
    ├── server.py                    # MCP server
    ├── database.py                  # Global pool + auto rotation
    ├── pgbouncer_manager.py         # PgBouncer control
    └── transports/sse_transport.py  # HTTP/SSE transport
```
```bash
# Health check endpoint
curl http://localhost:3000/health

# Check PgBouncer process
docker exec postgres-mcp-allaccess ps aux | grep pgbouncer

# View all environment variables
docker exec postgres-mcp-allaccess env | grep -E "POSTGRES|AWS|PGBOUNCER"

# Force refresh from AWS (if auto-rotation fails)
docker-compose restart
```
Apache License 2.0 - See LICENSE file for details.
- Issues: https://github.com/yty-build/postgres_mcp_allaccess/issues
- Discussions: https://github.com/yty-build/postgres_mcp_allaccess/discussions
- Security: Report privately via GitHub Security Advisories