A fast and efficient Model Context Protocol (MCP) proxy server written in Rust. This proxy aggregates multiple MCP servers and provides a unified interface, with built-in monitoring, health checks, and a web UI for management.
New: The project now includes a modern Yew-based web UI alongside the original HTML/JS UI. See Yew UI Integration for details.
- Multi-Server Proxy: Aggregate multiple MCP servers into a single endpoint
- Multiple Transports: Support for stdio, HTTP/SSE, and WebSocket transports
- File-based Logging: All server output captured to rotating log files with real-time streaming
- Configuration Management: YAML/JSON configuration with environment variable substitution
- Server Lifecycle Management: Start, stop, and restart individual servers
- Health Monitoring: Automatic health checks with configurable intervals
- Web UI Dashboard: Real-time server status monitoring and control
- Metrics Collection: Prometheus-compatible metrics for monitoring
- Connection Pooling: Efficient connection management with automatic reconnection
- Graceful Shutdown: Clean shutdown of all servers and connections
- Create a configuration file `mcp-proxy.yaml`:
  ```yaml
  servers:
    example-server:
      command: "mcp-server-example"
      args: ["--port", "8080"]
      transport:
        type: stdio
      restartOnFailure: true

  proxy:
    port: 3000
    host: "0.0.0.0"

  webUI:
    enabled: true
    port: 3001
  ```
- Run the proxy server:

  ```bash
  cargo run -- --config mcp-proxy.yaml
  ```

- Access the web UI at `http://localhost:3001`
MCP Rust Proxy works seamlessly with Claude Code to manage multiple MCP servers. Here are some example configurations:
```yaml
servers:
  filesystem-server:
    command: "npx"
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/Users/username/projects"]
    transport:
      type: stdio
    env:
      NODE_OPTIONS: "--max-old-space-size=4096"

  github-server:
    command: "npx"
    args: ["-y", "@modelcontextprotocol/server-github"]
    transport:
      type: stdio
    env:
      GITHUB_TOKEN: "${GITHUB_TOKEN}"

  postgres-server:
    command: "npx"
    args: ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]
    transport:
      type: stdio

proxy:
  port: 3000
  host: "127.0.0.1"

webUI:
  enabled: true
  port: 3001
```
Then configure Claude Code to use the proxy via MCP remote server:
```json
{
  "mcpServers": {
    "rust-proxy": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-remote", "http://localhost:3000"]
    }
  }
}
```
A development workflow might combine code intelligence, database tools, and a project-specific server:

```yaml
servers:
  # Code intelligence server
  code-intel:
    command: "rust-analyzer"
    args: ["--stdio"]
    transport:
      type: stdio

  # Database tools
  db-tools:
    command: "npx"
    args: ["-y", "@modelcontextprotocol/server-sqlite", "./dev.db"]
    transport:
      type: stdio

  # Custom project server
  project-server:
    command: "python"
    args: ["./scripts/mcp_server.py"]
    transport:
      type: stdio
    env:
      PROJECT_ROOT: "${PWD}"
      DEBUG: "true"

# Health checks for critical servers
healthChecks:
  code-intel:
    enabled: true
    intervalSeconds: 30
    timeoutMs: 5000
    threshold: 3

proxy:
  port: 3000
  connectionPoolSize: 10
  maxConnectionsPerServer: 5

webUI:
  enabled: true
  port: 3001
  apiKey: "${WEB_UI_API_KEY}"
```
A production setup can mix transports, tighten health checks, and enable authentication:

```yaml
servers:
  api-gateway:
    command: "mcp-api-gateway"
    transport:
      type: webSocket
      url: "ws://api-gateway:8080/mcp"
    restartOnFailure: true
    maxRestarts: 5
    restartDelayMs: 10000

  ml-models:
    command: "mcp-ml-server"
    transport:
      type: httpSse
      url: "http://ml-server:9000/sse"
      headers:
        Authorization: "Bearer ${ML_API_KEY}"

  vector-db:
    command: "mcp-vector-server"
    args: ["--collection", "production"]
    transport:
      type: stdio
    env:
      PINECONE_API_KEY: "${PINECONE_API_KEY}"
      PINECONE_ENV: "production"

healthChecks:
  api-gateway:
    enabled: true
    intervalSeconds: 10
    timeoutMs: 3000
    threshold: 2
  ml-models:
    enabled: true
    intervalSeconds: 30
    timeoutMs: 10000

proxy:
  port: 3000
  host: "0.0.0.0"
  connectionPoolSize: 50
  requestTimeoutMs: 30000

webUI:
  enabled: true
  port: 3001
  host: "0.0.0.0"
  apiKey: "${ADMIN_API_KEY}"

logging:
  level: "info"
  format: "json"
```
The proxy server can be configured using YAML or JSON files. Configuration files are searched in the following order:

1. `mcp-proxy.toml`
2. `mcp-proxy.json`
3. `mcp-proxy.yaml`
4. `mcp-proxy.yml`
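That lookup amounts to a first-match search over the list above. The sketch below illustrates the semantics (the function and constant names are illustrative, not the project's actual loader code):

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Candidate file names, highest precedence first.
const SEARCH_ORDER: [&str; 4] = [
    "mcp-proxy.toml",
    "mcp-proxy.json",
    "mcp-proxy.yaml",
    "mcp-proxy.yml",
];

/// Return the first configuration file from SEARCH_ORDER that exists in `dir`.
fn find_config(dir: &Path) -> Option<PathBuf> {
    SEARCH_ORDER
        .iter()
        .map(|name| dir.join(name))
        .find(|p| p.is_file())
}

fn main() {
    let dir = std::env::temp_dir().join("mcp-proxy-find-config-demo");
    fs::create_dir_all(&dir).unwrap();
    fs::write(dir.join("mcp-proxy.yml"), "servers: {}\n").unwrap();
    fs::write(dir.join("mcp-proxy.json"), "{}\n").unwrap();
    // mcp-proxy.json wins because it precedes mcp-proxy.yml in the order.
    let found = find_config(&dir).unwrap();
    assert_eq!(found.file_name().unwrap().to_str(), Some("mcp-proxy.json"));
    fs::remove_dir_all(&dir).ok();
}
```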
All configuration values support environment variable substitution using the `${VAR}` syntax:
```yaml
servers:
  api-server:
    command: "api-server"
    env:
      API_KEY: "${API_KEY}"
    transport:
      type: httpSse
      url: "${API_URL:-http://localhost:8080}/sse"
```
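The substitution semantics, including the `${VAR:-default}` fallback form used above, can be sketched in plain Rust. This is an illustrative stand-in for how such expansion typically works, not the proxy's actual implementation:

```rust
use std::env;

/// Expand `${VAR}` and `${VAR:-default}` references in a string.
/// Unset variables without a default expand to the empty string;
/// nested or unterminated references are not handled in this sketch.
fn substitute_env(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    let mut rest = input;
    while let Some(start) = rest.find("${") {
        out.push_str(&rest[..start]);
        let after = &rest[start + 2..];
        match after.find('}') {
            Some(end) => {
                let expr = &after[..end];
                // Split "NAME:-default" into the name and optional default.
                let (name, default) = match expr.split_once(":-") {
                    Some((n, d)) => (n, Some(d)),
                    None => (expr, None),
                };
                match env::var(name) {
                    Ok(val) => out.push_str(&val),
                    Err(_) => out.push_str(default.unwrap_or("")),
                }
                rest = &after[end + 1..];
            }
            None => {
                // No closing brace: emit the remainder literally.
                out.push_str(&rest[start..]);
                rest = "";
            }
        }
    }
    out.push_str(rest);
    out
}

fn main() {
    // An unset variable falls back to its default; a set one is used as-is.
    assert_eq!(
        substitute_env("${DEMO_UNSET_VAR:-http://localhost:8080}/sse"),
        "http://localhost:8080/sse"
    );
}
```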
Each server configuration supports:

- `command`: The executable to run
- `args`: Command-line arguments
- `env`: Environment variables for the process
- `transport`: Transport configuration (`stdio`, `httpSse`, or `webSocket`)
- `restartOnFailure`: Whether to restart on failure (default: `true`)
- `maxRestarts`: Maximum number of restart attempts (default: `3`)
- `restartDelayMs`: Delay between restarts in milliseconds (default: `5000`)
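The documented defaults can be modeled as a plain struct. The type and constructor below are a sketch whose field names mirror the YAML keys; the project's real configuration types may differ:

```rust
/// Per-server options with their documented defaults (sketch only).
#[derive(Debug, Clone, PartialEq)]
struct ServerConfig {
    command: String,
    args: Vec<String>,
    env: Vec<(String, String)>,
    restart_on_failure: bool,
    max_restarts: u32,
    restart_delay_ms: u64,
}

impl ServerConfig {
    fn new(command: &str) -> Self {
        Self {
            command: command.to_string(),
            args: Vec::new(),
            env: Vec::new(),
            restart_on_failure: true, // default: true
            max_restarts: 3,          // default: 3
            restart_delay_ms: 5000,   // default: 5000
        }
    }
}

fn main() {
    let cfg = ServerConfig::new("mcp-server-example");
    assert!(cfg.restart_on_failure);
    assert_eq!(cfg.max_restarts, 3);
    assert_eq!(cfg.restart_delay_ms, 5000);
}
```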
The proxy captures all server output to rotating log files:

- Location: `~/.mcp-proxy/logs/{server-name}/server.log`
- Format: `[timestamp] [STDOUT|STDERR] message`
- Rotation: Automatic at 10 MB, with 2-day retention
- API access: `GET /api/logs/{server}?lines=N&type=stdout|stderr` and `GET /api/logs/{server}/stream` (Server-Sent Events)
The web UI can be configured with:

- `enabled`: Whether to enable the web UI (default: `true`)
- `port`: Port to listen on (default: `3001`)
- `host`: Host to bind to (default: `"0.0.0.0"`)
- `apiKey`: Optional API key for authentication
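Putting those options together, a locked-down local setup might look like this (the loopback `host` is an assumption for local-only access, not a project default):

```yaml
webUI:
  enabled: true
  port: 3001
  host: "127.0.0.1"  # assumption: bind to loopback only
  apiKey: "${WEB_UI_API_KEY}"
```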
The proxy server is built with:
- Tokio: Async runtime for high-performance I/O
- Warp: Web framework for the proxy and web UI
- DashMap: Lock-free concurrent hash maps
- Prometheus: Metrics collection and export
- Serde: Configuration serialization/deserialization
- Yew: Rust/WASM framework for the web UI
The project includes a Nix flake for reproducible builds and development environments:
```bash
# Enter development shell with all tools
nix develop

# Build the project
nix build

# Build for specific platforms
nix build .#x86_64-linux
nix build .#aarch64-linux
nix build .#x86_64-darwin   # macOS only
nix build .#aarch64-darwin  # macOS only

# Build Docker image
nix build .#docker

# Run directly
nix run github:zach-source/mcp-rust-proxy
```
Alternatively, use direnv to load the development environment automatically:

```bash
# Install direnv: https://direnv.net
direnv allow
# Now all tools are automatically available when you cd into the project
```
To use the binary cache for faster builds:
```bash
# Install cachix
nix-env -iA cachix -f https://cachix.org/api/v1/install

# Use the project's cache
cachix use mcp-rust-proxy
```
For maintainers building and pushing to cache:
```bash
# Build and push the result to the cache
nix build .#x86_64-linux && cachix push mcp-rust-proxy ./result
```
```bash
# Build without UI (faster for development)
cargo build --release

# Build with UI (requires trunk)
BUILD_YEW_UI=1 cargo build --release

# Run tests
cargo test
```
```text
src/
├── config/       # Configuration loading and validation
├── transport/    # Transport implementations (stdio, HTTP/SSE, WebSocket)
├── proxy/        # Core proxy logic and request routing
├── server/       # Server lifecycle management
├── state/        # Application state and metrics
├── logging/      # File-based logging system
├── web/          # Web UI and REST API
└── main.rs       # Application entry point

yew-ui/           # Rust/WASM web UI
├── src/
│   ├── components/  # Yew components
│   ├── api/         # API client and WebSocket handling
│   └── types/       # Shared types
└── style.css     # UI styles
```
- Web UI: Click the "Logs" button for any server to view real-time logs
- Files: Check `~/.mcp-proxy/logs/{server-name}/server.log`
- API: `curl "http://localhost:3001/api/logs/server-name?lines=50"`
- Stream: `curl "http://localhost:3001/api/logs/server-name/stream"`
Prometheus metrics are available at `/metrics`:

- `mcp_proxy_requests_total`
- `mcp_proxy_request_duration_seconds`
- `mcp_proxy_active_connections`
- `mcp_proxy_server_restarts_total`
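To collect these with Prometheus, a minimal scrape job might look like the following. The target port is an assumption (this document does not state which port serves `/metrics`); adjust it to your deployment:

```yaml
scrape_configs:
  - job_name: "mcp-proxy"
    metrics_path: /metrics
    static_configs:
      - targets: ["localhost:3000"]  # assumption: metrics on the proxy port
```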
Configure health checks to monitor server availability:

```yaml
healthChecks:
  critical-server:
    enabled: true
    intervalSeconds: 30
    timeoutMs: 5000
    threshold: 3  # Failures before marking unhealthy
```
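The `threshold` semantics (consecutive failures before a server is marked unhealthy, reset by any success) can be sketched like this; the type and method names are illustrative, not the proxy's actual code:

```rust
/// Counts consecutive failed checks; a server is considered unhealthy once
/// the count reaches `threshold`, and recovers on the next successful check.
struct HealthTracker {
    threshold: u32,
    consecutive_failures: u32,
}

impl HealthTracker {
    fn new(threshold: u32) -> Self {
        Self { threshold, consecutive_failures: 0 }
    }

    /// Record one check result; returns true while the server is healthy.
    fn record(&mut self, check_ok: bool) -> bool {
        if check_ok {
            self.consecutive_failures = 0;
        } else {
            self.consecutive_failures += 1;
        }
        self.consecutive_failures < self.threshold
    }
}

fn main() {
    let mut t = HealthTracker::new(3);
    assert!(t.record(false));  // 1st failure: still healthy
    assert!(t.record(false));  // 2nd failure: still healthy
    assert!(!t.record(false)); // 3rd consecutive failure: unhealthy
    assert!(t.record(true));   // a success resets the counter
}
```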
Licensed under the MIT License.