An MCP server that executes Python code in isolated rootless containers with optional MCP server proxying.
This bridge implements the "Code Execution with MCP" pattern—a revolutionary approach to using Model Context Protocol tools. Instead of exposing all MCP tools directly to Claude (consuming massive context), the bridge:
- Auto-discovers configured MCP servers
- Proxies tools into sandboxed code execution
- Eliminates context overhead (95%+ reduction)
- Enables complex workflows through Python code
- Rootless containers - No privileged helpers required
- Network isolation - No network access
- Read-only filesystem - Immutable root
- Dropped capabilities - No system access
- Unprivileged user - Runs as UID 65534
- Resource limits - Memory, PIDs, CPU, time
- Auto-cleanup - Temporary IPC directories
- Persistent clients - MCP servers stay warm
- Context efficiency - 95%+ reduction vs traditional MCP
- Async execution - Proper resource management
- Single tool - Only `run_python` in Claude's context
- Multiple access patterns:
mcp_servers["server"] # Dynamic lookup mcp_server_name # Attribute access from mcp.servers.server import * # Module import
- Top-level await - Modern Python patterns
- Type-safe - Proper signatures and docs
- TOON responses - Tool outputs are emitted as TOON code blocks for token-efficient prompting
- We encode every MCP bridge response using Token-Oriented Object Notation (TOON).
- TOON collapses repetitive JSON keys and emits newline-aware arrays, trimming token counts 30-60% for uniform tables so LLM bills stay lower.
- Clients that expect plain JSON can still recover the structured payload: the TOON code block includes the same fields (status, stdout, stderr, etc.) and we fall back to JSON automatically if the encoder is unavailable.
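For a rough sense of the savings, here is a small, uniform result table rendered both ways. The file names and counts are invented for illustration, and the exact TOON output depends on the encoder version in use:

```python
# Hypothetical tool output: a small, uniform table (values invented for illustration).
files = [
    {"path": "a.txt", "todos": 2},
    {"path": "b.txt", "todos": 0},
    {"path": "c.txt", "todos": 5},
]

# JSON repeats every key on every row; TOON declares the keys once and then
# streams the values, roughly like this (exact output depends on the encoder):
#
#   files[3]{path,todos}:
#     a.txt,2
#     b.txt,0
#     c.txt,5
```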
- Install a rootless container runtime (Podman or Docker).
  - macOS: `brew install podman` or `brew install --cask docker`
  - Ubuntu/Debian: `sudo apt-get install -y podman` or `curl -fsSL https://get.docker.com | sh`
- Install uv to manage this project: `curl -LsSf https://astral.sh/uv/install.sh | sh`
- Pull a Python base image once your runtime is ready: `podman pull python:3.12-slim` (or `docker pull python:3.12-slim`)
Use uv to sync the project environment:
```bash
uv sync
uv run python mcp_server_code_execution_mode.py
```

File: `~/.config/mcp/servers/mcp-server-code-execution-mode.json`

```json
{
  "mcpServers": {
    "mcp-server-code-execution-mode": {
      "command": "uv",
      "args": ["run", "python", "/absolute/path/to/mcp_server_code_execution_mode.py"],
      "env": {
        "MCP_BRIDGE_RUNTIME": "podman"
      }
    }
  }
}
```

Restart Claude Code.

```python
# Use MCP tools in sandboxed code
result = await mcp_filesystem.read_file(path='/tmp/test.txt')
# Complex workflows
data = await mcp_search.search(query="TODO")
await mcp_github.create_issue(repo='owner/repo', title=data.title)
```

```
┌────────────────┐
│   MCP Client   │  (Claude Code)
└───────┬────────┘
        │ stdio
        ▼
┌────────────────┐
│ MCP Code Exec  │  ← Discovers, proxies, manages
│     Bridge     │
└───────┬────────┘
        │ container
        ▼
┌────────────────┐
│   Container    │  ← Executes with strict isolation
│    Sandbox     │
└────────────────┘
```
Process:
- Client calls `run_python(code, servers, timeout)`
- Bridge loads the requested MCP servers
- Prepares a sandbox invocation: collects MCP tool metadata, writes an entrypoint into a shared `/ipc` volume, and exports `MCP_AVAILABLE_SERVERS`
- Generated entrypoint rewires stdio into JSON-framed messages and proxies MCP calls over the container's stdin/stdout pipe (a sketch of such a frame follows this list)
- Runs the container with security constraints
- Host stream handler processes JSON frames, forwards MCP traffic, enforces timeouts, and cleans up
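To make the proxying step concrete, here is a minimal sketch of a JSON-framed tool call as the generated entrypoint might emit it. The frame field names (`type`, `id`, `server`, `tool`, `arguments`) are assumptions for illustration, not the bridge's actual wire format:

```python
import json
import sys

def send_frame(frame: dict) -> None:
    # One JSON object per line over the container's stdout pipe; the host
    # stream handler parses each line and forwards the MCP traffic.
    sys.stdout.write(json.dumps(frame) + "\n")
    sys.stdout.flush()

# A sandboxed call such as `await mcp_filesystem.read_file(path='/tmp/test.txt')`
# could be forwarded to the host as a frame like this:
send_frame({
    "type": "tool_call",
    "id": 1,
    "server": "filesystem",
    "tool": "read_file",
    "arguments": {"path": "/tmp/test.txt"},
})
```

The host would answer with a matching result frame on the container's stdin, which the entrypoint resolves back into the awaited proxy call.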
| Variable | Default | Description |
|---|---|---|
| `MCP_BRIDGE_RUNTIME` | `auto` | Container runtime (podman/docker) |
| `MCP_BRIDGE_IMAGE` | `python:3.12-slim` | Container image |
| `MCP_BRIDGE_TIMEOUT` | `30s` | Default timeout |
| `MCP_BRIDGE_MAX_TIMEOUT` | `120s` | Max timeout |
| `MCP_BRIDGE_MEMORY` | `512m` | Memory limit |
| `MCP_BRIDGE_PIDS` | `128` | Process limit |
| `MCP_BRIDGE_CPUS` | - | CPU limit |
| `MCP_BRIDGE_CONTAINER_USER` | `65534:65534` | Run as UID:GID |
| `MCP_BRIDGE_RUNTIME_IDLE_TIMEOUT` | `300s` | Shutdown delay |
| `MCP_BRIDGE_STATE_DIR` | `./.mcp-bridge` | Host directory for IPC sockets and temp state |
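For quick reference, the sketch below reads the same knobs from the environment with their documented defaults. It is illustrative only and not the bridge's actual configuration code; in particular, how duration values like `30s` are parsed may differ:

```python
import os

# Illustrative only: mirrors the defaults documented in the table above.
config = {
    "runtime": os.environ.get("MCP_BRIDGE_RUNTIME", "auto"),
    "image": os.environ.get("MCP_BRIDGE_IMAGE", "python:3.12-slim"),
    "timeout": os.environ.get("MCP_BRIDGE_TIMEOUT", "30s"),
    "max_timeout": os.environ.get("MCP_BRIDGE_MAX_TIMEOUT", "120s"),
    "memory": os.environ.get("MCP_BRIDGE_MEMORY", "512m"),
    "pids": os.environ.get("MCP_BRIDGE_PIDS", "128"),
    "container_user": os.environ.get("MCP_BRIDGE_CONTAINER_USER", "65534:65534"),
    "state_dir": os.environ.get("MCP_BRIDGE_STATE_DIR", "./.mcp-bridge"),
}
print(config)
```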
Scanned Locations:
- `~/.claude.json`
- `~/Library/Application Support/Claude Code/claude_code_config.json`
- `~/Library/Application Support/Claude/claude_code_config.json` (early Claude Code builds)
- `~/Library/Application Support/Claude/claude_desktop_config.json` (Claude Desktop fallback)
- `~/.config/mcp/servers/*.json`
- `./claude_code_config.json`
- `./claude_desktop_config.json` (project-local fallback)
- `./mcp-servers/*.json`
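A minimal sketch of what scanning these locations amounts to, assuming each file carries a standard `mcpServers` block; the bridge's real discovery logic may differ in precedence, and the macOS Application Support paths are omitted here for brevity:

```python
import json
from pathlib import Path

# Subset of the scanned locations listed above.
CANDIDATES = [
    Path.home() / ".claude.json",
    *sorted((Path.home() / ".config/mcp/servers").glob("*.json")),
    Path("claude_code_config.json"),
    Path("claude_desktop_config.json"),
    *sorted(Path("mcp-servers").glob("*.json")),
]

def discover_servers() -> dict:
    """Merge every `mcpServers` block found in the candidate config files."""
    servers: dict = {}
    for path in CANDIDATES:
        if not path.is_file():
            continue
        try:
            data = json.loads(path.read_text())
        except (OSError, json.JSONDecodeError):
            continue
        servers.update(data.get("mcpServers", {}))
    return servers

print(sorted(discover_servers()))
```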
Example Server (`~/.config/mcp/servers/filesystem.json`):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    }
  }
}
```

When you rely on `docker mcp gateway run` to expose third-party MCP servers, the bridge simply executes the gateway binary. The gateway is responsible for pulling tool images and wiring stdio transports, so make sure the host environment is ready:
- Run `docker login` for every registry referenced in the gateway catalog (e.g. Docker Hub `mcp/*` images, `ghcr.io/github/github-mcp-server`). Without cached credentials the pull step fails before any tools come online.
- Provide required secrets for those servers: `github-official` needs `github.personal_access_token`, others may expect API keys or auth tokens. Use `docker mcp secret set <name>` (or whichever mechanism your gateway is configured with) so the container sees the values at start-up.
- Mirror any volume mounts or environment variables that the catalog expects (filesystem paths, storage volumes, etc.). Missing mounts or credentials commonly surface as `failed to connect: calling "initialize": EOF` during the stdio handshake.
- If `list_tools` only returns the internal management helpers (`mcp-add`, `code-mode`, …), the gateway never finished initializing the external servers; check the gateway logs for missing secrets or registry access errors.
- Runtime artifacts (including the generated `/ipc/entrypoint.py` and related handshake metadata) live under `./.mcp-bridge/` by default. Set `MCP_BRIDGE_STATE_DIR` to relocate them.
- When the selected runtime is Podman, the bridge automatically issues `podman machine set --rootful --now --volume <state_dir>:<state_dir>` so the VM can mount the directory.
- Docker Desktop does not expose a CLI for file sharing; ensure the chosen state directory is marked as shared in Docker Desktop → Settings → Resources → File Sharing before running the bridge.
- To verify a share manually, run `docker run --rm -v $PWD/.mcp-bridge:/ipc alpine ls /ipc` (or the Podman equivalent) and confirm the files are visible.
```python
# List and filter files
files = await mcp_filesystem.list_directory(path='/tmp')
for file in files:
    content = await mcp_filesystem.read_file(path=file)
    if 'TODO' in content:
        print(f"TODO in {file}")
```

```python
# Extract data
transcript = await mcp_google_drive.get_document(documentId='abc123')
# Process
summary = transcript[:500] + "..."
# Store
await mcp_salesforce.update_record(
    objectType='SalesMeeting',
    recordId='00Q5f000001abcXYZ',
    data={'Notes': summary}
)
```

```python
# Jira → GitHub migration
issues = await mcp_jira.search_issues(project='API', status='Open')
for issue in issues:
    details = await mcp_jira.get_issue(id=issue.id)
    if 'bug' in details.description.lower():
        await mcp_github.create_issue(
            repo='owner/repo',
            title=f"Bug: {issue.title}",
            body=details.description
        )
```

```python
from mcp import runtime
print("Discovered:", runtime.discovered_servers())
print("Loaded metadata:", runtime.list_loaded_server_metadata())
print("Selectable via RPC:", await runtime.list_servers())
# Peek at tool docs for a server that's already loaded in this run
loaded = runtime.list_loaded_server_metadata()
if loaded:
    first = runtime.describe_server(loaded[0]["name"])
    for tool in first["tools"]:
        print(tool["alias"], "→", tool.get("description", ""))
```

Example output seen by the LLM when running the snippet above with the stub server:

```
Discovered: ('stub',)
Loaded metadata: ({'name': 'stub', 'alias': 'stub', 'tools': [{'name': 'echo', 'alias': 'echo', 'description': 'Echo the provided message', 'input_schema': {...}}]},)
Selectable via RPC: ('stub',)
```
| Constraint | Setting | Purpose |
|---|---|---|
| Network | `--network none` | No external access |
| Filesystem | `--read-only` | Immutable base |
| Capabilities | `--cap-drop ALL` | No system access |
| Privileges | `no-new-privileges` | No escalation |
| User | `65534:65534` | Unprivileged |
| Memory | `--memory 512m` | Resource cap |
| PIDs | `--pids-limit 128` | Process cap |
| Workspace | tmpfs, noexec | Safe temp storage |
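Put together, the sandbox invocation looks roughly like the sketch below. It is assembled from the table above for illustration; flag order, the workspace mount, and the entrypoint invocation are assumptions rather than the bridge's literal command line:

```python
# Illustrative only: assembled from the constraints above, not the exact command
# the bridge builds. The /ipc volume corresponds to MCP_BRIDGE_STATE_DIR (see above).
sandbox_args = [
    "podman", "run", "--rm", "-i",
    "--network", "none",                  # no external access
    "--read-only",                        # immutable base image
    "--cap-drop", "ALL",                  # drop all capabilities
    "--security-opt", "no-new-privileges",
    "--user", "65534:65534",              # unprivileged UID:GID
    "--memory", "512m",
    "--pids-limit", "128",
    "--tmpfs", "/workspace:rw,noexec",    # safe scratch space
    "-v", "/absolute/path/to/.mcp-bridge:/ipc",  # shared IPC volume (assumed mount)
    "python:3.12-slim",
    "python", "/ipc/entrypoint.py",       # assumed entrypoint invocation
]
print(" ".join(sandbox_args))
```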
| Action | Allowed | Details |
|---|---|---|
| Import stdlib | ✅ | Python standard library |
| Access MCP tools | ✅ | Via proxies |
| Memory ops | ✅ | Process data |
| Write to disk | ✅ | Only /tmp, /workspace |
| Network | ❌ | Completely blocked |
| Host access | ❌ | No system calls |
| Privilege escalation | ❌ | Prevented by sandbox |
| Container escape | ❌ | Rootless + isolation |
- README.md - This file, quick start
- GUIDE.md - Comprehensive user guide
- ARCHITECTURE.md - Technical deep dive
- HISTORY.md - Evolution and lessons
- STATUS.md - Current state and roadmap
- Rootless container sandbox
- Single `run_python` tool
- MCP server proxying
- Persistent clients
- Comprehensive docs
- Automated testing
- Observability (logging, metrics)
- Policy controls
- Runtime diagnostics
- Connection pooling
- Web UI
- Multi-language support
- Workflow orchestration
GPLv3 License
For issues or questions, see the documentation or file an issue.