OneOffDockerPython is a REST API service built with FastAPI that runs Docker containers with one-off commands. It supports pulling images from Docker registries with authentication, setting environment variables, and customizing the container's command and entrypoint.
- Run Docker containers with one-off commands
- Pull images from private Docker registries with authentication
- Set environment variables for container execution
- Customize command and entrypoint for container execution
- Capture and return both stdout and stderr output
- Create Docker volumes from base64-encoded tar.gz archives
- Separate MCP (Model Context Protocol) server with Streamable HTTP transport
- Python 3.10 or higher
- Docker
- uv (recommended) or pip for package management
- Dependencies managed via pyproject.toml
docker run --rm -p 8000:8000 -p 8001:8001 -v /var/run/docker.sock:/var/run/docker.sock ghcr.io/tumf/oneoff-docker-runner
Run a one-off Docker command like this:
curl -X 'POST' \
'http://0.0.0.0:8000/run' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"image": "alpine:latest",
"command": [
"/test.sh"
],
"volumes": {
"/app/data": {
"content": "H4sIAGq4eGYAA0tJLEnUZ6AtMDAwMDc1VQDTZhDawMgEQkOBgqGJmbGZobGJobGBgoGhkaGBGYOCKY3dBQalxSWJRUCnlJTmpuFTB1SWhk8B1B9wehSMglEwCgY5AADBaWLyAAYAAA==",
"response": true,
"type": "directory"
},
"/test.sh:ro": {
"mode" : "0755",
"content": "IyEvYmluL2FzaAoKZWNobyAiSGVsbG8sIFdvcmxkISIgPiAvYXBwL2RhdGEvdGVzdC50eHQ=",
"type": "file"
}
}
}'
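The content values in the volumes mapping above are base64 strings: a gzipped tar archive for a directory mount and the raw file bytes for a file mount. The following is a minimal Python sketch for producing such strings; the local paths data/ and test.sh are placeholders for your own files.

# Sketch: build the base64 strings used in the "volumes" mapping above.
import base64
import io
import tarfile

# Directory content: a gzipped tar archive, base64-encoded.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    tar.add("data", arcname=".")          # placeholder directory
dir_content = base64.b64encode(buf.getvalue()).decode()

# File content: the raw file bytes, base64-encoded.
with open("test.sh", "rb") as f:          # placeholder script
    file_content = base64.b64encode(f.read()).decode()

print(dir_content)
print(file_content)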
- Install uv if you haven't already:
curl -LsSf https://astral.sh/uv/install.sh | sh
- Clone the repository:
git clone https://github.com/tumf/oneoff-docker-runner.git
cd oneoff-docker-runner
- Install dependencies and create a virtual environment:
uv sync
- Alternatively, to install with pip, clone the repository:
git clone https://github.com/tumf/oneoff-docker-runner.git
cd oneoff-docker-runner
- Create and activate a virtual environment:
python -m venv venv
source venv/bin/activate # On Windows use `venv\Scripts\activate`
- Install the dependencies:
pip install -e .
Create a .env file in the project root (if needed) to set environment variables for Docker:
DOCKER_HOST=tcp://your-docker-host:2376
DOCKER_TLS_VERIFY=1
DOCKER_CERT_PATH=/path/to/certs
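These are the standard Docker client variables. For reference, a client created with the Docker SDK for Python via docker.from_env() picks up DOCKER_HOST, DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH automatically, so you can sanity-check the connection with a short sketch like this (assumes the docker package is installed; shown only for illustration):

# Sketch: verify that the Docker daemon is reachable with the configured settings.
# docker.from_env() honors DOCKER_HOST, DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH.
import docker

client = docker.from_env()
print(client.ping())                  # True when the daemon is reachable
print(client.version()["Version"])    # daemon version string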
- Start both servers:
# Option 1: Start both servers manually
python main.py & # REST API on port 8000
python mcp.py & # MCP Server on port 8001
# Option 2: Use the start script (recommended)
./start.sh
# Option 3: Use Docker (includes both servers)
docker run -d --name oneoff-docker-runner \
-p 8000:8000 -p 8001:8001 \
-v /var/run/docker.sock:/var/run/docker.sock \
oneoff-docker-runner
This starts:
- REST API (main.py): http://localhost:8000 (/run, /volume, /health, /docs)
- MCP Server (mcp.py): http://localhost:8001 (/mcp, Streamable HTTP)
- Send a POST request to the /run endpoint with the following JSON body to run a Docker container:
{
"image": "alpine:latest",
"command": ["echo", "Hello, World!"],
"env_vars": {
"MY_VAR": "value"
},
"auth_config": {
"username": "your-username",
"password": "your-password",
"email": "your-email@example.com",
"serveraddress": "https://index.docker.io/v1/"
}
}
- The API will return a JSON response with the stdout and stderr output from the container:
{
"status": "success",
"stdout": "Hello, World!\n",
"stderr": ""
}
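The same request can also be made from Python; below is a minimal client sketch using the requests package (not part of this project, shown only for illustration). An auth_config block can be added to the payload exactly as in the JSON above.

# Sketch: call the /run endpoint from Python and print the captured output.
import requests

payload = {
    "image": "alpine:latest",
    "command": ["echo", "Hello, World!"],
    "env_vars": {"MY_VAR": "value"},
}

resp = requests.post("http://localhost:8000/run", json=payload, timeout=120)
resp.raise_for_status()
result = resp.json()
print(result["status"])   # "success"
print(result["stdout"])   # "Hello, World!\n"
print(result["stderr"])   # ""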
Execute Docker containers directly from AI clients (Claude Desktop, Cursor, n8n, etc.) using MCP Streamable HTTP transport.
# Both servers
./start.sh
# Or just MCP server
python mcp.py
The MCP server provides Streamable HTTP transport at:
- MCP Endpoint: http://localhost:8001/mcp
- Protocol: MCP Streamable HTTP (2024-11-05)
- Content Types: JSON and Server-Sent Events (SSE)
n8n MCP Client Tool:
- MCP Endpoint: http://localhost:8001/mcp
- Authentication: None
- Tools to Include: All
Claude Desktop:
Add to claude_desktop_config.json:
{
"mcpServers": {
"docker-runner": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-stdio", "http://localhost:8001/mcp"]
}
}
}
Cursor: Add to Cursor settings under "MCP Servers":
- Name: docker-runner
- URL: http://localhost:8001/mcp
Other MCP-compatible clients: Configure the MCP server URL as http://localhost:8001/mcp.
- run_container: Execute Docker containers
- create_volume: Create Docker volumes
- docker_health: Check Docker environment status
- list_containers: List Docker containers
- list_images: List Docker images
Execute a one-off Docker container
Use curl (as shown above) to make a POST request to the /run endpoint with the following JSON body. Replace the placeholders with your actual image details and authentication information.
{
"image": "your-registry/your-image:tag",
"command": ["echo", "Hello, World!"],
"env_vars": {
"MY_VAR": "value"
},
"pull_policy": "always"
}
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| pull_policy | string | No | "always" | Image pull policy. Possible values: "always" (always pull the image), "never" (use the local image only) |
The API will return a JSON response with the stdout and stderr output from the container:
{
"status": "success",
"stdout": "Hello, World!\n",
"stderr": ""
}
This example demonstrates how to use the API to run a one-off Docker container with specified image, command, environment variables, and authentication information. The response will include the standard output and standard error from the executed command within the container.
Create a Docker volume
First, prepare the base64-encoded tar.gz content for the volume:
$ tar czf tmp.tar.gz target_dir
$ base64 < tmp.tar.gz
Then create the my-volume Docker volume with that content by posting to the /volume endpoint:
{
"name": "my-volume",
"content": "H4sIAIQOfmYAA+2TMQ7DIAxFcxRO0Biw4TxIabYsjSPl+HUhStWFjdCqfgxeLPHh6U+J0zi0BQAikckzlAkOyzwwFoOPCBEoGLBOzmCoca7MtnJ6SBTelrm2J2tzbeF4xzl/hOnln+8r2xvv3OYO+Y+AWPNPb/8RxL+PVvxDmzif/Ln/rL53CKUbZ//dt/Tfl/6T9v8KsvreIRRFUZTLeQL28PKYAA4AAA=="
}
{
"status": "success",
"detail": "success"
}
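If you prefer to script the volume creation, the following Python sketch posts the same request to the /volume endpoint. The file tmp.tar.gz produced by the tar command above is a placeholder for your own archive, and the requests package is assumed to be installed.

# Sketch: base64-encode the tar.gz archive and create the volume via /volume.
import base64

import requests

with open("tmp.tar.gz", "rb") as f:                    # archive built with `tar czf`
    content = base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:8000/volume",
    json={"name": "my-volume", "content": content},
    timeout=60,
)
print(resp.json())   # e.g. {"status": "success", "detail": "success"}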
For Docker Hub, you can use your Docker Hub username and password:
{
"image": "your-dockerhub-repo/your-image:tag",
"auth_config": {
"username": "your-username",
"password": "your-password",
"email": "your-email@example.com",
"serveraddress": "https://index.docker.io/v1/"
}
}
For Google Container Registry, you need to use a service account key. Here is how to set it up:
- Create a service account in the Google Cloud Console:
  - Go to the Google Cloud Console.
  - Navigate to IAM & Admin > Service Accounts.
  - Click + CREATE SERVICE ACCOUNT at the top.
  - Enter a name for the service account and click CREATE AND CONTINUE.
  - Grant the service account the necessary permissions (e.g., Storage Admin for accessing GCR).
  - Click DONE.
- Download the service account key as a JSON file:
  - Find the created service account in the list.
  - Click the Actions (three dots) button and select Manage keys.
  - Click ADD KEY > Create new key.
  - Select JSON and click CREATE.
  - The JSON file will be downloaded to your computer.
- Encode the JSON key file in base64:
  - Open the downloaded JSON file.
  - Encode the entire contents of this JSON file to base64.
For example, on a Unix-based system (Linux, macOS), you can use the following command to encode the JSON file:
base64 /path/to/your-service-account-file.json
On Windows, you can use PowerShell:
[Convert]::ToBase64String([System.IO.File]::ReadAllBytes("path\to\your-service-account-file.json"))
- Use the base64 encoded string for authentication:
  - Use the base64 encoded string as the password in the auth_config.
Here is an example of what the service account JSON key file might look like before encoding:
{
"type": "service_account",
"project_id": "your-project-id",
"private_key_id": "somekeyid",
"private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
"client_email": "your-service-account-email@your-project-id.iam.gserviceaccount.com",
"client_id": "someclientid",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/your-service-account-email%40your-project-id.iam.gserviceaccount.com"
}
Use the base64 encoded string as shown below:
{
"image": "gcr.io/your-project/your-image:tag",
"auth_config": {
"username": "_json_key",
"password": "your-base64-encoded-service-account-json-key-content",
"email": "your-service-account-email@your-project-id.iam.gserviceaccount.com",
"serveraddress": "https://gcr.io"
}
}
In this example, replace your-base64-encoded-service-account-json-key-content and other placeholder values with the actual base64 encoded string and other values from your downloaded service account JSON file.
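To avoid copying the base64 string around by hand, the encoding and the /run request can be combined in a small Python sketch; the key path, command, and image name below are placeholders, and the requests package is assumed to be installed.

# Sketch: base64-encode a GCR service account key and use it as the auth_config password.
import base64

import requests

with open("/path/to/your-service-account-file.json", "rb") as f:   # placeholder path
    key_b64 = base64.b64encode(f.read()).decode()

payload = {
    "image": "gcr.io/your-project/your-image:tag",                 # placeholder image
    "command": ["echo", "pulled from GCR"],
    "auth_config": {
        "username": "_json_key",
        "password": key_b64,
        "email": "your-service-account-email@your-project-id.iam.gserviceaccount.com",
        "serveraddress": "https://gcr.io",
    },
}
resp = requests.post("http://localhost:8000/run", json=payload, timeout=300)
print(resp.json())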
For GitHub Container Registry, you need to use your GitHub username and a Personal Access Token (PAT):
1. Generate a Personal Access Token in your GitHub account settings with the read:packages scope.
2. Use your GitHub username and the generated PAT for authentication.
{
"image": "ghcr.io/your-username/your-image:tag",
"auth_config": {
"username": "your-github-username",
"password": "your-github-pat",
"email": "your-email@example.com",
"serveraddress": "https://ghcr.io"
}
}
# Start both REST API and MCP servers
./start.sh
# Or with uv
uv run ./start.sh
# Or manually
uv run python main.py & # REST API on port 8000
uv run python mcp.py & # MCP Server on port 8001
Output:
Starting Docker Runner servers...
- REST API (main.py) on port 8000
- MCP Server (mcp.py) on port 8001
# Check server health
curl http://localhost:8000/health
# Run a simple container
curl -X POST http://localhost:8000/run \
-H "Content-Type: application/json" \
-d '{
"image": "alpine:latest",
"command": ["echo", "Hello from REST API!"],
"pull_policy": "always"
}'
# Run container with environment variables
curl -X POST http://localhost:8000/run \
-H "Content-Type: application/json" \
-d '{
"image": "alpine:latest",
"command": ["sh", "-c", "echo \"Env var: $TEST_VAR\""],
"env_vars": {"TEST_VAR": "production"},
"pull_policy": "always"
}'
Test the MCP Streamable HTTP implementation:
# Initialize MCP connection
curl -X POST "http://localhost:8001/mcp" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-d '{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "2024-11-05",
"capabilities": {"tools": {}},
"clientInfo": {"name": "test-client", "version": "1.0.0"}
}
}'
# List available tools
curl -X POST "http://localhost:8001/mcp" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-d '{
"jsonrpc": "2.0",
"id": 2,
"method": "tools/list"
}'
# Run a container via MCP
curl -X POST "http://localhost:8001/mcp" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-d '{
"jsonrpc": "2.0",
"id": 3,
"method": "tools/call",
"params": {
"name": "run_container",
"arguments": {
"image": "alpine:latest",
"command": ["echo", "Hello from MCP!"]
}
}
}'
# Test SSE (Server-Sent Events) response
curl -X POST "http://localhost:8001/mcp" \
-H "Content-Type: application/json" \
-H "Accept: text/event-stream" \
-d '{
"jsonrpc": "2.0",
"id": 4,
"method": "tools/call",
"params": {
"name": "docker_health",
"arguments": {}
}
}'
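The same Streamable HTTP exchange can be scripted. The sketch below mirrors the curl requests above with plain JSON-RPC over HTTP using the requests package; it is not an official MCP client, and if your server issues an Mcp-Session-Id header you may need to echo it on subsequent requests.

# Sketch: exercise the /mcp endpoint with JSON-RPC over HTTP, mirroring the curl examples.
import requests

MCP_URL = "http://localhost:8001/mcp"
HEADERS = {"Content-Type": "application/json", "Accept": "application/json"}

def rpc(method, params=None, id_=1):
    body = {"jsonrpc": "2.0", "id": id_, "method": method}
    if params is not None:
        body["params"] = params
    resp = requests.post(MCP_URL, json=body, headers=HEADERS, timeout=120)
    resp.raise_for_status()
    return resp.json()

# Initialize, list the tools, then call run_container.
print(rpc("initialize", {
    "protocolVersion": "2024-11-05",
    "capabilities": {"tools": {}},
    "clientInfo": {"name": "test-client", "version": "1.0.0"},
}, id_=1))
print(rpc("tools/list", id_=2))
print(rpc("tools/call", {
    "name": "run_container",
    "arguments": {"image": "alpine:latest", "command": ["echo", "Hello from MCP!"]},
}, id_=3))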
The MCP server is compatible with standard MCP clients:
n8n Integration:
- Add MCP Client Tool node to your workflow
- Configure MCP Endpoint: http://localhost:8001/mcp
- Set Authentication to None
- Select Tools to Include: All
- Use the available tools (run_container, create_volume, docker_health, list_containers, list_images) in your workflows
Claude Desktop/Cursor:
Configure the MCP server in your client settings using the endpoint http://localhost:8001/mcp to enable AI-powered Docker container management.
The dual-server architecture provides:
- REST API Server (port 8000): Traditional HTTP REST API for direct integration
  - FastAPI with automatic OpenAPI documentation at /docs
  - Endpoints: /run, /volume, /health
  - Direct Docker container execution
- MCP Server (port 8001): Model Context Protocol for AI agent integration
  - MCP Streamable HTTP (2024-11-05 specification)
  - Single endpoint: /mcp for all MCP communication
  - Dual Response Types: JSON and SSE based on the Accept header
  - Session Management: Mcp-Session-Id header support
  - Tools: run_container, create_volume, docker_health, list_containers, list_images
- Unified Management: Both servers managed via the start.sh script
  - Concurrent execution with proper signal handling
  - Graceful shutdown for both processes
  - Docker integration with shared socket mounting
This design provides maximum flexibility, allowing direct REST API usage for traditional integrations while offering full MCP compatibility for AI-powered workflows through dedicated, standards-compliant transport.