HaaS is an environment harness service for AI agents. It provides a REST API and an MCP server to spin up isolated Docker containers on-demand, giving agents a full machine to work with — complete with shell access, file management, and automatic lifecycle cleanup.
The premise is simple: AI agents work better when they have a real environment to operate in. Instead of sandboxed snippets or simulated shells, HaaS gives each agent its own container with a real filesystem, real networking, and real command execution — then cleans it up when the agent is done.
```mermaid
graph TB
    Agent["🤖 AI Agent"]
    HaaS["HaaS API Server<br/>:8080"]
    MCP["MCP Server<br/>:8091"]
    Docker["Docker Daemon"]
    Agent -->|REST API| HaaS
    Agent -->|MCP / Streamable HTTP| MCP
    MCP -->|REST API| HaaS
    HaaS -->|Docker SDK| Docker
    subgraph Containers["Managed Containers"]
        C1["env_a1b2c3d4<br/>ubuntu:22.04"]
        C2["env_e5f6g7h8<br/>python:3.12"]
        C3["env_i9j0k1l2<br/>node:20"]
    end
    Docker --- C1
    Docker --- C2
    Docker --- C3
    Reaper["⏰ Reaper<br/>cleanup every 30s"]
    Reaper -->|stop expired| Docker
    Reaper -.->|check idle/lifetime| HaaS
```
```mermaid
graph TD
    Main["cmd/haas/main.go<br/>Bootstrap & Graceful Shutdown"]
    Main --> API
    Main --> MCP
    Main --> Lifecycle
    Main --> Config
    Main --> Engine
    subgraph API["API Layer (internal/api)"]
        Router["Router & Middleware"]
        EnvH["Environments Handler<br/>CRUD"]
        ExecH["Exec Handler<br/>Streaming NDJSON"]
        FilesH["Files Handler<br/>List / Read / Write"]
        Router --> EnvH
        Router --> ExecH
        Router --> FilesH
    end
    subgraph MCP["MCP Server (internal/mcpserver)"]
        MCPSrv["MCP Server<br/>Streamable HTTP"]
        Tools["Tool Handlers<br/>8 haas_* tools"]
        Resources["Resource Handlers<br/>haas://environments"]
        MCPClient["HaaS HTTP Client"]
        MCPSrv --> Tools
        MCPSrv --> Resources
        Tools --> MCPClient
        Resources --> MCPClient
    end
    subgraph Lifecycle["Lifecycle (internal/lifecycle)"]
        Reaper["Reaper<br/>Background cleanup"]
    end
    subgraph Engine["Engine (internal/engine)"]
        EngineI["Engine Interface"]
        DockerE["DockerEngine"]
        MockE["MockEngine"]
        EngineI --> DockerE
        EngineI --> MockE
    end
    subgraph Store["Store (internal/store)"]
        StoreI["Store Interface"]
        MemStore["MemoryStore"]
        StoreI --> MemStore
    end
    Config["Config<br/>env vars"]
    EnvH --> StoreI
    EnvH --> EngineI
    ExecH --> StoreI
    ExecH --> EngineI
    FilesH --> StoreI
    FilesH --> EngineI
    Reaper --> StoreI
    Reaper --> EngineI
    DockerE --> DockerD["Docker Daemon"]
    MCPClient --> Router
```
```mermaid
stateDiagram-v2
    [*] --> creating : POST /v1/environments
    creating --> running : Container started
    running --> stopping : DELETE or Reaper
    stopping --> destroyed : Container removed
    destroyed --> [*]
    note left of running
        exec / file ops reset idle timer
        Idle timeout: 10 min (default)
        Max lifetime: 60 min (default)
    end note
```
```mermaid
sequenceDiagram
    participant Agent
    participant HaaS
    participant Container
    Agent->>HaaS: POST /v1/environments/{id}/exec<br/>{"command": ["bash", "-c", "ls -la"]}
    HaaS->>Container: Docker exec create + attach
    loop NDJSON stream
        Container-->>HaaS: stdout/stderr chunks
        HaaS-->>Agent: {"stream":"stdout","data":"..."}
    end
    Container-->>HaaS: Process exits
    HaaS-->>Agent: {"stream":"exit","data":"0"}
```
- Go 1.22+
- Docker running locally
```sh
# Build REST server
make build

# Build MCP standalone binary
make build-mcp

# Run (starts both REST API on :8080 and MCP server on :8091)
make run
```

HaaS requires an API key before starting. Add it to a `.env` file in the project root:

```sh
HAAS_API_KEYS=your-secret-key
```

Multiple keys are supported (comma-separated):

```sh
HAAS_API_KEYS=key-for-agent-1,key-for-agent-2
```

The same keys are used to authenticate both the REST API and the MCP server.
```sh
# REST API
curl http://localhost:8080/healthz
# {"status":"ok"}

# MCP server
curl -X POST http://localhost:8091/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-secret-key" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'
```

HaaS supports two integration paths. Clients can use either or both.
Direct HTTP calls. Use this when you control the backend and want to implement your own tool execution loop with the Anthropic SDK.
All requests to `/v1/environments` require a Bearer token:

```sh
curl -H "Authorization: Bearer your-secret-key" http://localhost:8080/v1/environments
```

See the API Reference below.
The MCP server starts automatically alongside the REST API on :8091. It exposes all HaaS operations as MCP tools that AI models can call natively.
Tools exposed:
| Tool | Description |
|---|---|
| `haas_create_environment` | Spin up a new container |
| `haas_list_environments` | List active environments |
| `haas_get_environment` | Get environment details |
| `haas_destroy_environment` | Destroy an environment |
| `haas_exec` | Run a command, returns stdout/stderr/exit code |
| `haas_list_files` | List files at a path |
| `haas_read_file` | Read a file |
| `haas_write_file` | Write a file |
Resources exposed:
| URI | Description |
|---|---|
| `haas://environments` | Live list of all active environments |
| `haas://environments/{id}` | Details of a specific environment |
Add to `.vscode/mcp.json`:

```json
{
  "servers": {
    "Haas": {
      "url": "http://localhost:8091",
      "type": "http",
      "headers": {
        "Authorization": "Bearer your-secret-key"
      }
    }
  }
}
```

Build the standalone binary and configure Claude Desktop:

```sh
make build-mcp
```

Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "haas": {
      "command": "/absolute/path/to/haas/bin/haas-mcp",
      "env": {
        "HAAS_URL": "http://localhost:8080",
        "HAAS_API_KEY": "your-secret-key"
      }
    }
  }
}
```

Expose the MCP server publicly (e.g. via ngrok), then pass it directly to the Anthropic SDK:

```sh
# Terminal 1
ngrok http 8091

# Terminal 2 — start HaaS
make run
```

```ts
const response = await anthropic.beta.messages.create({
  model: "claude-sonnet-4-6",
  max_tokens: 8096,
  tools: [{
    type: "mcp",
    server_label: "haas",
    server_url: "https://your-ngrok-url",
  }],
  messages: [{ role: "user", content: userMessage }],
  betas: ["mcp-client-2025-04-04"],
});
```

| Method | Path | Description |
|---|---|---|
| `GET` | `/healthz` | Health check (no auth required) |
| `POST` | `/v1/environments` | Create a new environment |
| `GET` | `/v1/environments` | List all environments |
| `GET` | `/v1/environments/{id}` | Get environment details |
| `DELETE` | `/v1/environments/{id}` | Destroy an environment |
| `POST` | `/v1/environments/{id}/exec` | Execute a command (NDJSON stream) |
| `GET` | `/v1/environments/{id}/exec/ws` | Interactive terminal session (WebSocket) |
| `GET` | `/v1/environments/{id}/files?path=` | List files at path |
| `GET` | `/v1/environments/{id}/files/content?path=` | Download a file |
| `PUT` | `/v1/environments/{id}/files/content?path=` | Upload a file |
Create an environment:

```sh
curl -X POST http://localhost:8080/v1/environments \
  -H "Authorization: Bearer your-secret-key" \
  -H "Content-Type: application/json" \
  -d '{
    "image": "ubuntu:22.04",
    "cpu": 1.0,
    "memory_mb": 2048,
    "network_policy": "full"
  }'
```

Response:

```json
{
  "id": "env_a1b2c3d4",
  "status": "running",
  "image": "ubuntu:22.04"
}
```

Execute a command in it:

```sh
curl -X POST http://localhost:8080/v1/environments/env_a1b2c3d4/exec \
  -H "Authorization: Bearer your-secret-key" \
  -H "Content-Type: application/json" \
  -d '{"command": ["bash", "-c", "echo hello world"], "timeout_seconds": 30}'
```

Response (NDJSON stream):

```json
{"stream":"stdout","data":"hello world\n"}
{"stream":"exit","data":"0"}
```
Connect to GET /v1/environments/{id}/exec/ws to open a live, bidirectional terminal session. The connection is a standard WebSocket (ws:// or wss://).
Query parameters:
| Parameter | Default | Description |
|---|---|---|
| `cmd` | `bash` | Command to run (repeatable for arguments, e.g. `?cmd=python3&cmd=script.py`) |
| `working_dir` | (container default) | Working directory inside the container |
Client → Server messages:

```json
{"type": "input", "data": "ls -la\n"}
{"type": "resize", "cols": 120, "rows": 40}
```

Server → Client messages:

```json
{"stream": "output", "data": "total 0\n..."}
{"stream": "exit", "data": "0"}
{"stream": "error", "data": "failed to start session"}
```

TTY mode merges stdout and stderr into a single `output` stream.
Example using websocat:

```sh
websocat "ws://localhost:8080/v1/environments/env_a1b2c3d4/exec/ws?cmd=bash" \
  -H "Authorization: Bearer your-secret-key"
```

| Variable | Default | Description |
|---|---|---|
| `HAAS_API_KEYS` | (required) | Comma-separated list of valid API keys |
| `HAAS_LISTEN_ADDR` | `:8080` | REST API bind address |
| `DOCKER_HOST` | (auto) | Docker daemon socket |
| `HAAS_DEFAULT_CPU` | `1.0` | Default CPU cores per container |
| `HAAS_DEFAULT_MEMORY_MB` | `2048` | Default memory (MB) |
| `HAAS_DEFAULT_DISK_MB` | `4096` | Default disk (MB) |
| `HAAS_IDLE_TIMEOUT` | `10m` | Idle time before reaping |
| `HAAS_MAX_LIFETIME` | `60m` | Maximum container lifetime |
| `HAAS_DEFAULT_NETWORK_POLICY` | `none` | Default network policy |
| `HAAS_MAX_FILE_UPLOAD_MB` | `100` | Max file upload size (MB) |
| `HAAS_ALLOWED_IMAGES` | (all allowed) | Comma-separated allowlist of permitted Docker images (e.g. `ubuntu:22.04,python:3.12`) |
| `HAAS_DB_URL` | (in-memory) | Database URL for persistent storage: `sqlite:///path/to/haas.db` for local dev, `postgres://user:pass@host/db` for production |
| `HAAS_MCP_LISTEN_ADDR` | `:8091` | MCP server bind address |
| `HAAS_MCP_REST_URL` | (derived) | URL the MCP server uses to call the REST API — override for containerised deployments |
Every container is hardened:
- No privileged mode — containers run unprivileged
- All capabilities dropped — only `NET_BIND_SERVICE` added when networking is enabled
- `no-new-privileges` — prevents privilege escalation
- PID limit: 256 — prevents fork bombs
- Memory hard limit — no swap, enforced ceiling
- CPU limit — capped cores via NanoCPUs
- Network isolation — `none`, `egress-limited`, or `full`
The MCP server requires the same Bearer token as the REST API. Requests without a valid `Authorization: Bearer <key>` header are rejected with `401`.
| Policy | Behavior |
|---|---|
| `none` | Complete network isolation — no inbound or outbound |
| `egress-limited` | Bridge networking (MVP — production would use iptables rules) |
| `full` | Full bridge networking — unrestricted access |
```sh
make build            # Build REST server binary → bin/haas
make build-mcp        # Build standalone MCP binary → bin/haas-mcp
make run              # Run REST + MCP servers
make run-mcp          # Run standalone MCP binary (stdio)
make test             # Run unit tests
make test-integration # Run integration tests (requires Docker)
make lint             # Run golangci-lint
make clean            # Remove build artifacts
make deps             # Tidy go modules
```

```
haas/
├── cmd/
│   ├── haas/main.go         # REST API + embedded MCP server entry point
│   └── haas-mcp/main.go     # Standalone MCP server (stdio / SSE / HTTP)
├── internal/
│   ├── api/                 # HTTP handlers & middleware
│   ├── config/              # Environment-variable config
│   ├── domain/              # Core types (Environment, ExecRequest, etc.)
│   ├── engine/              # Container runtime abstraction (Docker)
│   ├── lifecycle/           # Reaper — automatic container cleanup
│   ├── mcpserver/           # MCP server (tools, resources, auth, transports)
│   └── store/               # State persistence (in-memory)
├── pkg/apitypes/            # Public request/response types for SDKs
└── test/                    # Integration tests & test utilities
```
- Go SDK — Client library in `pkg/sdk/` using the types already defined in `pkg/apitypes`
- Python SDK — For Python-based agent frameworks (LangChain, CrewAI, etc.) — `haas-py`
- TypeScript SDK — For JS/TS agent frameworks
- MCP Server — Model Context Protocol server so agents can use HaaS tools natively
- Persistent storage — Swap `MemoryStore` for a database-backed implementation
- Image allowlist — Restrict which Docker images can be used
- Auth & API keys — Bearer token authentication via `HAAS_API_KEYS`
- Egress firewall — Proper iptables rules for `egress-limited` network policy
- WebSocket exec — Interactive terminal sessions over WebSocket
- Container snapshots — Save and restore environment state
MIT License.
Made with ❤️ and Claude by Danilo.
