Like E2B but self-hosted. Like Docker but actually isolated. Like Daytona but one binary.
One tool. Every isolation level. Every platform.
On a Mac? Docker provider, no KVM needed.
On bare metal? Firecracker microVMs in ~28ms.
On Kubernetes? gVisor or Kata containers.
Need 100 sandboxes but only have 20 VMs? Pool mode.
Need to expose localhost:3000 from inside the sandbox? Live preview, one method call.
Self-hosted. Single binary. Python & TypeScript SDKs. MIT licensed. No cloud required.
Quick Start • Why StacyVM • Providers • Live Preview • Pool Mode • API Reference • Contributing
- Table of contents
- Quick start (30 seconds)
- Why StacyVM?
- Pick your isolation level
- Live Preview
- Pool mode — the feature nobody else has
- SDKs
- REST API
- CLI
- Configuration
- Templates
- Security defaults
- Architecture
- Web Dashboard
- Install options
- Project layout
- Roadmap
- Contributing
- License
git clone https://github.com/StacyOs/stacyvm && cd stacyvm
./scripts/setup.sh
# Start StacyVM and Traefik (Traefik powers Live Previews)
docker compose up -d
# Or run StacyVM locally without Docker:
# ./stacyvm serve
pip install stacyvm   # Python
npm install stacyvm   # TypeScript
from stacyvm import Client
client = Client("http://localhost:7423")
sandbox = client.spawn(image="python:3.12")
result = sandbox.exec('python3 -c "print(\'hello from my own computer\')"')
print(result.stdout) # hello from my own computer
sandbox.destroy() # gone. forever.
7 lines. Your AI agent now has a real, isolated machine it can use and throw away.
You're building an AI agent. It generates code. That code needs to run somewhere safe.
The problem:
- Docker shares the host kernel. One container escape and your machine is owned. Multiple runc CVEs in 2024-2025 proved this isn't theoretical.
- Cloud sandboxes (E2B, Modal) send your code and data to someone else's servers. Adds latency, costs money, and you lose control of your data.
- Daytona is self-hostable but needs 12 services (PostgreSQL, Redis, MinIO, Dex, registry...) just to get started.
- Zeroboot is blazing fast (~0.8ms) but strips everything — no networking, no filesystem, no multi-vCPU, serial-only I/O. Built for "run a function, get a result."
StacyVM is one binary. Self-hosted. Boots a sandbox in ~28ms. Your data never leaves your machine. And you choose the isolation level — Docker containers for dev, gVisor for cloud VMs, Firecracker microVMs for maximum hardware-level security.
| | StacyVM | E2B | Zeroboot | Daytona | Modal | Raw Docker |
|---|---|---|---|---|---|---|
| Self-hosted | ✅ | ❌ Cloud only | ✅ | ✅ (12 services) | ❌ Cloud only | ✅ |
| Isolation | KVM + gVisor + Docker | Container | KVM only | Container | Container | Shared kernel |
| Cold boot | ~28ms (snapshot) | ~500ms | ~0.8ms (CoW fork) | Seconds | Seconds | ~200ms |
| Networking | ✅ | ✅ | ❌ Serial only | ✅ | ✅ | ✅ |
| Filesystem / disk I/O | ✅ | ✅ | ❌ Memory only | ✅ | ✅ | ✅ |
| Multi-vCPU | ✅ | ✅ | ❌ Single vCPU | ✅ | ✅ | ✅ |
| Multiple providers | ✅ KVM/Docker/gVisor | ❌ | ❌ KVM only | ❌ | ❌ | N/A |
| Runs without KVM | ✅ Docker provider | N/A | ❌ | ✅ | N/A | ✅ |
| Multi-user pool mode | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Live preview URLs | ✅ Built-in (Traefik) | ✅ | ❌ | Partial | ✅ | ❌ |
| File API (read/write/glob) | ✅ 9 methods | ✅ | ❌ | ❌ | ❌ | ❌ |
| Python + TS SDKs | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ |
| Your data stays local | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ |
| License | MIT | Partial | Apache 2.0 | Apache 2.0 | Proprietary | N/A |
On speed: Zeroboot's 0.8ms is real — they bypass Firecracker's VMM entirely and mmap(MAP_PRIVATE) the snapshot memory as copy-on-write. But there's no disk, no network, and I/O is serial UART only. StacyVM's 28ms gives you a full sandbox with networking, filesystem, virtio, and multi-vCPU. Different tools for different jobs.
E2B charges per second. Default sandbox = 2 vCPU + 512 MiB RAM:
2 vCPU: $0.000028/s
512 MiB: $0.0000045/GiB/s × 0.5 GiB = $0.00000225/s
─────────────────────────────────────────
Total: $0.00003025/s = $0.109/hour per sandbox
| Concurrent sandboxes | E2B / month | StacyVM pool mode |
|---|---|---|
| 10 | $261 compute + $150 plan = $411 | $0 |
| 50 | $1,307 compute + $150 plan = $1,457 | $0 |
| 100 | $2,614 compute + $150 plan = $2,764 | $0 |
Assumes 8h/day active. StacyVM pool mode: 5 users per VM, your own infra.
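The table's numbers can be reproduced from the quoted rates — a quick sanity check, using the per-second breakdown above:

```python
# Reproduce the E2B cost math above (rates as quoted in the breakdown).
vcpu_rate = 0.000028          # $/s for 2 vCPU
mem_rate = 0.0000045 * 0.5    # $/GiB/s × 0.5 GiB
per_second = vcpu_rate + mem_rate
per_hour = per_second * 3600
monthly = per_hour * 8 * 30   # 8h/day active, 30 days, per sandbox

print(round(per_hour, 3))        # 0.109  ($/hour per sandbox)
print(round(monthly * 10))       # 261    ($ compute/month for 10 sandboxes)
```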
StacyVM has a provider interface. One config change swaps the entire backend. Your application code doesn't change.
# stacyvm.yaml — change one line
providers:
  default: "docker" # or "firecracker", "e2b", "custom", "proot", "mock"
  docker:
    runtime: "runc" # or "runsc" (gVisor) or "kata-runtime"

| Provider | What it does | KVM? | Boot | Use when |
|---|---|---|---|---|
| Firecracker | Real microVM. Own kernel, rootfs, network. ~28ms via snapshot restore. | Yes | ~28ms | Production. Maximum isolation. |
| Docker (runc) | OCI container with seccomp, cap_drop ALL, read-only rootfs option. | No | ~200ms | Dev, CI/CD, Mac, Windows. |
| Docker (gVisor) | Same as above, but syscalls hit a user-space kernel instead of host. | No | ~400ms | Cloud VMs. Stronger than containers. |
| Docker (Kata) | Lightweight VM per container. Hardware isolation without Firecracker setup. | Yes | ~1s | Kubernetes (AKS/GKE). |
| E2B | Forwards to E2B's hosted SaaS. Useful for hybrid deployments. | N/A | ~500ms | Bursting to cloud. |
| Custom | Pluggable HTTP backend. Bring your own runtime. | N/A | Varies | Special infra (HPC, Nomad, etc.). |
| PRoot | User-space chroot. No root, no KVM, no Docker. | No | Instant | Restricted hosts (Android, shared servers). |
| Mock | Temp directories on the host. Zero overhead. | No | Instant | Testing, development. |
Every provider implements the same interface. SDKs, REST API, CLI, pool mode, live preview — all work identically regardless of backend.
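To make the idea concrete, here is a hypothetical sketch of that contract in Python (illustrative names only — StacyVM's actual provider interface is Go, under internal/providers/). The mock backend mirrors the doc's "Mock" provider: temp directories on the host, zero overhead.

```python
from abc import ABC, abstractmethod
import subprocess, tempfile, uuid

# Hypothetical provider contract: every backend exposes the same small
# surface, so the orchestrator, SDKs, and pool mode never care which one
# is configured.
class Provider(ABC):
    @abstractmethod
    def spawn(self, image: str) -> str:
        """Create a sandbox and return its ID."""

    @abstractmethod
    def exec(self, sandbox_id: str, cmd: str) -> str:
        """Run a command in the sandbox; return stdout."""

    @abstractmethod
    def destroy(self, sandbox_id: str) -> None:
        """Tear the sandbox down."""

# "Mock" backend: a temp directory per sandbox, commands run on the host.
class MockProvider(Provider):
    def __init__(self):
        self.sandboxes: dict[str, str] = {}

    def spawn(self, image: str) -> str:
        sid = f"sb-{uuid.uuid4().hex[:8]}"
        self.sandboxes[sid] = tempfile.mkdtemp()
        return sid

    def exec(self, sandbox_id: str, cmd: str) -> str:
        out = subprocess.run(cmd, shell=True, capture_output=True,
                             text=True, cwd=self.sandboxes[sandbox_id])
        return out.stdout

    def destroy(self, sandbox_id: str) -> None:
        self.sandboxes.pop(sandbox_id)
```

Swapping `MockProvider` for a Docker- or Firecracker-backed implementation changes nothing upstream — which is exactly why one config line can swap the backend.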
Sandboxes can serve HTTP. StacyVM gives you a public URL for any port the sandbox exposes — no manual port forwarding, no SSH tunnels.
from stacyvm import Client
client = Client("http://localhost:7423")
sandbox = client.spawn(image="node:20")
sandbox.write_file("/app/server.js", "require('http').createServer((req,res)=>res.end('hi')).listen(3000)")
sandbox.exec("node /app/server.js &")
print(sandbox.get_preview_url(3000))
# http://3000-sb-a1b2c3d4.localhost
const sb = await client.spawn({ image: "node:20" });
await sb.writeFile("/app/index.js", code);
sb.exec("node /app/index.js &");
console.log(sb.getPreviewUrl(3000));
// http://3000-sb-a1b2c3d4.localhost
How it works. A bundled Traefik instance watches Docker labels. When you spawn a sandbox, StacyVM injects routing labels (Host(`3000-{id}.{domain}`)). Traefik picks them up instantly — no restarts, no config files. Open the URL in a browser and Traefik forwards the request to the sandbox's container.
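As a sketch, the injected labels would look roughly like this (the label keys follow Traefik's documented Docker-provider conventions; the exact set StacyVM injects may differ):

```python
# Illustrative: the Docker labels a sandbox would need for Traefik to
# route {port}-{sandbox_id}.{domain} to it. Label keys follow Traefik's
# documented conventions; not necessarily StacyVM's exact label set.
def preview_labels(sandbox_id: str, port: int, domain: str) -> dict[str, str]:
    router = f"{port}-{sandbox_id}"
    return {
        "traefik.enable": "true",
        f"traefik.http.routers.{router}.rule":
            f"Host(`{port}-{sandbox_id}.{domain}`)",
        f"traefik.http.services.{router}.loadbalancer.server.port": str(port),
    }

labels = preview_labels("sb-a1b2c3d4", 3000, "localhost")
print(labels["traefik.http.routers.3000-sb-a1b2c3d4.rule"])
# Host(`3000-sb-a1b2c3d4.localhost`)
```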
Local development:
# stacyvm.yaml
server:
  preview_domain: "localhost" # browsers resolve *.localhost to 127.0.0.1

docker compose up -d
# visit http://3000-sb-xyz.localhost

Production:
server:
  preview_domain: "stacyide.xyz"

Point a wildcard DNS record (*.stacyide.xyz → your server IP), give Traefik ports 80/443, and add an ACME resolver for Let's Encrypt. Users get HTTPS preview URLs automatically.
Full architecture write-up: docs/live-preview-architecture.md.
Live preview currently works with the Docker provider. Firecracker support is in progress (tracked on the roadmap).
Traditional sandbox tools: 1 user = 1 VM. 100 users = 100 VMs = massive bill.
StacyVM pool mode: 1 VM serves N users. Each gets an isolated /workspace/{id}/. Path traversal blocked. Optional per-user UID + PID namespace hardening.
pool:
  enabled: true
  max_vms: 20
  max_users_per_vm: 5
  image: "python:3.12-slim"
  memory_mb: 2048
  vcpus: 2
  overflow: "reject" # or "queue"

Identify users with the X-User-ID header on every request:
client = Client("http://localhost:7423", user_id="alice@example.com")
const client = new Client({ baseUrl: "http://localhost:7423", userId: "alice@example.com" });

Hardening knobs (Docker provider):
providers:
  docker:
    pool_security:
      per_user_uid: true # each user gets a unique UID
      pid_namespace: true # each user in a separate PID namespace
      workspace_permissions: true # restrict file access between users
      hidepid: true # hide other users' processes from /proc

100 users → 20 VMs instead of 100. 80% less infrastructure. Same isolation guarantees.
Pool mode works with every provider — Docker containers, Firecracker microVMs, gVisor, Kata. The orchestrator handles user-to-VM assignment, workspace scoping, and cleanup automatically.
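The workspace-scoping idea can be sketched in a few lines (illustrative Python, not StacyVM's actual code): resolve every user-supplied path under /workspace/{user_id} and reject anything that escapes it.

```python
import os

# Illustrative path-traversal guard for pool mode: every user path is
# resolved under /workspace/{user_id}; anything that escapes is rejected.
def scope_path(user_id: str, path: str, root: str = "/workspace") -> str:
    base = os.path.join(root, user_id)
    resolved = os.path.normpath(os.path.join(base, path.lstrip("/")))
    if resolved != base and not resolved.startswith(base + os.sep):
        raise PermissionError(f"path escapes workspace: {path}")
    return resolved

print(scope_path("alice", "data/out.txt"))   # /workspace/alice/data/out.txt
try:
    scope_path("alice", "../bob/secret")     # traversal attempt
except PermissionError as e:
    print("blocked:", e)
```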
Check pool status from the SDK:
print(client.pool_status())
# {"enabled": true, "vms": 3, "max_vms": 20, "total_users": 14, "max_users_per_vm": 5}

Both SDKs are thin wrappers over the REST API. Same method names, same return shapes (translated to native conventions per language).
Python:
from stacyvm import Client
client = Client("http://localhost:7423")
# Context manager — auto-destroys on exit
with client.spawn(image="python:3.12") as sb:
    sb.exec("pip install pandas")
    sb.write_file("/app/analyze.py", code)
    result = sb.exec("python3 /app/analyze.py")
    print(result.stdout)

# Stream output
for chunk in sb.exec_stream("npm test"):
    print(chunk.data, end="")

# Async support
from stacyvm import AsyncClient
async with AsyncClient("http://localhost:7423") as client:
    sb = await client.spawn()
    result = await sb.exec("whoami")
    await sb.destroy()

TypeScript:
import { Client } from "stacyvm";
const client = new Client("http://localhost:7423");
const sb = await client.spawn({ image: "node:20" });
// Files + exec
await sb.writeFile("/app/index.js", code);
const result = await sb.exec("node /app/index.js");
console.log(result.stdout);
// Stream output in real-time
for await (const chunk of sb.execStream("npm test")) {
  process.stdout.write(chunk.data);
}

// Auto-destroy with withSandbox()
await client.withSandbox({ image: "node:20" }, async (sb) => {
  await sb.exec("npm test");
});
await sb.destroy();
pip install stacyvm # Python
npm install stacyvm # TypeScript

Full SDK references:
- Python: sdk/python/README.md
- TypeScript: sdk/js/README.md
Base URL: http://localhost:7423/api/v1
Auth: pass X-API-Key: <your-key> if auth.enabled: true. For pool mode, also send X-User-ID: <user-id>.
| Method | Endpoint | Description |
|---|---|---|
| POST | /sandboxes | Spawn a sandbox |
| GET | /sandboxes | List active sandboxes |
| DELETE | /sandboxes | Prune expired sandboxes |
| GET | /sandboxes/{id} | Get sandbox details |
| DELETE | /sandboxes/{id} | Destroy sandbox |
| POST | /sandboxes/{id}/extend | Extend TTL |
| POST | /sandboxes/{id}/exec | Execute a command (sync or NDJSON stream) |
| GET | /sandboxes/{id}/exec/ws | Execute over WebSocket |
| GET | /sandboxes/{id}/logs | Console logs |
| Method | Endpoint | Description |
|---|---|---|
| POST | /sandboxes/{id}/files | Write a file |
| GET | /sandboxes/{id}/files?path= | Read a file |
| DELETE | /sandboxes/{id}/files?path= | Delete a file (recursive=true for dirs) |
| GET | /sandboxes/{id}/files/list?path= | List a directory |
| POST | /sandboxes/{id}/files/move | Move/rename |
| POST | /sandboxes/{id}/files/chmod | Change permissions |
| GET | /sandboxes/{id}/files/stat?path= | File metadata |
| GET | /sandboxes/{id}/files/glob?pattern= | Glob pattern matching |
| Method | Endpoint | Description |
|---|---|---|
| POST | /templates | Create a template |
| GET | /templates | List templates |
| GET | /templates/{name} | Get a template |
| PUT | /templates/{name} | Update a template |
| DELETE | /templates/{name} | Delete a template |
| POST | /templates/{name}/spawn | Spawn a sandbox from a template |
| Method | Endpoint | Description |
|---|---|---|
| GET | /providers | List configured providers |
| GET | /providers/{name} | Provider details + sandbox count |
| POST | /providers/test | Health-check all providers |
| GET | /pool/status | Pool VM and user counts |
| GET | /snapshots | Available VM snapshots |
| GET | /health | Health check |
| GET | /metrics | Runtime metrics (goroutines, alloc, sandbox counts) |
| GET | /events | Server-sent events stream |
Full schemas, request/response examples, and error codes: docs/api.md. OpenAPI spec: docs/swagger.yaml.
stacyvm serve # start the API server
stacyvm spawn --image python:3.12 --ttl 1h # spawn
stacyvm exec sb-a1b2c3d4 -- python3 app.py # run a command in a sandbox
stacyvm list # list active sandboxes
stacyvm kill sb-a1b2c3d4 # destroy
stacyvm build-image python:3.12 # pre-build rootfs (Firecracker)
stacyvm tui # interactive dashboard
stacyvm version # version info

Global flags:
- --server — server URL (default http://localhost:7423)
- --api-key — API key (or STACYVM_API_KEY env var)
# stacyvm.yaml — sane defaults work without it
server:
  host: "0.0.0.0"
  port: 7423
  preview_domain: "localhost" # used to build live-preview URLs

providers:
  default: "docker"
  docker:
    enabled: true
    socket: "unix:///var/run/docker.sock"
    runtime: "runc" # or "runsc" (gVisor), "kata-runtime"
    network_mode: "bridge"
    read_only_rootfs: false
    seccomp_profile: "default"
    dropped_caps: ["ALL"]
    added_caps: []
    pids_limit: 256
    pool_security:
      per_user_uid: false
      pid_namespace: false
      workspace_permissions: true
      hidepid: false
  firecracker:
    enabled: true
    firecracker_path: "/usr/local/bin/firecracker"
    kernel_path: "/var/lib/stacyvm/vmlinux.bin"
    agent_path: "./bin/stacyvm-agent"
    data_dir: "/var/lib/stacyvm"
  e2b:
    enabled: false
    api_key: ""
    base_url: "https://api.e2b.dev"
  custom:
    enabled: false
    base_url: ""
    api_key: ""
  proot:
    enabled: false
    rootfs_path: "/var/lib/stacyvm/rootfs"

defaults:
  ttl: "30m"
  image: "alpine:latest"
  memory_mb: 1024
  vcpus: 1

auth:
  enabled: false
  api_key: ""

database:
  path: "stacyvm.db"

logging:
  level: "info" # debug | info | warn | error
  format: "json" # or "pretty"

pool:
  enabled: false
  max_vms: 10
  max_users_per_vm: 5
  image: "alpine:latest"
  memory_mb: 2048
  vcpus: 2
  overflow: "reject" # or "queue"

Config priority: ./stacyvm.yaml → ~/.stacyvm/config.yaml → environment variables.
Env vars: prefix STACYVM_, dots become underscores. Examples:
STACYVM_SERVER_PORT=8080
STACYVM_PROVIDERS_DEFAULT=firecracker
STACYVM_AUTH_API_KEY=sk-xyz123
STACYVM_LOGGING_LEVEL=debug

Templates are pre-baked sandbox specs stored server-side. Define once, spawn many times.
curl -X POST http://localhost:7423/api/v1/templates \
-H 'Content-Type: application/json' \
-d '{
"name": "python-dev",
"image": "python:3.12-slim",
"memory_mb": 1024,
"vcpus": 2,
"ttl": "1h"
}'

sandbox = client.spawn(template="python-dev") # spawn from template
client.templates.list() # list all
client.templates.delete("python-dev") # delete

const sb = await client.templates.spawn("python-dev");
const all = await client.templates.list();
await client.templates.delete("python-dev");

Every sandbox ships locked down. You opt in to less restriction, not out.
| Layer | Default | What it does |
|---|---|---|
| Capabilities | cap_drop: ALL | Can't mount, ptrace, load modules, or change networking |
| Syscalls | Seccomp default profile | Blocks ~44 dangerous syscalls |
| Filesystem | Read-only rootfs (Firecracker), opt-in (Docker) | Only /tmp and /workspace writable on Firecracker |
| Network | Bridge by default; none available | Switch to network_mode: none to block outbound |
| Processes | PID limit: 256 | Fork bombs die immediately |
| User | Non-root | No root inside the sandbox |
| Lifetime | TTL auto-expiry | Forgotten sandboxes clean themselves up |
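For the Docker provider, these defaults map roughly onto docker-py's containers.run() options. A sketch of the same hardening (illustrative, not StacyVM's actual code; StacyVM itself is Go and talks to the Docker socket directly):

```python
# Illustrative: the security-defaults table expressed as docker-py
# containers.run() keyword arguments.
def hardened_run_kwargs(image: str) -> dict:
    return {
        "image": image,
        "detach": True,
        "cap_drop": ["ALL"],        # can't mount, ptrace, or load modules
        "pids_limit": 256,          # fork bombs die immediately
        "network_mode": "bridge",   # set to "none" to block outbound
        "read_only": True,          # opt-in read-only rootfs on Docker
        "tmpfs": {"/tmp": ""},      # keep /tmp writable
        "user": "1000:1000",        # non-root inside the sandbox
    }

# Usage (requires a Docker daemon):
#   import docker
#   docker.from_env().containers.run(**hardened_run_kwargs("python:3.12-slim"))
```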
With the Firecracker provider you also get: dedicated kernel per sandbox, vsock-only host-guest communication (no TCP between host and guest), and ephemeral rootfs destroyed on teardown.
Full security model and reporting policy: SECURITY.md.
Request flow: SDK → REST API → Orchestrator (lifecycle, TTL, pool, templates) → Provider → Sandbox.
Live preview flow: Browser → Traefik → Docker label lookup → Sandbox container.
Snapshot trick: First Firecracker spawn cold-boots (~1s) and snapshots the VM state. Every spawn after that restores from snapshot in ~28ms — faster than most HTTP requests. Details in docs/snapshot-restore.md.
Built-in React dashboard for sandbox management, live terminal, file browser, and log viewer. Lives at web/.
make web # build the frontend (web/dist)
./stacyvm serve # serves the dashboard at http://localhost:7423

The dashboard talks to the same REST API documented above — useful as a working reference.
One-command setup (recommended):
git clone https://github.com/StacyOs/stacyvm && cd stacyvm
./scripts/setup.sh # checks Go, Docker, KVM, downloads Firecracker + kernel, builds everything
./stacyvm serve

Build from source:
make build-all
sudo mkdir -p /var/lib/stacyvm && sudo chown $(whoami) /var/lib/stacyvm
./scripts/setup-kernel.sh

Docker (with Traefik for live preview):
docker compose up -d
# StacyVM: http://localhost:7423
# Traefik admin: http://localhost:8080

Docker (StacyVM only):
docker build -t stacyvm .
docker run -p 7423:7423 stacyvm

Binary download (when releases are available):
curl -fsSL https://github.com/StacyOs/stacyvm/releases/latest/download/stacyvm-linux-amd64 -o stacyvm
chmod +x stacyvm && sudo mv stacyvm /usr/local/bin/

stacyvm/
├── cmd/ # CLI entrypoints (stacyvm, stacyvm-agent)
├── internal/ # Server, orchestrator, providers, API handlers
│ ├── api/ # HTTP handlers (chi router)
│ ├── orchestrator/ # Lifecycle, TTL, templates, pool, event bus
│ ├── providers/ # docker, firecracker, e2b, custom, proot, mock
│ └── config/ # Viper-based config loader
├── sdk/
│ ├── js/ # TypeScript SDK — see sdk/js/README.md
│ └── python/ # Python SDK — see sdk/python/README.md
├── web/ # React dashboard
├── tui/ # Terminal UI (bubbletea)
├── docs/ # Architecture docs, OpenAPI spec, API reference
├── scripts/ # setup.sh, build-rootfs.sh, install.sh, benchmarks
├── examples/ # Working code samples (js, python)
├── tests/ # Integration and provider tests
├── docker-compose.yml # StacyVM + Traefik
└── Makefile # build, test, web, release-build
- Firecracker provider (KVM microVMs, ~28ms snapshot restore)
- Docker provider (OCI containers, seccomp, no KVM needed)
- gVisor support (user-space kernel via runsc runtime)
- Pool mode (N users per VM, workspace isolation)
- Live Preview via Traefik (Docker provider)
- Python SDK + TypeScript SDK
- Web dashboard + TUI
- Template system + warm pools
- PRoot provider (root-less, KVM-less)
- E2B + custom HTTP provider
- Live Preview for Firecracker
- Kata Containers provider (K8s-native)
- Persistent volumes across sandboxes
- MCP server mode
- GPU passthrough
PRs welcome — especially for new providers, SDK improvements, and documentation. Read CONTRIBUTING.md before opening a PR. It covers the dev loop, where to put what, the test matrix, and the review process.
If you find a security issue, do not open a public issue — follow SECURITY.md.
MIT — use it however you want.
Built by StacyOs
If StacyVM helps you, drop a ⭐ — it helps others find it.
