viberun is a shell-first, agent-native app host. Run `viberun` to open the shell, then `vibe <app>` to jump into a persistent Ubuntu container on a remote host. App data is stored under `/home/viberun` and survives container restarts or image updates.
You need a local machine with ssh and a reachable Ubuntu host (VM or server) that you can SSH into with sudo access.
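A quick way to check the prerequisites from your local machine (a sketch; `myhost` is a placeholder for your SSH alias or `user@host`):

```
# Confirm SSH access, sudo, and that the host runs Ubuntu.
ssh -t myhost 'sudo -v && grep PRETTY_NAME /etc/os-release'
```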
```
curl -fsSL https://viberun.sh | bash
```
Verify:
```
viberun --version
```
Optional overrides (advanced):
```
curl -fsSL https://viberun.sh | bash -s -- --dev
curl -fsSL https://viberun.sh | bash -s -- --dir ~/.local/bin --bin viberun
```
Dev channel (latest main):
```
npx -y viberun@dev
uvx viberun-dev
curl -fsSL https://viberun.sh | bash -s -- --dev
```
Or with env vars:
```
curl -fsSL https://viberun.sh | VIBERUN_INSTALL_DIR=~/.local/bin VIBERUN_INSTALL_BIN=viberun bash
```
Start the shell:
```
viberun
```
If this is the first time, you'll see a setup banner. Type:
```
setup
```
Enter your SSH login (for example, `user@myhost` in `~/.ssh/config`).

You can also run setup directly from the CLI:
```
viberun setup myhost
```
Inside the shell:
```
vibe hello-world
```
If this is the first run, the server will create the container.

Detach without stopping the agent: `Ctrl-\`. Reattach later with `vibe hello-world`.
Paste clipboard images into the session with Ctrl-V; viberun uploads the image and inserts a /tmp/viberun-clip-*.png path.
For example, prompt the agent with something like: "Create a beautiful hello-world web app with a simple, tasteful landing page."
From the shell:
```
open hello-world
```
When the shell starts, viberun forwards app ports automatically. `open` prefers a public URL if configured; otherwise it uses the localhost URL.
Shell commands (inside viberun):
```
vibe myapp
open myapp
apps
app myapp
rm myapp
```
CLI commands (advanced):
```
viberun setup [<host>]
viberun wipe [<host>]
```
Git, SSH, and the GitHub CLI are installed in containers. viberun seeds `git config --global user.name` and `user.email` from your local Git config into a host-managed config file that is mounted into each app container and applied on startup. This removes the common "first commit" setup step without auto-authing you.
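If your local Git identity isn't set yet, configure it before running setup so there is something to seed (standard Git commands; the name and email are placeholders):

```
git config --global user.name "Ada Lovelace"
git config --global user.email "ada@example.com"

# Check what viberun will pick up:
git config --global --get user.name
git config --global --get user.email
```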
Choose one of these auth paths:
- SSH (agent forwarding): Start viberun with `VIBERUN_FORWARD_AGENT=1 viberun`, then `vibe <app>`. For existing apps, run `app <app>` followed by `update` once to recreate the container with the agent socket mounted. Then `ssh -T git@github.com` inside the container to verify access.
- HTTPS (GitHub CLI): Run `gh auth login` and choose HTTPS, then `gh auth setup-git`. Verify with `gh auth status`.
If you update your local Git identity later, restart the app container (or run `app <app>` then `update`) to re-apply the new values on startup.
This repo is Go-first and uses mise for tool and task orchestration.
Install tools:
```
mise install
```
Build:
```
mise exec -- go build ./cmd/viberun
mise exec -- go build ./cmd/viberun-server
```
Run:
```
mise exec -- go run ./cmd/viberun -- --help
mise exec -- go run ./cmd/viberun-server -- --help
```
Test and vet:
```
mise exec -- go test ./...
mise exec -- go vet ./...
```
Build the container image:
```
mise run build:image
# fallback: docker build -t viberun .
# proxy image (Caddy + auth):
docker build -f Dockerfile.proxy -t viberun-proxy .
```
Integration and E2E scripts:
```
bin/viberun-e2e-local
bin/viberun-integration
```
When you run viberun via `go run` (or set `VIBERUN_DEV=1`), setup defaults to staging the local server binary and building the container image locally. Run:
```
viberun
setup
```
Or directly:
```
viberun setup myhost
```
Under the hood, it builds a `viberun:dev` image for the host architecture, streams it over SSH with `docker save | docker load`, and tags it as `viberun:latest` on the host.
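Roughly, the equivalent manual steps look like this (a sketch of the behavior described above, not the exact commands setup runs; `myhost` is a placeholder for your SSH alias):

```
# Build locally; if the host architecture differs from your machine,
# a plain build like this may need --platform (or similar) to match it.
docker build -t viberun:dev .
docker save viberun:dev | ssh myhost docker load
ssh myhost docker tag viberun:dev viberun:latest
```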
For the full build/test/E2E flow, see DEVELOPMENT.md.
```
viberun (shell)
  -> vibe <app>
    -> ssh <host>
      -> viberun-server gateway (mux)
        -> viberun-server <app>
          -> docker container viberun-<app>
            -> agent session (tmux)

container port 8080
  -> host port (assigned per app)
    -> mux forward -> http://localhost:<port>
  -> (optional) host proxy (Caddy)
    -> https://<app>.<domain>
```
- Client: `viberun` CLI on your machine.
- Server: `viberun-server gateway` executed on the host via SSH (no long-running daemon required).
- Container: `viberun-<app>` Docker container built from the `viberun:latest` image.
- Agent: runs inside the container in a tmux session (default provider: `codex`).
- Host RPC: local Unix socket used by the container to request snapshot/restore operations.
- Proxy (optional): `viberun-proxy` (Caddy + `viberun-auth`) for app URLs and login.
- `viberun` connects to the host (from your saved config) and starts the `viberun-server gateway` over SSH.
- `vibe <app>` creates the container if needed, or starts it if it already exists.
- The agent process is attached via `docker exec` inside a tmux session so it persists across disconnects.
- `viberun` forwards app ports when the shell starts so you can open `http://localhost:<port>` immediately.
The setup script (run on the host) does the following:
- Verifies the host is Ubuntu.
- Installs Docker (if missing) and enables it.
- Installs Btrfs tools (`btrfs-progs`) for volume snapshots.
- Pulls the `viberun` container image from GHCR (unless using local image mode).
- Downloads and installs the `viberun-server` binary.
If setup is run from a TTY, it will offer to set up a public domain name (same as proxy setup in the shell).
Useful setup overrides:
- `VIBERUN_SERVER_REPO`: GitHub repo for releases (default `shayne/viberun`).
- `VIBERUN_SERVER_VERSION`: release tag or `latest`.
- `VIBERUN_IMAGE`: container image override.
- `VIBERUN_PROXY_IMAGE`: proxy container image override (for app URLs).
- `VIBERUN_SERVER_INSTALL_DIR`: install directory on the host.
- `VIBERUN_SERVER_BIN`: server binary name on the host.
- `VIBERUN_SERVER_LOCAL_PATH`: use a local server binary staged over SSH.
- `VIBERUN_SKIP_IMAGE_PULL`: skip pulling from GHCR (used for local builds).
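For example, to pin a server release and skip the GHCR pull (a sketch; it assumes these overrides are passed as environment variables when invoking setup, like the install overrides above, and the release tag and `1` value are placeholders):

```
VIBERUN_SERVER_VERSION=v1.2.3 VIBERUN_SKIP_IMAGE_PULL=1 viberun setup myhost
```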
- Each app container exposes port `8080` internally.
- The host port is assigned per app (starting at `8080`) and stored in the host server state.
- `viberun` forwards app ports when the shell starts so `http://localhost:<port>` connects to the host port.
- If the proxy is configured, apps can also be served over HTTPS at `https://<app>.<domain>` (or a custom domain). Access requires login by default and can be made public per app.
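Once an app is running and the shell has forwarded its port, you can check it from your machine (the port is whatever was assigned to that app; `8080` here is just the first default):

```
curl -I http://localhost:8080
```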
viberun can optionally expose apps via a host-side proxy (Caddy + viberun-auth).
Set it up once per host (inside the shell):
```
proxy setup [<host>]
```
You'll be prompted for a base domain and public IP (for DNS), plus a primary username/password. Create an A record (or wildcard) pointing to the host's public IP.
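Before running proxy setup, you can confirm the DNS record resolves to the host (the domain and IP are placeholders):

```
dig +short myapp.example.com   # should print the host's public IP, e.g. 203.0.113.10
```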
After setup, in the shell:
- `app <app>` then `url` shows the current URL and access status.
- `url public` or `url private` toggles access (default requires login).
- `url set-domain <domain>` or `url reset-domain` manages custom domains.
- `url disable` or `url enable` turns the URL off/on.
- `url open` opens the URL in your browser.
- `users` manages login accounts; `app <app>` then `users` controls who can access the app.
If URL settings change, run `app <app>` then `update` to refresh `VIBERUN_PUBLIC_URL` and `VIBERUN_PUBLIC_DOMAIN` inside the container.
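Inside the container, you can confirm the values your app or agent sees (variable names from above):

```
echo "$VIBERUN_PUBLIC_URL"
echo "$VIBERUN_PUBLIC_DOMAIN"
```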
Snapshots are Btrfs subvolume snapshots of the app's /home/viberun volume (auto-incremented versions).
On the host, each app uses a loop-backed Btrfs file under /var/lib/viberun/apps/<app>/home.btrfs.
- `app <app>` then `snapshot` creates the next `vN` snapshot.
- `app <app>` then `snapshots` lists versions with timestamps.
- `app <app>` then `restore <vN|latest>` restores from a snapshot (`latest` picks the highest `vN`).
- `rm <app>` removes the container, the app volume + snapshots, and the host RPC directory.
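A typical round trip inside the viberun shell (commands from the list above; `myapp` is a placeholder):

```
app myapp
snapshot          # creates the next vN snapshot
snapshots         # list versions with timestamps
restore latest    # roll back to the highest vN
```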
Restore details:
- The host stops the container (if running) to safely unmount the volume.
- The current `@home` subvolume is replaced by a new writable snapshot of `@snapshots/vN`.
- The container is started again, and s6 reloads services from `/home/viberun/.local/services`.
When you open a session, the server creates a Unix socket on the host and mounts it into the container at /var/run/viberun-hostrpc. The container uses it to request snapshot and restore operations. Access is protected by a per-session token file mounted alongside the socket.
Local config lives at ~/.config/viberun/config.toml (or $XDG_CONFIG_HOME/viberun/config.toml) and stores:
- `default_host`
- `agent_provider`
- `hosts` (alias mapping)
Host server state lives at ~/.config/viberun/server-state.json (or $XDG_CONFIG_HOME/viberun/server-state.json) and stores the port mapping for each app.
Proxy config (when enabled) lives at /var/lib/viberun/proxy.toml (or $VIBERUN_PROXY_CONFIG_PATH) and stores the base domain, access rules, and users.
When enabled, the server injects VIBERUN_PUBLIC_URL and VIBERUN_PUBLIC_DOMAIN into containers.
Supported agent providers:
- `codex` (default)
- `claude`
- `gemini`
- `ampcode` (alias: `amp`)
- `opencode`
Custom agents can be run via `npx:<pkg>` or `uvx:<pkg>` (for example, run `config set agent npx:@sourcegraph/amp@latest` in the shell).
Set the default agent with `config set agent <provider>` in the shell.
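For example, inside the shell:

```
config set agent claude                        # built-in provider
config set agent npx:@sourcegraph/amp@latest   # custom agent via npx
```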
To forward your local SSH agent into the container, start viberun with `VIBERUN_FORWARD_AGENT=1 viberun`. For existing apps, run `app <app>` then `update` once to recreate the container with the agent socket mounted.
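Put together (a minimal sequence; `myapp` is a placeholder for an app created before forwarding was enabled):

```
VIBERUN_FORWARD_AGENT=1 viberun
# then, inside the shell:
#   app myapp
#   update
```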
Base skills are shipped in /opt/viberun/skills and symlinked into each agent's skills directory. User skills can be added directly to the agent-specific skills directory under /home/viberun.
- All control traffic goes over the mux over SSH; the server is invoked on demand and does not expose a network port.
- The host RPC socket is local-only and protected by filesystem permissions and a per-session token.
- Containers are isolated by Docker and only the app port is exposed.
- App URLs are optional: the proxy requires login by default and can be made public per app with `app <app>` then `url public`.
`viberun wipe [<host>]` deletes local config and wipes all viberun data on the host.
It requires a TTY and asks you to type `WIPE`.
On the host, wipe removes:
- All containers named `viberun-*`, any containers using `viberun` images, and the proxy container (default `viberun-proxy`).
- All `viberun` images (including the proxy image).
- App data and snapshots under `/var/lib/viberun` (including per-app Btrfs volumes).
- Host RPC sockets in `/tmp/viberun-hostrpc` and `/var/run/viberun-hostrpc`.
- `/etc/viberun`, `/etc/sudoers.d/viberun-server`, and `/usr/local/bin/viberun-server`.
Locally, it removes ~/.config/viberun/config.toml (and legacy config if present).
- `cmd/`: Go entrypoints (`viberun`, `viberun-server`, `viberun-auth`).
- `internal/`: Core packages (config, server state, SSH args, target parsing, TUI helpers).
- `bin/`: Helper scripts for installs, integration/E2E flows, and container utilities.
- `skills/`: Codex skill definitions used inside containers.
- `config/`: Shell/TMUX/Starship config, auth assets, and runtime configs.
- `Dockerfile`: Base container image definition.
- `Dockerfile.proxy`: Proxy image definition (Caddy + auth).
- `unsupported OS: ... expected ubuntu`: setup currently supports Ubuntu only.
- `docker is required but was not found in PATH`: install Docker on the host or rerun setup.
- `missing btrfs on host`: rerun setup to install `btrfs-progs` and ensure sudo access.
- `no host provided and no default host configured`: run `setup` in the shell or `viberun setup myhost`.
- `container image architecture mismatch`: delete and recreate the app (`rm <app>`).
- `proxy is not configured`: run `proxy setup` (then retry `app <app>` and `url`).
