A small, opinionated container deploy CLI + TUI for people who run a handful of services on a handful of bare-metal hosts. Sits between Kamal and Kubernetes — opinionated about the same things Kamal is, borrowing the few Kubernetes ideas that actually pay off at this scale.
yoink up # reconcile every service in dep order
yoink tui # k9s-style dashboard, drift, logs, shell-into
yoink prune # clean up stale containers
yoink history api # who deployed what, when
yoink rollback api # roll back to the previous version
Kamal is a lovely fit for "one app, one binary per host" but starts to creak the moment you want a second app on the same box, replicas of the same service, or any kind of network isolation between containers. Kubernetes solves all that — and a hundred other problems you don't have, in exchange for a control plane to operate, a YAML schema with a learning curve, and a vocabulary you have to teach every new operator.
yoink is the thinnest tool that gives you the Kubernetes ideas that actually matter at small scale, while keeping Kamal's "one binary, ssh into the host, drive Docker directly" simplicity:
- Multiple services per host. Declare them, deploy them, prune the ones that fall out of config.
- Replicas. Want two `api` containers behind a reverse proxy? Set `replicas: 2`. Healthcheck-gated rolling swap.
- Per-service network tiers. Each service joins a named network; only services on the same network can dial each other. Basic blast-radius isolation without a CNI plugin.
- Dependency-ordered deploys. Services declare `depends_on:` and `yoink up` runs them in topological-sort waves (independent services in parallel). No more "I redeployed `api` before `redis` was on the new network".
- Drift detection. Every effective spec (image, env, networks, mounts, options, file content) hashes deterministically and lands as a label. The TUI shows drift across the cluster without guessing.
- k9s-style TUI. A `ratatui` dashboard with one-key reconcile, prune, kill, shell-into, debug-sidecar, log filter. Keyboard-only. Survives offline and slow links because it's just talking docker over ssh.
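Condensed, those features map onto config roughly like this (illustrative names; the full reference example is further down in this README):

```yaml
# abridged service fragment; the full example below shows every field
services:
  - name: redis
    image: redis
    tag: "7"
    networks: [redis]              # redis tier only

  - name: api
    image: ghcr.io/you/api
    depends_on: [redis]            # deployed in a later wave than redis
    networks: [api, redis]         # can dial redis; nothing else can dial api's tier
    run:
      port: 8080
      replicas: 2                  # two containers, healthcheck-gated rolling swap
      healthcheck_path: /health
```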
What it deliberately doesn't do:
- No control plane, no agent on the hosts. yoink is a single Rust binary on your laptop or in CI.
- No service discovery, no scheduling — Docker's network alias DNS is plenty for a few services on a host.
- No multi-cluster, no HA, no auto-scaling. One operator, one config, one deploy at a time.
- No reverse proxy, secret store, load balancer, or stateful-accessory orchestration. Use the right tool for each.
yoink doesn't try to solve "how do I reach my hosts" or "how do I route HTTPS to the right container". It expects you to bring two off-the-shelf tools that solve those completely:
yoink's transport is ssh://user@host via bollard's SSH transport, which opens an SSH tunnel and speaks the Docker Engine API over the remote daemon's Unix socket. Pairing this with Tailscale SSH:
- Hostnames work everywhere. MagicDNS gives every host a stable name (`my-server`) reachable from your laptop, CI runner, anywhere on the tailnet. No `ssh_config` to maintain, no jump hosts, no bastion.
- Auth without keys. Tailscale SSH issues short-lived certs based on tailnet membership and ACLs. Onboard a new operator: invite to the tailnet, grant ACL access to the `tag:server` group. Done. Off-board: revoke from tailnet. Their key is gone everywhere, immediately.
- CI authentication is the same flow. A GH Actions runner with `tailscale/github-action` joins the tailnet under `tag:ci`; an ACL grants `tag:ci → tag:server` SSH; yoink connects without ever touching `~/.ssh/`.
- No port forwarding. The Docker daemon never listens on a TCP port. The SSH transport handles auth + transport in one hop. Surface area: `:22`, accessible only from the tailnet.
The deploy user on each host is in the docker group (functionally root, scope your tailnet ACLs accordingly).
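A sketch of what the matching tailnet policy could look like (HuJSON, abridged; tag names and the `deploy` user mirror the examples above, but your tailnet's policy will differ):

```jsonc
{
  "tagOwners": {
    "tag:server": ["autogroup:admin"],
    "tag:ci":     ["autogroup:admin"]
  },
  "acls": [
    // network-level: let operators and CI reach port 22 on tagged servers
    { "action": "accept", "src": ["autogroup:member", "tag:ci"], "dst": ["tag:server:22"] }
  ],
  "ssh": [
    // SSH-level: who may log in, and as which user
    { "action": "accept", "src": ["autogroup:member", "tag:ci"], "dst": ["tag:server"], "users": ["deploy"] }
  ]
}
```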
yoink owns the container lifecycle. It does not own routing. Public traffic landing on your hosts wants:
- TLS termination (with auto-renewing certs)
- Hostname → container routing
- Per-route headers, redirects, rate limits
caddy-docker-proxy is a Caddy plugin that watches the Docker socket for container labels and reconfigures itself live. The pairing:
services:
  - name: api
    image: ghcr.io/you/api
    labels:
      caddy: api.example.com
      caddy.reverse_proxy: "{{upstreams 8080}}"
    run:
      port: 8080
      replicas: 2
    networks: [public, api]

`yoink up` deploys the api container with those labels; the caddy container (also yoink-managed, but separately) reads them via the docker socket and routes https://api.example.com to the api containers, automatically picking up new replicas and dropping retired ones. Cert issuance is Caddy's job (Let's Encrypt or Cloudflare origin); yoink doesn't know it's happening.
Replicas plug into this naturally: `caddy.reverse_proxy: "{{upstreams 8080}}"` resolves to all containers with the same network alias and load-balances between them. yoink's healthcheck-gated rolling swap means the caddy upstream pool is always traffic-ready.
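Conceptually, those labels expand into a Caddyfile fragment along these lines (the upstream addresses are whatever the replica containers get on the shared network, shown purely for illustration):

```
api.example.com {
    # {{upstreams 8080}} resolves to every container carrying the matching
    # caddy labels (here, the two api replicas)
    reverse_proxy 172.20.0.11:8080 172.20.0.12:8080
}
```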
The split:
| concern | tool |
|---|---|
| container lifecycle (pull, start, healthcheck, drain, replace) | yoink |
| dep-ordered deploys (redis before api before caddy) | yoink |
| per-tier network isolation | yoink |
| HTTPS / hostname routing / cert renewal | caddy-docker-proxy |
| operator → host connectivity | Tailscale |
| CI → host connectivity | Tailscale |
| stateful services (postgres, etc.) | docker compose on the host |
| secrets | Infisical (REST API, no CLI install needed) |
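For the stateful row, a hedged sketch of what "docker compose on the host" could look like, assuming yoink creates the tier networks under their declared names so the compose file can attach to one as an external network (paths and tier name are illustrative):

```yaml
# /opt/postgres/docker-compose.yml, managed on the host outside yoink
services:
  postgres:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks: [api]

volumes:
  pgdata:

networks:
  api:
    external: true   # join the yoink-declared `api` tier so the api containers can dial postgres
```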
Common scenario: you want a staging environment that mirrors prod for pre-merge verification, ideally on the same hardware to avoid paying for a second machine. yoink doesn't have a built-in environment concept — instead you use two config files that don't collide.
The two things that have to differ between envs:
- Service names — Docker container names are unique per host. `api` and `api-staging` can both run; `api` and `api` cannot.
- Network names — `api` (prod tier) and `api-staging` (staging tier) keep traffic separated even though both stacks live on the same docker daemon.
Everything else (volumes, hostnames, secrets, ports) follows from those two namespaces.
yoink.prod.yaml # prod entry; includes services/prod/*.yaml
yoink.staging.yaml # staging entry; includes services/staging/*.yaml
services/prod/api.yaml
services/prod/web.yaml
services/staging/api.yaml # name: api-staging, networks: [api-staging, redis-staging, ...]
services/staging/web.yaml # name: web-staging
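What the staging entry file could look like (illustrative; field names follow the config reference later in this README, and `public` is kept shared with prod so the single caddy instance can reach staging containers):

```yaml
# yoink.staging.yaml
deploy:
  networks: [public, api-staging, redis-staging]   # staging tiers, plus the shared proxy network
  hosts:
    - { address: my-server, user: deploy }         # same box as prod

secrets:
  provider: infisical
  project_id: <your-infisical-project>
  environment: staging

include:
  - services/staging/*.yaml
```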
Each config is operated independently:
yoink up --config yoink.prod.yaml # prod deploy
yoink up --config yoink.staging.yaml # staging deploy
yoink dump --config yoink.staging.yaml # staging snapshot
yoink tui --config yoink.staging.yaml      # TUI scoped to staging containers

The TUI's drift detection, prune, and reconcile are all scoped to whatever `--config` references — `yoink prune --config yoink.staging.yaml` won't touch prod containers because their `yoink.service` labels don't match anything declared in the staging config.
The reverse proxy is the one shared piece of infra: caddy serves both envs from the same :443 listener. Routing is by hostname:
- `api.example.com` → containers labeled with that hostname (the prod api)
- `staging-api.example.com` → containers labeled with the staging hostname (the staging api)
Use a distinct network alias per environment to keep it clean:
# services/staging/api.yaml
- name: api-staging
  image: ghcr.io/you/api
  labels:
    caddy: staging-api.example.com
    caddy.reverse_proxy: "{{upstreams 8080}}"
  run:
    network_aliases: [api-staging]   # different alias from prod's `api`

caddy-docker-proxy picks both up and routes by the Host: header. No collision.
Two options, pick per service:
- Separate instances per env: `redis` and `redis-staging` containers, both yoink-managed, on their own networks. Cheap; full isolation.
- Shared instance, namespaced data: one `redis`, but staging uses key prefix `staging:` (app-side responsibility). Half the memory; no data isolation.
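For the "separate instances per env" option, the staging redis is just another service fragment (illustrative; field names follow the config reference below):

```yaml
# services/staging/redis.yaml
services:
  - name: redis-staging
    image: redis
    tag: "7"
    networks: [redis-staging]   # staging tier only; the prod api can't dial it
    run:
      options:
        memory: 256m
```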
For SQL — typically a managed Postgres with separate databases (`myapp` and `myapp_staging`) under the same cluster. yoink doesn't deploy the database; the connection strings live in env vars / secrets per env.
Staging usually wants to deploy whatever's on main, while prod tracks live. Either:
- Keep `tag:` empty in the service yaml; supply it via `--tag api-staging=$MAIN_SHA` from CI when deploying staging
- Or commit a digest pointer in `yoink.staging.yaml` (build-once-promote-many) — same image, different deploy targets
The image identity is the same across envs; only the runtime configuration differs.
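A hedged sketch of the CI leg, assuming a prebuilt yoink binary already on the runner, a Tailscale OAuth client scoped to `tag:ci`, and an Infisical machine identity exposed as repo secrets (workflow, versions, and secret names are illustrative):

```yaml
# .github/workflows/deploy-staging.yml
name: deploy-staging
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # join the tailnet as tag:ci so the ACL grants SSH to tag:server
      - uses: tailscale/github-action@v3
        with:
          oauth-client-id: ${{ secrets.TS_OAUTH_CLIENT_ID }}
          oauth-secret: ${{ secrets.TS_OAUTH_SECRET }}
          tags: tag:ci

      - name: deploy staging
        env:
          INFISICAL_CLIENT_ID: ${{ secrets.INFISICAL_CLIENT_ID }}       # Universal Auth (machine identity)
          INFISICAL_CLIENT_SECRET: ${{ secrets.INFISICAL_CLIENT_SECRET }}
        run: yoink up --config yoink.staging.yaml --tag api-staging=${{ github.sha }}
```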
brew install oddur/yoink/yoink
cargo install --git https://github.com/oddur/yoink yoink
Download from the latest release — darwin-arm64, darwin-x86_64, linux-x86_64.
One YAML file per project, by convention yoink.yaml. Service fragments can be split per-service via an include: glob. See examples/ for the full shape.
deploy:
networks: [public, api, redis, otel] # named tiers — services join the ones they need
hosts:
- { address: my-server, user: deploy } # reachable via tailnet hostname
secrets:
provider: infisical # optional; yoink talks to Infisical's REST API directly
project_id: <your-infisical-project>
environment: prod
# domain: https://infisical.example.com # only for self-hosted Infisical instances
include:
- services/*.yaml
# services/api.yaml:
services:
- name: api
image: ghcr.io/you/api
# tag: <required at deploy time via --tag api=<sha>, or set here>
depends_on: [redis]
networks: [api, redis] # can dial only services on these networks
secrets: [DATABASE_URL]
pre_deploy:
- name: api-migrate
image: ghcr.io/you/api
tag: { service: api } # mirror the runtime tag
cmd: ["migrate"]
secrets: [DATABASE_MIGRATE_URL]
run:
port: 8080
replicas: 2
healthcheck_path: /health
healthcheck_timeout: 60s
drain_timeout: 30s
options:
memory: 512m
cap_drop: [ALL]
read_only: true
tmpfs: { /tmp: "size=64m,mode=1777" }
      network_aliases: [api]   # caddy-docker-proxy upstream key

When `secrets.provider: infisical` is set, yoink resolves a bearer token in this order:
- Universal Auth (machine identity) — `INFISICAL_CLIENT_ID` + `INFISICAL_CLIENT_SECRET`. The CI path; create a machine identity in Infisical's UI and expose the pair as repo secrets.
- Raw bearer token — `INFISICAL_TOKEN`. For one-off runs where you already have a token in hand.
- Cached browser-flow login — yoink reads the session that the `infisical` CLI persists after `infisical login`. Run it once on your laptop:

      brew install infisical/get-cli/infisical
      infisical login   # add --domain=… for self-hosted

  yoink then reuses that session — the CLI binary itself is not invoked at deploy time, only its keychain entry is read.
yoink preflight verify Docker is reachable on each host
yoink up [--service NAME]+ [--tag …]+ reconcile to spec (rolling, healthcheck-gated)
yoink prune [--dry-run] remove yoink-managed containers no longer in config
yoink rollback SERVICE [--tag <sha>] redeploy the previous version
yoink history SERVICE [--limit N] deploy history for a service (newest first)
yoink status [--json] snapshot of what's running where
yoink dump dense JSON of everything yoink can observe
yoink diff SERVICE [--tag <sha>] spec diff between live and a target tag
yoink logs SERVICE [--follow] [--tail N] stream container logs
yoink exec SERVICE -- CMD ARGS… one-shot command inside a running container
yoink shell SERVICE interactive PTY shell (k9s-style)
yoink debug SERVICE alpine debug sidecar in the target's pid+net ns
yoink restart SERVICE bounce a container without re-deploying
yoink kill SERVICE [--yes] SIGKILL a container; no graceful drain
yoink pull SERVICE [--tag <sha>] pre-warm an image on the host
yoink networks list docker networks across hosts
yoink volumes list docker volumes across hosts
yoink top htop-style snapshot of CPU/mem per container
yoink version SERVICE print currently-running tag(s)
yoink validate parse config + check for errors without acting
yoink tui [--mode dashboard|hosts|…] interactive ratatui dashboard
All commands take --config <path> (default: ./yoink.yaml). -v for structured logs to stderr.
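Strung together, a typical deploy loop looks something like this (the tag value is illustrative):

```sh
yoink validate                    # parse config + check for errors without acting
yoink preflight                   # verify Docker is reachable on each host
yoink up --tag api=3f9c2ab        # rolling, healthcheck-gated reconcile in dependency order
yoink status                      # snapshot of what's running where
yoink history api --limit 5       # who deployed what, when
yoink rollback api                # roll back if the new tag misbehaves
```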
yoink tui
Heavily inspired by k9s. Keyboard-driven panes for dashboard (drift across all services), hosts (per-host detail with live CPU/mem), services (per-service detail with replica info), container logs (live tail with / filter), per-container detail (env, mounts, healthcheck, networks).
Operator gestures from the dashboard:
- `↑`/`↓` or `j`/`k` — navigate
- `enter` — drill into selected row
- `i` — container inspect
- `K` — kill (with confirmation)
- `U` — reconcile this service
- `A` — reconcile all services
- `P` — prune
- `!` — shell into container
- `D` — debug sidecar (alpine in target's pid+net ns)
- `?` — help overlay
- `q` — quit
cargo build --release # binary lands at target/release/yoink
cargo test --workspace # unit + integration tests
cargo clippy --all-targets -- -D warnings

In production use on a small fleet. Pre-1.0 and not (yet) widely adopted, so no semver-stability promise — but the core surface (`up`, `status`, `rollback`, the YAML schema) is unlikely to break.
MIT — see LICENSE.