A small collection of Dev Container Features for Claude Code and friends, plus a lightweight nvm feature. Published to GHCR under the `sourecode/devcontainer-features` namespace.

This repo also contains the Coder workspace template (`Dockerfile.workspace` and `main.tf`) that hosts the devcontainers these features get installed into — see Coder workspace template below.
| Feature | OCI reference | Summary |
|---|---|---|
| `claude-code` | `ghcr.io/sourecode/devcontainer-features/claude-code:2` | Installs the Claude Code CLI via the official native installer into `/usr/local/bin`. Declares `~/.claude` and `~/.claude.json` as persistence targets via the `home-persist` manifest. Requires Node.js — automatically pulls in the `nvm` feature via `dependsOn`. |
| `rtk` | `ghcr.io/sourecode/devcontainer-features/rtk:2` | Installs rtk, an LLM token-reducing CLI proxy, into `/usr/local/bin`. Auto-patches Claude Code via `postCreateCommand` so the hook is written against the live `~/.claude`, not the image. |
| `context-mode` | `ghcr.io/sourecode/devcontainer-features/context-mode:2` | Installs the context-mode Claude Code plugin via `postCreateCommand`, so the plugin lands in `~/.claude/plugins` (which `home-persist` symlinks into the persistence volume when installed). |
| `home-persist` | `ghcr.io/sourecode/devcontainer-features/home-persist:1` | Symlinks declared `$HOME` paths into a per-owner persistence volume at `/mnt/home-persist`. Features and users contribute paths via JSON manifests in `/etc/devcontainer-persist.d/`; an `onCreateCommand` resolver materializes the symlinks on every create. |
| `nvm` | `ghcr.io/sourecode/devcontainer-features/nvm:2` | Installs nvm system-wide at `/usr/local/share/nvm` and optionally a Node version (defaults to LTS), with `node`/`npm`/`npx` symlinked into `/usr/local/bin`. No yarn. |
All binaries land in `/usr/local/bin` (or `/usr/local/share/...`) rather than the user's home, so they stay image-owned. Per-user state that needs to survive rebuilds is declared explicitly via the `home-persist` manifest — see `docs/persistence.md`. `rtk` and `context-mode` declare `installsAfter` for both `ghcr.io/sourecode/devcontainer-features/claude-code` and `ghcr.io/anthropics/devcontainer-features/claude-code`, so the runtime orders them after whichever claude-code feature is present.
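The resolver's core move can be sketched like this. This is a simplified stand-in for `home-persist`'s real `resolve.sh` (whose exact logic isn't shown here), using temp dirs in place of `$HOME` and `/mnt/home-persist` so it runs anywhere:

```shell
#!/bin/sh
# Sketch: seed the persistence volume from $HOME on first create, then
# replace the $HOME path with a symlink into the volume. Subsequent creates
# find the destination already populated and just re-link.
set -eu

PERSIST_ROOT="$(mktemp -d)"   # stands in for /mnt/home-persist/<owner>
HOME_DIR="$(mktemp -d)"       # stands in for $HOME
mkdir -p "$HOME_DIR/.claude"  # pre-existing state to carry over

persist_path() {
  rel="$1"
  src="$HOME_DIR/$rel"
  dst="$PERSIST_ROOT/$rel"
  if [ ! -e "$dst" ]; then
    mkdir -p "$(dirname "$dst")"
    if [ -e "$src" ]; then
      mv "$src" "$dst"        # first create: seed the volume from $HOME
    else
      mkdir -p "$dst"         # nothing yet: start empty in the volume
    fi
  fi
  rm -rf "$src"
  ln -s "$dst" "$src"         # $HOME path now points into the volume
}

# In the real feature, these paths come from the JSON manifests
# in /etc/devcontainer-persist.d/, not a hardcoded call.
persist_path ".claude"

readlink "$HOME_DIR/.claude"  # prints the path inside the persistence volume
```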
Add them to any `.devcontainer/devcontainer.json`, on top of whatever base image you already use. Features run inside the image during build (as root) and install system-wide under `/usr/local/`. After a rebuild, the tools are available on every user's PATH with no home-directory footprint.
No options. Always installs the latest release. Declares `dependsOn` for `ghcr.io/sourecode/devcontainer-features/nvm:2`, so adding `claude-code` to a devcontainer automatically pulls in `nvm` (and therefore Node.js) even if you don't list `nvm` yourself.
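For example, listing only `claude-code` is enough — the runtime resolves the `dependsOn` edge and installs `nvm` first:

```json
{
  "image": "debian:trixie-slim",
  "features": {
    "ghcr.io/sourecode/devcontainer-features/claude-code:2": {}
  }
}
```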
| Option | Type | Default | Purpose |
|---|---|---|---|
| `autoPatchClaude` | boolean | `true` | Run `rtk init -g --auto-patch` to wire rtk's hook into Claude Code. No-op if the `claude` CLI is not on the user's PATH. |
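To install rtk without touching the Claude Code config, disable the option in `devcontainer.json`:

```json
{
  "features": {
    "ghcr.io/sourecode/devcontainer-features/rtk:2": { "autoPatchClaude": false }
  }
}
```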
No options. Runs via `postCreateCommand`, so it no-ops (with a warning) if the `claude` CLI isn't on PATH when the container is created — add a claude-code feature as well. `installsAfter` handles ordering for either `sourecode/` or `anthropics/` claude-code.
| Option | Type | Default | Purpose |
|---|---|---|---|
| `version` | string | `0.40.4` | nvm release tag to install (without the leading `v`). |
| `node` | string | `lts` | Node version to install via nvm. `lts` uses `nvm install --lts`; `none` skips the Node install. Anything else is passed as-is to `nvm install`. |
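For example, pinning both the nvm release and a specific Node major (the `"22"` value is illustrative — anything other than `lts`/`none` passes straight through to `nvm install`):

```json
{
  "features": {
    "ghcr.io/sourecode/devcontainer-features/nvm:2": {
      "version": "0.40.4",
      "node": "22"
    }
  }
}
```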
| Option | Type | Default | Purpose |
|---|---|---|---|
| `paths` | string | `""` | Comma-separated list of `$HOME`-relative paths to persist (e.g. `.claude,.claude.json,.gitconfig`). Written to `/etc/devcontainer-persist.d/user.json` at build time. Leave empty if you only want features to contribute paths. |
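For example, persisting your git config and shell history (illustrative paths) in addition to whatever the features declare:

```json
{
  "features": {
    "ghcr.io/sourecode/devcontainer-features/home-persist:1": {
      "paths": ".gitconfig,.bash_history"
    }
  }
}
```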
Requires a bind mount from a persistent source to `/mnt/home-persist` in `devcontainer.json`. See `docs/persistence.md` for the full model.
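A sketch of such a mount using devcontainer.json's string `mounts` syntax — the host-side `source` path here is hypothetical; use whatever persistent location suits your setup:

```json
{
  "mounts": [
    "source=/srv/devcontainer-homes/${localEnv:USER},target=/mnt/home-persist,type=bind"
  ]
}
```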
Claude login (`~/.claude/.credentials.json`) and chat history (`projects/`, `sessions/`, `session-env/`) live in `~/.claude`. The claude-code feature declares `.claude` and `.claude.json` in its manifest, so installing `home-persist` alongside it — plus bind-mounting a persistent source to `/mnt/home-persist` — is enough to carry state across rebuilds. See `docs/persistence.md` for the full model.
`Dockerfile.workspace` builds a Coder workspace image and `main.tf` is the Coder template that launches it. The workspace container runs its own dockerd under the sysbox runtime (`runtime = "sysbox-runc"` in `main.tf`), so `@devcontainers/cli` inside the workspace talks to a local daemon and bind-mount paths resolve against the same filesystem the daemon sees.
```
host docker daemon
└── workspace container (sourecode/coder-workspace:latest, runtime=sysbox-runc)
    ├── systemd (PID 1)
    ├── dockerd
    ├── coder-agent.service (main agent)
    └── @devcontainers/cli up
        └── devcontainer container(s) (compose, features, lifecycle all work)
            └── coder sub-agent (runs code-server / jetbrains here)
```
Editors run inside the devcontainer via Coder's Dev Containers integration (`coder_devcontainer.subagent_id`). When you click "Open in code-server" in the Coder UI, you land in the devcontainer's filesystem with its tools, not the outer workspace.

One extra container over plain Coder workspaces. No DooD path translation.
- `Dockerfile.workspace` — builds the workspace image. Ubuntu + systemd + docker-ce + Node LTS + `@devcontainers/cli` + a `coder` user at UID 1000.
- `main.tf` — Coder template. Launches the workspace container under `runtime = "sysbox-runc"`, injects `CODER_AGENT_TOKEN` via env, and uploads the agent init script to `/etc/coder/agent-init.sh`. The `coder-agent.service` systemd unit (baked into the image, see `Dockerfile.workspace`) runs that script on boot.
- Linux kernel >= 5.12 (>= 6.3 ideal, avoids shiftfs entirely)
- Native Docker (not the snap) at `/usr/bin/docker`
- Sysbox installed (see below)
- Your existing Coder server (this template was developed against a docker-compose-deployed Coder)
Zero-container-deletion install; tolerates a single dockerd restart.

```shell
# 1. pre-populate /etc/docker/daemon.json so sysbox's post-install step
#    doesn't need to touch the network config itself
sudo tee /etc/docker/daemon.json >/dev/null <<'JSON'
{
  "bip": "172.24.0.1/16",
  "default-address-pools": [
    { "base": "172.31.0.0/16", "size": 24 }
  ]
}
JSON
# Pick CIDRs free of your existing networks:
#   docker network inspect $(docker network ls -q) | grep -i subnet

# 2. one controlled restart so dockerd loads the keys
sudo systemctl restart docker

# 3. install sysbox (Ubuntu/Debian amd64)
wget https://downloads.nestybox.com/sysbox/releases/v0.7.0/sysbox-ce_0.7.0-0.linux_amd64.deb
sudo apt-get install -y jq fuse3 ./sysbox-ce_0.7.0-0.linux_amd64.deb

# 4. verify
docker info | grep -i runtime          # should list sysbox-runc
systemctl status sysbox --no-pager
```

Smoke test that nested Docker works under sysbox:

```shell
CID=$(docker run -d --rm --runtime=sysbox-runc nestybox/ubuntu-noble-systemd-docker)
sleep 15
docker exec "$CID" docker run --rm hello-world   # should print the hello-world greeting
docker stop "$CID"
```

Build the workspace image:

```shell
docker build -f Dockerfile.workspace -t sourecode/coder-workspace:latest .
```

The image must exist in the host's local image store (or in a registry the host can pull from). It is referenced by `var.workspace_image` in `main.tf`.
If your Coder runs inside a docker-compose stack and you prefer not to install `coder` on the host, use the CLI that's baked into the Coder server image:

```shell
# copy the template files into the Coder container
docker exec coder-coder-1 mkdir -p /tmp/tpl
docker cp ./main.tf coder-coder-1:/tmp/tpl/main.tf
docker cp ./Dockerfile.workspace coder-coder-1:/tmp/tpl/Dockerfile.workspace

# login once
docker exec -it coder-coder-1 /opt/coder login http://localhost:7080

# push
docker exec -it coder-coder-1 /opt/coder templates push coder-template -d /tmp/tpl --yes
```

Or install the `coder` CLI locally and push from the repo dir directly.
If you push a newer version of the template but workspaces still launch from an old image, check **Templates → Settings → Variables** in the Coder UI. A persisted override for `workspace_image` wins over the default in `main.tf` across version bumps. Clear it or set it to `sourecode/coder-workspace:latest`.
A workspace pinned to an older template version does not auto-upgrade. After pushing a new version, either:

- Click **Update** on the workspace in the UI, or
- Run `coder update <workspace-name>` (from wherever you have the CLI).

- **"Agent is taking longer than expected to connect"** — the workspace container exited instead of running systemd. Check:

  ```shell
  CID=$(docker ps -a --filter "name=coder-" -q | head -1)
  docker inspect "$CID" --format '{{.HostConfig.Runtime}} {{.Config.Image}} {{.State.Status}}'
  docker logs "$CID" | tail -50
  ```

  Runtime must be `sysbox-runc` (hardcoded in `main.tf`); the image should match whatever `var.workspace_image` resolves to (default `sourecode/coder-workspace:latest`). If either is wrong, fix the template / variable override and recreate.

- **Agent up but nothing connects** — inspect systemd and the agent unit:

  ```shell
  docker exec "$CID" systemctl is-system-running
  docker exec "$CID" systemctl status docker coder-agent --no-pager
  docker exec "$CID" journalctl -u coder-agent --no-pager -n 100
  docker exec "$CID" ls -la /etc/coder/   # expect agent-init.sh present and executable
  docker exec "$CID" bash -lc "tr '\0' '\n' < /proc/1/environ | grep CODER_AGENT_TOKEN"
  ```

- **Inner `@devcontainers/cli up` hangs / errors** — run it by hand inside the workspace as the `coder` user to see the real output:

  ```shell
  docker exec -u coder -it "$CID" bash -lc "cd ~/<repo> && devcontainer up --workspace-folder ."
  ```
`@devcontainers/cli` assumes the CLI and the Docker daemon share a filesystem. When Coder launches a workspace container and mounts the host socket in, the CLI (inside the workspace) and the daemon (on the host) see different filesystems — every bind-mount path the CLI emits is wrong from the daemon's point of view. That's the root cause of the `bind source path does not exist` failures.
The principled fixes are: path alignment (fragile), DinD (insecure),
sysbox (safe DinD), envbuilder (collapse into one container), or CI pre-build
(move work out of runtime). This template uses sysbox so the workspace's own
dockerd runs safely inside it, paths resolve naturally, and compose-based
devcontainers work unchanged.
```
.github/workflows/
  publish-features.yml       # publishes every src/<id>/ to GHCR
src/
  claude-code/
    devcontainer-feature.json
    install.sh
  context-mode/
    devcontainer-feature.json
    install.sh
  home-persist/
    devcontainer-feature.json
    install.sh
    resolve.sh
  nvm/
    devcontainer-feature.json
    install.sh
  rtk/
    devcontainer-feature.json
    install.sh
docs/
  migration-guide.md
  persistence.md
Dockerfile.workspace         # Coder workspace image (Ubuntu + systemd + dockerd + @devcontainers/cli)
main.tf                      # Coder template that launches the workspace under sysbox-runc
```
Each feature directory follows the Dev Container Features spec: a `devcontainer-feature.json` with metadata and options, plus an `install.sh` that runs as root inside the container during the build.

- `install.sh` starts as `root`. Prefer system-wide install paths (`/usr/local/bin`, `/usr/local/share/<id>`, `/etc/profile.d`) over anything under the remote user's home. Paths under `$HOME` that the feature needs to persist across rebuilds should be declared via a `home-persist` manifest (see below), not written at build time — the symlinks don't exist yet during `install.sh`.
- If a tool's upstream installer insists on writing to `$HOME`, relocate the resulting binary to `/usr/local/bin` (see `src/claude-code/install.sh`). If the tool supports an override env var (e.g. `RTK_INSTALL_DIR`), pass it directly.
- For anything that genuinely needs to live in the user's real home (credentials, plugin state, shell-rc tweaks), emit a script to `/usr/local/share/<id>/post-create.sh` and wire it via `postCreateCommand` in `devcontainer-feature.json` so it runs after the `home-persist` resolver has symlinked the target paths into place.
- If your feature writes persistent state under `$HOME`, declare those paths by dropping a JSON manifest in `install.sh`:

  ```shell
  mkdir -p /etc/devcontainer-persist.d
  cat > /etc/devcontainer-persist.d/<your-feature>.json <<'EOF'
  { "source": "<your-feature>", "paths": [".your-tool"] }
  EOF
  ```

  The `home-persist` feature's `onCreateCommand` picks it up and symlinks each path into the persistence volume. See `docs/persistence.md`.
- Feature options are exposed as uppercased environment variables (e.g. option `autoPatchClaude` → `$AUTOPATCHCLAUDE`). Always apply a default: `"${FOO:-true}"`.
- The working directory when `install.sh` runs is the extracted feature folder, so sibling files are accessible via `"$(dirname "$0")/..."`.
- Don't assume the base image has any particular tools — install `curl`, `ca-certificates`, etc. from `apt-get` if absent. Keep installs idempotent where reasonable.
- Use `installsAfter` for soft ordering (e.g. `rtk` lists both the `sourecode` and `anthropics` claude-code IDs), and `dependsOn` for hard requirements that should auto-pull another feature (e.g. `claude-code` → `nvm`).
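Both ordering mechanisms live in the feature's metadata. A hypothetical `devcontainer-feature.json` for a new feature (names and options here are illustrative, not from this repo) might combine them like this:

```json
{
  "id": "my-tool",
  "version": "1.0.0",
  "name": "My Tool",
  "description": "Installs my-tool into /usr/local/bin",
  "options": {
    "autoPatch": {
      "type": "boolean",
      "default": true,
      "description": "Wire my-tool into Claude Code after create"
    }
  },
  "dependsOn": {
    "ghcr.io/sourecode/devcontainer-features/nvm:2": {}
  },
  "installsAfter": [
    "ghcr.io/sourecode/devcontainer-features/claude-code",
    "ghcr.io/anthropics/devcontainer-features/claude-code"
  ]
}
```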
Using the `@devcontainers/cli`:

```shell
npm i -g @devcontainers/cli

# Test a feature in isolation against a chosen base image:
devcontainer features test \
  --features rtk \
  --base-image debian:trixie-slim \
  .
```

- `mkdir -p src/<id>`
- Write `src/<id>/devcontainer-feature.json` with `id`, `version`, `name`, `description`, `options`, and optional `installsAfter`.
- Write `src/<id>/install.sh` (make it executable: `chmod +x install.sh`).
- Bump the `version` for every change (MAJOR.MINOR.PATCH). The publish workflow pushes each declared version plus rolling `MAJOR`, `MAJOR.MINOR`, and `latest` tags.
- Commit to `master` — the **Publish Features** workflow runs on push and pushes to `ghcr.io/sourecode/devcontainer-features/<id>:<tags>`.
The **Publish Features** workflow (`.github/workflows/publish-features.yml`) uses the official `devcontainers/action@v1` with `publish-features: true` and targets GHCR:

- `oci-registry: ghcr.io`
- `features-namespace: sourecode/devcontainer-features`
GHCR supports nested paths natively, so each feature publishes to `ghcr.io/sourecode/devcontainer-features/<feature-id>`, and the collection metadata publishes cleanly at the namespace root — unlike Docker Hub, which rejects artifacts at the namespace level.
Authentication uses the built-in `GITHUB_TOKEN` (with `packages: write`), so there are no additional repository secrets to manage. The packages published this way attach to the repo on GitHub Packages; to make them public, set the package visibility to **Public** in the package settings.

The workflow triggers on pushes to `master` that touch `src/**` or the workflow file, and can also be run manually via **Run workflow** in the Actions tab.
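A sketch of what such a workflow looks like — the step layout and input names follow `devcontainers/action`'s documented inputs, and should be treated as an approximation rather than this repo's exact file:

```yaml
name: Publish Features

on:
  push:
    branches: [master]
    paths: ["src/**", ".github/workflows/publish-features.yml"]
  workflow_dispatch:

permissions:
  contents: read
  packages: write

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Publish features to GHCR
        uses: devcontainers/action@v1
        with:
          publish-features: "true"
          base-path-to-features: "./src"
          oci-registry: ghcr.io
          features-namespace: sourecode/devcontainer-features
```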
Update `version` in `devcontainer-feature.json`. Each push that lands in `master` publishes:

- `ghcr.io/sourecode/devcontainer-features/<id>:<MAJOR>.<MINOR>.<PATCH>`
- `ghcr.io/sourecode/devcontainer-features/<id>:<MAJOR>.<MINOR>`
- `ghcr.io/sourecode/devcontainer-features/<id>:<MAJOR>`
- `ghcr.io/sourecode/devcontainer-features/<id>:latest`
MIT — see LICENSE.
Example `devcontainer.json` combining the features:

```json
{
  "image": "debian:trixie-slim",
  "features": {
    "ghcr.io/sourecode/devcontainer-features/claude-code:2": {},
    "ghcr.io/sourecode/devcontainer-features/rtk:2": { "autoPatchClaude": true },
    "ghcr.io/sourecode/devcontainer-features/context-mode:2": {}
  }
}
```