Remote ephemeral sandboxed shell environment on Cloud Run.
Connects to a Cloud Run service over gRPC and gives you an interactive bash session with a PTY, or runs a one-shot command.
Each connection gets a fresh container instance. The shell prompt shows remaining request time so you know when Cloud Run will kill the connection.
The environment can be customized by editing the base.yaml
file to install other packages.
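For example, a base.yaml in apko's config format lists packages under contents. A minimal hedged sketch (the real base.yaml may differ; package names here are illustrative):

```yaml
# Illustrative apko config snippet -- not the repo's actual base.yaml.
contents:
  packages:
    - bash
    - curl
    - git
```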
Requires ko, apko, and gcloud.
./deploy.sh
This builds a base image with basic tools using apko, then builds the server with ko on top of it, and deploys to Cloud Run.
Set PROJECT and REGION to override defaults.
go install github.com/imjasonh/crush/cmd/crush@latest
Set CRUSH_SERVICE to your Cloud Run service's host, or pass --service:
# Interactive shell
crush --service crush-xxxxx-ue.a.run.app
# One-shot command
crush --service crush-xxxxx-ue.a.run.app -- ls -la
Passing -e FOO=bar will set the environment variable FOO to bar in the remote session.
crush reads configuration from ./crush.toml in the current directory,
falling back to ~/.config/crush/config.toml. See
crush.toml for a full example.
Restrict outbound network access from the remote container to a list of hosts, IPs, or CIDRs:
[network]
allow = [
"registry.npmjs.org",
"github.com",
"10.0.0.0/8",
]
If allow is absent, the container has full network access (default).
If allow is present but empty (allow = []), all outbound is blocked.
Hostnames are resolved to IPs at session start. DNS and the gRPC control
connection are always permitted.
Set explicit env vars or pass through values from the local environment:
[env]
passthrough = ["LANG", "GITHUB_TOKEN"]
[env.set]
EDITOR = "vim"
passthrough inherits the named variable from the caller's environment
(skipped if unset). [env.set] provides explicit key=value pairs.
CLI -e flags take precedence over both.
-v /local/path:/remote/path uploads local files to the remote container
and keeps them in sync bidirectionally during the session. Multiple -v
flags can be passed.
# Mount a directory
crush -v ./src:/tmp/src -- make -C /tmp/src
# Mount a single file
crush -v ./config.yaml:/tmp/config.yaml -- cat /tmp/config.yaml
# Interactive session with a mount
crush -v ./project:/tmp/project
How sync works:
- On connect, the local path is tar'd and uploaded to the remote path.
- The command starts only after all uploads finish.
- During the session, both sides poll every 500ms for changes:
- Remote→local: The server detects modified or deleted files (by comparing mtimes to a snapshot) and streams them back.
- Local→remote: The client does the same and streams updates to the server.
- On command exit, the server does one final sync before sending the exit status.
Conflict resolution: There is none. If the same file is modified on both sides within the same 500ms window, one side's write wins nondeterministically (last-write-wins). For interactive use this is rarely an issue, but be aware of it for automated workflows.
Limitations and data loss risks:
- Sync is poll-based (500ms). Changes made less than 500ms before the remote command exits may not be synced back. The final sync mitigates this for server→client, but client→server changes during the last 500ms of a session can be lost.
- Concurrent edits to the same file from both sides will silently overwrite. There is no merge or conflict detection.
- Files written to the remote count toward the Cloud Run instance's memory limit (the filesystem is tmpfs). Large mounts can cause OOM.
- Symlinks are not preserved; only regular files and directories are synced.
- Sync is based on mtime comparison. If a file is modified without updating its mtime (rare, but possible), the change will not be detected.
This tool is intended mainly for writing files in the sandbox and syncing them back locally, a workflow where most of these limitations and risks are less severe.
The server exposes a bidirectional gRPC streaming RPC (ExecService.Exec).
The client sends a StartRequest with the command and optional PTY size,
then streams stdin/signals/resize events. The server streams back
stdout/stderr and an exit status.
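A rough .proto sketch of that shape, based only on the description above -- message and field names other than StartRequest and ExecService.Exec are illustrative assumptions, not the actual schema:

```proto
service ExecService {
  rpc Exec(stream ClientMessage) returns (stream ServerMessage);
}

message ClientMessage {
  oneof msg {
    StartRequest start = 1; // first message: command and optional PTY size
    bytes stdin = 2;
    int32 signal = 3;
    WindowSize resize = 4;
  }
}

message ServerMessage {
  oneof msg {
    bytes stdout = 1;
    bytes stderr = 2;
    int32 exit_status = 3;
  }
}
```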
Cloud Run environments have no persistent storage, and the disk is ephemeral, backed by an in-memory filesystem. File writes count toward the memory limit. When the instance OOMs, the connection is closed immediately.
Instances have 4 CPUs and 4Gi of memory by default; this can be changed
by setting the MEMORY and CPU environment variables when running
deploy.sh.
Each server instance handles exactly one Exec call, then gracefully stops so Cloud Run replaces it on the next request.
Requests require GCP authorization, e.g., using gcloud auth login --update-adc.
Users can be granted access via IAM roles -- they'll need the Cloud Run Invoker role (roles/run.invoker).
The default timeout is 5 minutes, but it can be increased by setting the
TIMEOUT environment variable when running deploy.sh. The max timeout
is 1 hour.
Cloud Run pricing for the service with minimal resources is roughly $0.0015 per minute, or $0.09 per hour, billed per second. The service costs nothing while it's not running. Cold start time is consistently <1 second. Connecting with mounted volumes may take longer if there's a lot of data to upload.
- Log sessions to GCP Logging or somewhere
- Run in multiple regions with GCLB so you always get a nearby instance
- Persisted sessions to Cloud Storage or another storage service
- Run as a less-permissioned GCP SA
- Use fsnotify for event-driven sync instead of polling (lower latency, less CPU)
