## Problem
The `supabase_edge_runtime_<project>` container is started by the CLI with the Docker default `nofile` ulimit (1024 soft). For projects with many Edge Functions (200+) and long-running local dev sessions, per-isolate fd usage in `edge-runtime` (Deno) accumulates over hours of uptime until new isolates fail to boot with:

```
worker boot error: failed to bootstrap runtime: Reading /root/.cache/deno/npm/registry.npmjs.org/@types/node/22.5.4/wasi.d.ts: Too many open files (os error 24)
wall clock duration warning: isolate: <id>
early termination has been triggered: isolate: <id>
```
User-visible symptom: alphabetically late Edge Functions (e.g. `settings-*`, `settlement-*`, `smtp-*`, `tenants-*`) start returning HTTP 503 boot errors while earlier-alphabet functions keep working. After `supabase stop && supabase start` the issue resets.
## Reproduction
- Project with ~200 Edge Functions (`supabase/functions/<name>/index.ts`)
- `supabase start`
- Let it run for several hours; periodically curl roughly all functions (e.g. via a smoke script)
- Eventually new functions return 503; `docker logs supabase_edge_runtime_<project>` shows "Too many open files (os error 24)"
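The smoke script mentioned above could look roughly like this sketch (mine, not from any Supabase repo; `BASE` assumes the default local functions gateway port, and auth headers are omitted):

```shell
#!/bin/sh
# Hit every local Edge Function once and print any 5xx responses.
# Adjust BASE (port, auth) to match your local stack.
BASE="${BASE:-http://127.0.0.1:54321/functions/v1}"

smoke() {
  for dir in supabase/functions/*/; do
    fn=$(basename "$dir")
    # -w '%{http_code}' prints only the status code
    code=$(curl -s -o /dev/null -w '%{http_code}' "$BASE/$fn")
    case "$code" in
      5*) echo "$fn -> $code" ;;
    esac
  done
}

smoke
```

Once the fd pool is exhausted, the 503s cluster at the end of the alphabet because earlier functions already have warm isolates.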
Verified instrumentation:

```shell
$ docker exec supabase_edge_runtime_<project> sh -c 'ulimit -n'
1024
$ PID=$(docker inspect -f '{{.State.Pid}}' <edge_runtime_container>); \
  sudo ls /proc/$(pgrep -P $PID)/fd | wc -l
676   # after ~30 min idle, climbing under load
```
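The fd check is handy to keep around while reproducing; a small helper (my naming, same `/proc` mechanism as the commands above, Linux only):

```shell
#!/bin/sh
# Count the open file descriptors of a process by PID via /proc.
# Prints 0 if the PID does not exist or /proc/<pid>/fd is unreadable.
fd_count() {
  ls "/proc/$1/fd" 2>/dev/null | wc -l
}

# For the edge-runtime worker, resolve the container's main PID first,
# then count fds of its child process (as in the instrumentation above):
#   PID=$(docker inspect -f '{{.State.Pid}}' supabase_edge_runtime_<project>)
#   fd_count "$(pgrep -P "$PID")"
```

Polling this in a loop makes the climb toward the 1024 soft limit easy to watch.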
The host itself has `nofile=524288` available; only the container is constrained:

```shell
$ docker inspect -f '{{json .HostConfig.Ulimits}}' supabase_edge_runtime_<project>
null
```
## Expected behavior
Either:

- The CLI accepts an option like `--ulimit-nofile=<soft>:<hard>` (e.g. via `supabase init` config, or as an `[edge_runtime]` key in `supabase/config.toml`) that is forwarded to `docker run --ulimit nofile=...`.
- Or the CLI sets a more generous default (e.g. `nofile=65536:65536`), given that local dev with many functions is the primary use case.
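One possible shape for the first option, sketched as a hypothetical `config.toml` key (the key name and value format are my assumption; no such option exists today):

```toml
[edge_runtime]
# Hypothetical: forwarded as `docker run --ulimit nofile=<soft>:<hard>`
# when the CLI starts supabase_edge_runtime_<project>.
ulimit_nofile = "65536:65536"
```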
## Workaround currently in use
`supabase stop && supabase start` resets the fd pool. A naïve `docker run --ulimit ...` recreate of the container, using the same image, entrypoint, env, and volumes taken from `docker inspect`, does not work: the recreated container exits immediately with `main worker has been destroyed`, presumably because the CLI generates additional internal state (JWKS, `SUPABASE_INTERNAL_FUNCTIONS_CONFIG`, project-specific volumes) that isn't fully observable via `docker inspect`. So a userland workaround that bypasses the CLI is hard.
## Environment
- Supabase CLI: 2.84.2 (also affects 2.95.4; its release notes mention no related fix)
- Edge Runtime image: `public.ecr.aws/supabase/edge-runtime:v1.73.0`
- Docker: 29.3.0
- OS: Debian 13 (Linux 6.12)
- ~204 Edge Functions in the project
## Suggested fix
Add a config option (`config.toml` key or CLI flag) that forwards `--ulimit nofile=N:M` when the CLI starts the edge_runtime container. The patch surface is minimal, and users who don't set it keep the current behavior.

Happy to send a PR if there's appetite for it.