Lightweight context pool with per-context resource limits (#1)
Merged
Each ContextPool thread runs a single JSRuntime that hosts many JSContext instances. Unlike a full Runtime (1 OS thread + ~2 MB each), contexts are ~50 KB — ideal for per-connection state in LiveView.

New modules:
- QuickBEAM.ContextPool — manages a runtime thread with many contexts
- QuickBEAM.Context — lightweight GenServer owning a single JS context

Zig layer:
- context_types.zig — pool message types and queue
- context_worker.zig — worker thread managing multiple contexts
- Pool drain callback in WorkerState for await_promise integration

Refactoring:
- Extract compiled JS polyfills to shared QuickBEAM.JS module
- Expose handler maps from Runtime for Context reuse
…PI support
- Route sync call resolution through per-context RuntimeData sync_slots via rd_map on PoolData (accessible from NIF for direct slot signaling)
- ContextPool now spawns N runtime threads (default: schedulers_online) with round-robin context distribution
- Context installs browser/node/beam polyfills during init
- Add tests for browser APIs (URL, crypto, performance, setTimeout) and multi-thread distribution
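The round-robin context distribution mentioned above can be sketched as a pure function. This is an illustrative sketch, not QuickBEAM's internal code; the module and function names are hypothetical.

```elixir
defmodule RoundRobin do
  @moduledoc "Illustrative round-robin assignment of contexts to N runtime threads."

  # Given a monotonically increasing context counter and the number of
  # runtime threads, pick the worker thread index for the next context.
  def pick_thread(counter, thread_count) when thread_count > 0 do
    rem(counter, thread_count)
  end
end

# With 4 runtime threads, successive contexts cycle through 0, 1, 2, 3, 0, ...
Enum.map(0..5, &RoundRobin.pick_thread(&1, 4))
# => [0, 1, 2, 3, 0, 1]
```

Each new context lands on the next thread in the cycle, so contexts spread evenly across the pool regardless of creation order.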
- pool_dom_find, pool_dom_find_all, pool_dom_text, pool_dom_html NIFs
- Context.dom_find/2, dom_find_all/2, dom_text/2, dom_html/1
- Test DOM rendering + querying through context
1. Scale: 1000 contexts, 500 create/destroy cycles, sibling destruction
2. Concurrency: 200-task thundering herd, 100 concurrent Beam.call, callSync serialization on a single thread
3. Memory: create/destroy leak check, 3s rolling churn, 2000 eval cycles
4. Isolation: 100-context global cross-contamination, prototype pollution, built-in tampering
5. Error recovery: sibling resilience, 100 sequential errors, timeout non-blocking, OOM containment
6. Handler contention: slow handler + fast contexts, error isolation, 50 concurrent Beam.call handlers
7. Messaging: 1000 messages across 50 contexts with delivery verification, messages during Beam.call, 500-message burst
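The generic shape of the thundering-herd test above can be sketched in plain Elixir. Here an `Agent` stands in for a pooled context; the real tests exercise QuickBEAM contexts instead.

```elixir
# Many tasks hit one shared resource at once; all must complete without
# error, and the final state must reflect every operation exactly once.
{:ok, agent} = Agent.start_link(fn -> 0 end)

results =
  1..200
  |> Enum.map(fn _ -> Task.async(fn -> Agent.update(agent, &(&1 + 1)) end) end)
  |> Enum.map(&Task.await(&1, 5_000))

# All 200 updates landed and every task returned :ok.
200 = Agent.get(agent, & &1)
true = Enum.all?(results, &(&1 == :ok))
```

The pattern catches lost updates, deadlocks (via the await timeout), and crashes under simultaneous load.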
- Add get_global NIF: uses JS_GetPropertyStr instead of eval("globalThis[...]")
- Add snapshot_globals NIF: Zig-side StringHashMap replaces JS __qb_builtins snapshot
- Add list_globals NIF: JS_GetOwnPropertyNames replaces eval("Object.getOwnPropertyNames")
- Add delete_globals NIF: JS_DeleteProperty replaces try/finally/delete JS wrapper
- Fix callSync blocking: drain pool queue during sync wait (1ms intervals)
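The shape of that callSync fix, waiting for a sync result while periodically draining queued work so sibling contexts keep making progress, can be sketched in Elixir. The actual fix lives on the Zig worker thread; this module and its names are hypothetical.

```elixir
defmodule SyncDrain do
  # Poll for the sync result; between polls, nudge the queue owner to drain
  # pending work. Mirrors the 1ms drain interval described above.
  def await(result_ref, queue_pid, interval \\ 1) do
    receive do
      {:sync_result, ^result_ref, value} -> {:ok, value}
    after
      interval ->
        drain(queue_pid)
        await(result_ref, queue_pid, interval)
    end
  end

  # Stand-in for pumping the pool queue once.
  defp drain(queue_pid), do: send(queue_pid, :pump)
end
```

Without the drain step, a long synchronous wait would starve every other context sharing the same worker thread.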
- Fix NIF ref returns: use enif_make_copy instead of beam.make (which returns raw int)
- Fix list_globals segfault: use raw enif_* calls on worker threads (beam.make_empty_list with .source option crashes on non-NIF threads)
- Rewrite worker protocol to use integer IDs and Beam.call/callSync handlers
- Worker spawn is async via Task to avoid GenServer deadlock
- Workers on context pool spawn as lightweight contexts on the same pool
- Remove snapshot_builtins_js from QuickBEAM.JS (no longer needed)
- Add pool-side get_global NIF for context pool path
All 281 core+worker tests pass. Pre-existing BundlerTest and LocksTest failures unchanged.
- Restore all OXC delegations removed during the js.ex refactor: parse, transform, minify, bundle, bundle_file, walk, collect
- Add newly available OXC delegations: imports, imports!, postwalk/2, postwalk/3, patch_string
- Fix LocksTest: remove start_supervised!(LockManager) since it's already started by the application supervisor

All 1195 tests pass (0 failures).
- Context pool compiles polyfills to QuickJS bytecode once (cached in persistent_term) and loads them via JS_ReadObject instead of parsing source per context. Context creation is 3.2x faster (~5ms vs ~17ms).
- Add pool_load_bytecode and pool_memory_usage NIFs for the context pool.
- Add Context.memory_usage/1 to expose per-context JS memory stats.
- Fix README and architecture docs: update memory numbers from measurements (55 KB bare, 65 KB beam, 375 KB full per context; 530 KB + 2.5 MB thread stack per runtime).
- Remove unnecessary terminate callback from the LiveView example — start_link already links the context to the calling process.

All 1195 tests pass. Test suite runs 45% faster (27s vs 50s).
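The compile-once caching pattern described above can be sketched with `:persistent_term`. This is an illustrative sketch: `compile_polyfills/0` is a stand-in for the real JS-to-bytecode compiler, and the module name is hypothetical.

```elixir
defmodule PolyfillCache do
  @key {__MODULE__, :bytecode}

  # Return cached "bytecode", compiling at most once per VM. Reads from
  # :persistent_term are constant-time, so every context creation after the
  # first pays only a lookup, never a recompile.
  def fetch do
    case :persistent_term.get(@key, :missing) do
      :missing ->
        compiled = compile_polyfills()
        :persistent_term.put(@key, compiled)
        compiled

      compiled ->
        compiled
    end
  end

  # Stand-in for compiling polyfill JS source to QuickJS bytecode.
  defp compile_polyfills, do: :erlang.term_to_binary(:fake_bytecode)
end
```

`:persistent_term` suits write-once data like this: puts are expensive (they trigger a global scan), but the polyfill bytecode is written once and read on every context creation.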
Instead of all-or-nothing :browser, contexts can now request individual API groups: :fetch, :websocket, :worker, :channel, :eventsource, :url, :crypto, :compression, :buffer, :dom, :console, :storage, :locks.

Each group auto-includes its dependencies (core events loaded when any group needs EventTarget/AbortController, process.ts loaded when Worker/WebSocket/EventSource need dispatchers).

Memory per context:
- bare: 58 KB
- beam: 71 KB
- beam + url: 108 KB
- beam + fetch: 231 KB
- beam + fetch + ws: 303 KB
- browser (all): 429 KB
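The dependency auto-include behavior can be sketched as a fixpoint expansion over a dependency map. The edges below are inferred from the description (event-based groups pull in a core events layer, dispatcher-based groups pull in process support); they are illustrative, not the project's actual table.

```elixir
defmodule ApiGroups do
  # Hypothetical dependency edges, derived from the prose above.
  @deps %{
    fetch: [:events],
    channel: [:events],
    websocket: [:events, :process],
    worker: [:events, :process],
    eventsource: [:events, :process]
  }

  # Expand the requested groups to include all transitive dependencies.
  def expand(groups) do
    expand(MapSet.new(groups), MapSet.new(groups))
  end

  defp expand(acc, frontier) do
    next =
      frontier
      |> Enum.flat_map(&Map.get(@deps, &1, []))
      |> MapSet.new()
      |> MapSet.difference(acc)

    if MapSet.size(next) == 0, do: acc, else: expand(MapSet.union(acc, next), next)
  end
end
```

Requesting `[:fetch]` yields fetch plus the events core, while a standalone group like `:url` stays alone, which matches the memory deltas listed above.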
Patch quickjs.c to track malloc_size and enforce malloc_limit per JSContext. All context-level allocation functions (js_malloc, js_calloc, js_mallocz, js_realloc, js_realloc2, js_free) now increment/decrement ctx->malloc_size alongside the runtime total.

New C APIs:
- JS_SetContextMemoryLimit(ctx, limit) — set per-context limit (0 = unlimited)
- JS_GetContextMallocSize(ctx) — get current context allocation

Elixir API:
- Context.start_link(pool: p, memory_limit: 512_000) — limit in bytes
- Context.memory_usage/1 now returns :context_malloc_size

Per-context sizes (individual, not cumulative):
- bare: 73 KB
- beam: 92 KB
- beam + url: 140 KB
- beam + fetch: 313 KB
- browser: 594 KB
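The accounting logic of the C patch can be simulated in Elixir: each allocation charges a per-context counter, and a nonzero limit rejects allocations that would exceed it. This is a simulation of the mechanism for illustration, not the patch itself; the struct and function names are hypothetical.

```elixir
defmodule CtxAlloc do
  # Mirrors the patched JSContext fields: running total and optional cap.
  defstruct malloc_size: 0, malloc_limit: 0

  # Charge an allocation of `bytes` against the context. A limit of 0 means
  # unlimited, as with JS_SetContextMemoryLimit(ctx, 0) in the patch.
  def malloc(%__MODULE__{malloc_limit: limit, malloc_size: size} = ctx, bytes)
      when limit > 0 and size + bytes > limit do
    {:error, :oom, ctx}
  end

  def malloc(%__MODULE__{} = ctx, bytes) do
    {:ok, %{ctx | malloc_size: ctx.malloc_size + bytes}}
  end

  # Freeing returns bytes to the context, as js_free decrements
  # ctx->malloc_size in the patched allocator.
  def free(%__MODULE__{} = ctx, bytes) do
    %{ctx | malloc_size: ctx.malloc_size - bytes}
  end
end
```

The key property: a greedy context hits its own OOM while siblings on the same runtime, with their own counters, are unaffected.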
Patch quickjs.c to add per-context reduction counting. Each interrupt check (~10K opcodes) increments ctx->reduction_count. When it exceeds ctx->reduction_limit, an uncatchable error is thrown, stopping runaway loops. The count resets before each eval/call, so the limit applies per-operation, not cumulatively.

New C APIs:
- JS_SetContextReductionLimit(ctx, limit)
- JS_GetContextReductionCount(ctx)
- JS_ResetContextReductionCount(ctx)

Elixir API:
- Context.start_link(pool: p, max_reductions: 100_000)

After hitting the limit, the context remains usable — subsequent evals work normally with a fresh reduction budget.
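The per-operation reduction budget can likewise be simulated in Elixir: a counter bumps at each "interrupt check", errors once past the limit, and resets before every operation. Again a simulation of the C mechanism, with hypothetical names.

```elixir
defmodule Reductions do
  defstruct count: 0, limit: 0

  # Called at each interrupt check (~10K opcodes in the C patch). Once the
  # limit is exceeded, the current operation is aborted; a limit of 0
  # disables the check.
  def bump(%__MODULE__{limit: limit, count: count}) when limit > 0 and count + 1 > limit do
    {:error, :reduction_limit}
  end

  def bump(%__MODULE__{} = r), do: {:ok, %{r | count: r.count + 1}}

  # Reset before each eval/call, so the budget is per-operation and the
  # context stays usable after a blown budget.
  def reset(%__MODULE__{} = r), do: %{r | count: 0}
end
```

The reset-per-operation design is what makes the context reusable afterward: hitting the limit kills only the runaway eval, not the context.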
…tests
- Use a monotonic next_worker_id counter instead of map_size (which can produce duplicate IDs after worker termination)
- Use Process.exit(:shutdown) instead of blocking Context.stop in the terminate callback
- Add tests for memory_limit, max_reductions, and context_malloc_size
- Fix moduledoc memory numbers, long line, trailing blank line
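The map_size bug above is easy to demonstrate: once a worker terminates, map_size shrinks and the "next" ID collides with a live worker. A minimal sketch of the failure and the monotonic-counter fix (state shape illustrative):

```elixir
# Two workers created with IDs 0 and 1; worker 0 terminates.
workers = %{0 => :worker_a, 1 => :worker_b}
workers = Map.delete(workers, 0)

# map_size now says 1, but ID 1 already belongs to :worker_b;
# using map_size as the next ID would hand out a duplicate.
1 = map_size(workers)

# The fix: carry a counter that only ever increases alongside the map.
state = %{workers: workers, next_worker_id: 2}
{id, state} = {state.next_worker_id, %{state | next_worker_id: state.next_worker_id + 1}}
2 = id
3 = state.next_worker_id
```

Monotonic IDs are never reused, so a message addressed to a dead worker can never be misdelivered to a newer one that inherited its ID.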
dannote added a commit that referenced this pull request on Apr 22, 2026:
…s.lookup

fprof showed Shapes.get_shape (54K calls, 108ms) as the #1 bottleneck. It was still called from Put.put via Shapes.lookup on the hot path. Replace Shapes.lookup(shape_id, key) with Map.fetch(offsets, key) in shape_put, Store.put_obj_key, and put/3 for length.

Get.get: 648ns → 406ns (37% faster)
Preact render: 6.95ms → 6.55ms (5.8% faster)
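The shape of that optimization: instead of a cross-module lookup function resolving shape_id then key, fetch directly from a shape's precomputed offsets map. The data below is illustrative, not the project's actual shape representation.

```elixir
# A shape's offsets map: object keys resolved to storage slot indices.
offsets = %{name: 0, age: 1}

# Hot path after the change: one Map.fetch, no extra function call or
# shape-table indirection on every put.
{:ok, 1} = Map.fetch(offsets, :age)

# Missing keys fall through to the slow path.
:error = Map.fetch(offsets, :missing)
```

On a path executed tens of thousands of times per render, removing even one layer of indirection shows up directly in the profile, hence the 648ns to 406ns improvement reported above.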
Adds
`QuickBEAM.ContextPool` and `QuickBEAM.Context` — a multi-context-per-runtime architecture where many JS contexts share a small number of OS threads, instead of one thread per runtime.
Designed for per-connection JS state in Phoenix LiveView at scale:
10K contexts → ~4.2 GB + 4 threads, vs ~30 GB + 10K threads.
New modules
`QuickBEAM.ContextPool` — pool of N runtime threads (default: `System.schedulers_online()`) with round-robin context distribution.
`QuickBEAM.Context` — lightweight GenServer owning a single `JSContext`. Full API surface: `eval`, `call`, `Beam.call/callSync`, DOM, messaging, handlers, supervision.
Granular API groups
Contexts can load individual API groups instead of the full browser bundle:
Available:
`:fetch`, `:websocket`, `:worker`, `:channel`, `:eventsource`, `:url`, `:crypto`, `:compression`, `:buffer`, `:dom`, `:console`, `:storage`, `:locks`. Dependencies auto-resolve.

QuickJS patches
Per-context memory tracking — `js_malloc`/`js_free`/`js_realloc` track `ctx->malloc_size`. When `ctx->malloc_limit > 0`, allocations exceeding the limit trigger OOM.
Per-context reduction limits — every ~10K opcodes, `ctx->reduction_count` increments. When it exceeds `ctx->reduction_limit`, the current eval is interrupted. The count resets per-operation; the context stays usable.
Precompiled bytecode

Polyfill JS is compiled to QuickJS bytecode once and cached in `persistent_term`. Context creation loads the bytecode via `JS_EvalFunction` — ~3.2x faster than parsing JS text.
NIF operations replacing JS eval
`get_global`, `list_globals`, `snapshot_globals`, `delete_globals` use `JS_GetPropertyStr`/`JS_GetOwnPropertyNames`/`JS_DeleteProperty` directly instead of eval round-trips.
Zig layer
- `context_types.zig` — pool data structures, lock-free queue, per-context RuntimeData
- `context_worker.zig` — worker thread managing a HashMap of contexts with message pump
- `quickbeam.zig`

Tests
35 new tests (13 functional + 22 stress): 1K-context scale, 200-task
thundering herd, memory leak checks, isolation, error recovery, handler
contention, burst messaging.
All 1195 tests pass (7 doctests + 1188 tests).