Make fetch asynchronous with fetch manager #396

Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration — Configuration used: Organization UI · Review profile: CHILL · Plan: Pro · Run ID:
📒 Files selected for processing (1)
📝 Walkthrough

Reworks fetch into an asynchronous model.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant JS as JavaScript
    participant Engine as GocciaEngine
    participant FM as FetchManager
    participant Worker as FetchWorker
    participant MTQ as MicrotaskQueue
    participant Promise as Promise
    JS->>FM: StartFetch(url, method, headers, promise)
    FM->>Worker: spawn worker thread (off-thread HTTP)
    FM->>Promise: root promise (GC protect)
    Note over Worker: perform HTTP HEAD/GET synchronously on worker
    JS->>Engine: await fetch(...)
    Engine->>MTQ: drain/pump microtasks as needed
    MTQ-->>Engine: microtasks processed
    Worker->>FM: post completion (response or error)
    Engine->>FM: PumpCompletions()
    FM->>Promise: settle (resolve/reject) on owning thread
    FM->>MTQ: drain microtasks (promise reactions)
    MTQ-->>FM: reactions executed
    FM->>Promise: unroot promise
    FM-->>Engine: completion processed
    Engine-->>JS: await resumes with result
```
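The flow in the diagram can be modeled with a completion queue. Below is a minimal, hypothetical Python sketch (the actual engine is Object Pascal; `FetchManager`, `start_fetch`, and `pump_completions` here only mirror the names in the diagram, not the real API):

```python
import queue
import threading

class FetchManager:
    """Toy model of the diagram: workers post completions to a queue;
    the owning thread pumps the queue and settles promises itself."""

    def __init__(self):
        self._completions = queue.Queue()  # worker -> owning thread
        self._rooted = {}                  # keeps promises alive until settled

    def start_fetch(self, url, promise):
        self._rooted[id(promise)] = promise  # "root" the promise (GC protect)

        def worker():
            # Stand-in for the real blocking HTTP call on the worker thread.
            self._completions.put((promise, f"response for {url}", None))

        threading.Thread(target=worker).start()

    def pump_completions(self, wait=True):
        """Drain finished requests and settle their promises on this thread."""
        settled = 0
        while True:
            try:
                if wait and settled == 0:
                    item = self._completions.get(timeout=5.0)  # await blocks here
                else:
                    item = self._completions.get_nowait()      # then just drain
            except queue.Empty:
                return settled
            promise, result, error = item
            promise["settled"] = True
            promise["value" if error is None else "error"] = (
                result if error is None else error)
            self._rooted.pop(id(promise), None)  # "unroot" once settled
            settled += 1
```

The key property this models is that the worker thread never touches the promise: it only enqueues a completion, and settlement (plus unrooting) happens on the owning runtime thread inside the pump.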
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ Passed checks (5 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Benchmark Results

386 benchmarks · Interpreted: 🟢 25 improved · 🔴 95 regressed · 266 unchanged · avg -0.8%

arraybuffer.js — Interp: 🔴 2, 12 unch. · avg -0.8% · Bytecode: 🟢 4, 🔴 1, 9 unch. · avg +0.4%
arrays.js — Interp: 🟢 1, 🔴 4, 14 unch. · avg -1.1% · Bytecode: 🟢 3, 16 unch. · avg +1.9%
async-await.js — Interp: 🟢 1, 5 unch. · avg +0.2% · Bytecode: 6 unch. · avg +1.2%
base64.js — Interp: 🔴 3, 7 unch. · avg -2.5% · Bytecode: 🔴 1, 9 unch. · avg -0.7%
classes.js — Interp: 🔴 15, 16 unch. · avg -3.4% · Bytecode: 🟢 1, 30 unch. · avg +1.9%
closures.js — Interp: 🔴 4, 7 unch. · avg -2.6% · Bytecode: 🟢 3, 8 unch. · avg +2.4%
collections.js — Interp: 🟢 3, 🔴 1, 8 unch. · avg +0.4% · Bytecode: 🟢 2, 10 unch. · avg +2.8%
csv.js — Interp: 🔴 3, 10 unch. · avg -1.2% · Bytecode: 🟢 4, 🔴 1, 8 unch. · avg +1.2%
destructuring.js — Interp: 🔴 5, 17 unch. · avg -2.4% · Bytecode: 🟢 5, 🔴 1, 16 unch. · avg +1.8%
fibonacci.js — Interp: 🔴 2, 6 unch. · avg -1.5% · Bytecode: 🟢 2, 🔴 1, 5 unch. · avg +2.7%
float16array.js — Interp: 🟢 2, 🔴 2, 28 unch. · avg -0.0% · Bytecode: 🟢 11, 🔴 1, 20 unch. · avg +3.2%
for-of.js — Interp: 🔴 2, 5 unch. · avg -2.6% · Bytecode: 🟢 1, 6 unch. · avg +1.9%
helpers/bench-module.js — Interp: 0 · Bytecode: 0
iterators.js — Interp: 🟢 1, 🔴 20, 21 unch. · avg -1.9% · Bytecode: 🟢 2, 🔴 22, 18 unch. · avg -3.5%
json.js — Interp: 🔴 3, 17 unch. · avg -1.5% · Bytecode: 🟢 16, 4 unch. · avg +6.9%
jsx.jsx — Interp: 🟢 1, 🔴 4, 16 unch. · avg -1.3% · Bytecode: 🟢 9, 🔴 1, 11 unch. · avg +3.3%
modules.js — Interp: 9 unch. · avg -1.1% · Bytecode: 🟢 1, 🔴 1, 7 unch. · avg +1.2%
numbers.js — Interp: 🔴 3, 8 unch. · avg -1.8% · Bytecode: 🟢 5, 🔴 1, 5 unch. · avg +1.2%
objects.js — Interp: 7 unch. · avg -0.2% · Bytecode: 🟢 7 · avg +10.5%
promises.js — Interp: 🔴 6, 6 unch. · avg -2.9% · Bytecode: 🟢 3, 9 unch. · avg +2.8%
regexp.js — Interp: 🟢 6, 5 unch. · avg +2.0% · Bytecode: 🟢 10, 1 unch. · avg +6.2%
strings.js — Interp: 🔴 1, 18 unch. · avg +2.1% · Bytecode: 🟢 6, 🔴 1, 12 unch. · avg +1.8%
tsv.js — Interp: 🟢 5, 🔴 1, 3 unch. · avg +2.6% · Bytecode: 🟢 5, 4 unch. · avg +3.2%
typed-arrays.js — Interp: 🟢 2, 🔴 12, 8 unch. · avg -2.1% · Bytecode: 🟢 4, 🔴 13, 5 unch. · avg -2.0%
uint8array-encoding.js — Interp: 🟢 3, 🔴 2, 13 unch. · avg +7.2% · Bytecode: 🟢 10, 🔴 2, 6 unch. · avg +11.2%
Measured on ubuntu-latest x64. Benchmark ranges compare cached main-branch min/max ops/sec with the PR run; overlapping ranges are treated as unchanged noise. Percentage deltas are secondary context.

Suite Timing

Measured on ubuntu-latest x64.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/embedding.md`:
- Line 736: The docs mention a nonexistent API symbol, ExecuteWithTiming, in the TGocciaEngine microtask/GC description. Remove the reference to ExecuteWithTiming and update the sentence to state that microtasks are drained automatically after the engine's actual entry points (TGocciaEngine.Execute and TGocciaEngine.ExecuteProgram), so embedders are not directed to a nonexistent function.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: aebcae81-95cb-441a-8f5e-62458f013519
📒 Files selected for processing (13)
- docs/built-ins.md
- docs/decision-log.md
- docs/embedding.md
- docs/interpreter.md
- docs/testing-api.md
- source/app/GocciaREPL.dpr
- source/units/Goccia.Builtins.Benchmark.pas
- source/units/Goccia.Builtins.GlobalFetch.pas
- source/units/Goccia.Builtins.TestingLibrary.pas
- source/units/Goccia.Engine.pas
- source/units/Goccia.Evaluator.pas
- source/units/Goccia.FetchManager.pas
- tests/built-ins/fetch/fetch-async.js
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@source/units/Goccia.FetchManager.pas`:
- Around lines 47-49: The current implementation spawns a new TGocciaFetchWorker for every call to fetch(), each reserving FETCH_WORKER_STACK_SIZE (8 MB), which can exhaust threads or VM address space. Modify fetch() / the component that creates TGocciaFetchWorker so it enforces a per-runtime cap on concurrent workers: add a bounded semaphore/counter (e.g., MaxFetchWorkers) that must be acquired before creating a TGocciaFetchWorker and released when the worker finishes. If the cap is reached, either queue the request behind the semaphore or return a fast-fail error to the caller (make this behavior configurable). Ensure the semaphore is acquired wherever TGocciaFetchWorker instances are created and released on worker termination to prevent leaks.
- Around lines 452-464: TGocciaFetchManagerImpl.DiscardPending currently removes temp roots and clears FPending, but leaves FState able to accept late worker completions. Update DiscardPending to isolate the old state so late completions cannot leak into the next run, by rotating/abandoning it: capture the current FState into a local variable, replace FState with a fresh/disabled state (or flip an "abandoned" flag on the old state) so subsequent worker completions are rejected or ignored, then remove temp roots and clear the old state's completions (via the captured local) to fully drain it. Ensure methods that enqueue completions check the state's identity/abandoned flag so late responses are discarded rather than applied to the new live FState.
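The worker-cap suggestion in the first comment above can be sketched with a bounded semaphore. This is a hypothetical Python illustration only (the real fix belongs in the Pascal unit; `MAX_FETCH_WORKERS` and `CappedFetchPool` are invented names standing in for the proposed `MaxFetchWorkers` mechanism):

```python
import threading

MAX_FETCH_WORKERS = 16  # per-runtime cap, matching the documented limit

class CappedFetchPool:
    def __init__(self, cap=MAX_FETCH_WORKERS, fail_fast=True):
        self._slots = threading.BoundedSemaphore(cap)
        self._fail_fast = fail_fast  # reject vs. queue when the cap is hit

    def start_fetch(self, job):
        # Acquire a slot before spawning; blocking=False gives fast-fail,
        # blocking=True queues the caller behind the semaphore instead.
        if not self._slots.acquire(blocking=not self._fail_fast):
            # In the engine this would surface as a TypeError rejection.
            raise RuntimeError("fetch worker cap reached")

        def worker():
            try:
                job()
            finally:
                self._slots.release()  # always release, even on error

        threading.Thread(target=worker).start()
```

Acquiring on the caller's thread before spawning, and releasing in a `finally` on the worker, is what prevents both over-spawning and slot leaks.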
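The rotate-and-abandon pattern from the second comment above, sketched hypothetically in Python (`FetchState`, `enqueue_completion`, and `discard_pending` mirror names from the review comment, not the actual Pascal code):

```python
import threading

class FetchState:
    """One run's worth of fetch state; workers hold a reference to it."""
    def __init__(self):
        self.abandoned = False
        self.completions = []
        self.lock = threading.Lock()

class FetchManagerImpl:
    def __init__(self):
        self._state = FetchState()

    def current_state(self):
        # Workers capture the state at spawn time and enqueue against it.
        return self._state

    def enqueue_completion(self, state, completion):
        with state.lock:
            if state.abandoned:
                return False          # late completion from a discarded run
            state.completions.append(completion)
            return True

    def discard_pending(self):
        old = self._state             # capture the old state
        self._state = FetchState()    # rotate in a fresh one for the next run
        with old.lock:
            old.abandoned = True      # late workers now see the flag...
            old.completions.clear()   # ...and the old queue is fully drained
```

Because each worker enqueues against the state object it captured, a completion arriving after `discard_pending` hits the abandoned old state and is dropped instead of leaking into the new live state.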
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c8f26bc4-ff90-4f34-a524-75e83609bc1d
📒 Files selected for processing (3)
- docs/embedding.md
- docs/interpreter.md
- source/units/Goccia.FetchManager.pas
🚧 Files skipped from review as they are similar to previous changes (1)
- docs/embedding.md
Actionable comments posted: 1
🧹 Nitpick comments (1)
docs/built-ins.md (1)
Line 753: Accurate documentation of the new async fetch behavior. The technical content correctly describes all key aspects of the new fetch implementation: background workers, Promise settlement on the runtime thread, synchronous await via completion pumping, the 16-worker cap, and TypeError rejection when capacity is exceeded.
Optional: Consider breaking up the long sentence for readability.

The current sentence packs five distinct concepts into one statement. For improved scannability, consider restructuring as a bulleted list:

```diff
-**GocciaScript differences:** Only `GET` and `HEAD` methods. No `Request` object, `AbortSignal`, streaming body, or CORS. Requests run on fetch-specific background workers and settle their Promise on the owning runtime thread; `await fetch(...)` still synchronously waits by pumping fetch completions. Each runtime caps active fetch workers at 16; additional calls reject their returned Promise with `TypeError` until a worker finishes. Requires `--allowed-host` configuration.
+**GocciaScript differences:**
+
+- Only `GET` and `HEAD` methods supported
+- No `Request` object, `AbortSignal`, streaming body, or CORS
+- Requests run on fetch-specific background workers and settle their Promise on the owning runtime thread
+- `await fetch(...)` still synchronously waits by pumping fetch completions
+- Each runtime caps active fetch workers at 16; additional calls reject their returned Promise with `TypeError` until a worker finishes
+- Requires `--allowed-host` configuration
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/built-ins.md` at line 753: the long single sentence under the "GocciaScript differences" line packs multiple concepts together; split it into a short introductory sentence and a bulleted list that separately documents HTTP method support (only GET and HEAD), absent features (no Request object, AbortSignal, streaming body, or CORS), runtime behavior (fetch runs on fetch-specific background workers, Promises settle on the owning runtime thread, and await pumps completions), the concurrency cap (each runtime caps active fetch workers at 16; additional calls reject with TypeError), and the configuration requirement (requires --allowed-host). Update the paragraph around the "GocciaScript differences:" header accordingly to improve scannability.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@source/units/Goccia.FetchManager.pas`:
- Around lines 326-329: StartFetch currently routes any non-HEAD method to HTTPGet, so unsupported methods (POST/PUT/DELETE) are silently sent as GET. Update StartFetch to validate FMethod explicitly: handle 'HEAD' with HTTPHead and 'GET' with HTTPGet, and for any other FMethod set Completion.Response to an error response (or raise/return an error) with a clear message naming the unsupported HTTP method (include FMethod in the message) instead of silently falling back to GET. References: StartFetch, FMethod, HTTPHead, HTTPGet, Completion.Response.
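The explicit dispatch this comment asks for can be sketched as follows. A hypothetical Python illustration, where `http_head`/`http_get` stand in for the Pascal `HTTPHead`/`HTTPGet` calls:

```python
def start_fetch(method, url, http_head, http_get):
    """Dispatch only supported methods; fail loudly on anything else."""
    method = method.upper()
    if method == "HEAD":
        return http_head(url)
    if method == "GET":
        return http_get(url)
    # No silent fallback to GET: surface the unsupported method by name,
    # which the engine would translate into a rejected Promise.
    raise ValueError(f"unsupported HTTP method: {method}")
```

The point of the sketch is the final branch: an unknown method produces an error that names the method, rather than being quietly rewritten to GET.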
---
Nitpick comments:
In `@docs/built-ins.md`:
- Line 753: The long single sentence under the "GocciaScript differences" line packs multiple concepts together. Split it into a short introductory sentence and a bulleted list that separately documents: HTTP method support (only GET and HEAD); absent features (no Request object, AbortSignal, streaming body, or CORS); runtime behavior (fetch runs on fetch-specific background workers, Promises settle on the owning runtime thread, and await pumps completions); the concurrency cap (each runtime caps active fetch workers at 16; additional calls reject with TypeError); and the configuration requirement (requires --allowed-host). Update the paragraph around the "GocciaScript differences:" header accordingly to improve scannability.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 57a7fd29-e553-469f-b2d0-7d364ffba50b
📒 Files selected for processing (2)
- docs/built-ins.md
- source/units/Goccia.FetchManager.pas
Summary
Testing
Commands run: