Fix microtask drain re-entrancy crashing async-generator yield rejection (#465)
Conversation
`TGocciaMicrotaskQueue.DrainQueue` kept already-processed tasks in the queue until a final `FQueue.Clear` at the outermost drain's exit. When a microtask handler triggered a recursive drain (for example, `AwaitValue` on a settled promise calling `DrainMicrotasksAndFetchCompletions`), the inner drain re-iterated from index 0 and re-ran every prior task.

Async generators yielding a rejected promise (or yielding via `yield*` over a non-iterable) hit this on every test262 run: `iter.next()` #1 awaited the yielded rejection, the outer Promise rejected, the `.catch` handler fired in the next drain and called `iter.next()` #2, which in turn drained microtasks, and so on until the call stack overflowed (SIGSEGV).

`DrainQueue` now pops each task with `FQueue.Delete(0)` before invoking it, so recursive drains only see remaining or newly-enqueued tasks.

Also updates `scripts/test262_syntax_filter.py` to mark `generators`, `async-generator`, `WeakMap`, `WeakSet`, and `symbols-as-weakmap-keys` as supported (shipped in #432, #437) and removes the syntax patterns and `SKIP_PATH_SEGMENTS` entries that previously hid those tests from the suite.

test262 (bytecode mode, ASI + compat-var/function/Function on): eligible 39,025 / 47,416; passes 19,470 → 19,595, errors 465 → 0.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
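The crashing shape can be stated in a few lines of plain JavaScript; a spec-compliant engine completes this normally, while the buggy drain re-ran the already-processed settlement task on every nested drain until the stack overflowed (a minimal sketch of the pattern, not the repository's test file):

```javascript
// Minimal reproduction of the crashing shape: yielding a rejected promise
// makes the first next() reject, and the .catch handler then synchronously
// calls next() again from inside a microtask, forcing a nested queue drain.
async function* gen() {
  yield Promise.reject(new Error("boom")); // the yield operand is awaited, so next() #1 rejects
  yield 1;
}

const iter = gen();
iter.next().catch(() => {
  // next() #2 runs while the first drain is still on the stack; on the buggy
  // queue this re-ran every prior task instead of only the pending ones.
  iter.next().then((r) => console.log("done:", r.done));
});
```

After the awaited yield value rejects, the generator completes, so the second `next()` fulfills with `done: true` on a compliant engine.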
📝 Walkthrough: Enables generator-related and WeakMap/WeakSet syntax in the test filter, rewrites the microtask queue drain to avoid reprocessing tasks during re-entrancy, and adds regression tests for microtask ordering and async-generator rejection/iteration edge cases.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 1
🧹 Nitpick comments (1)
source/units/Goccia.MicrotaskQueue.pas (1)
104-109: Avoid `Delete(0)` on every microtask. Line 109 shifts the entire `TList` on each iteration, so draining n microtasks becomes O(n²). That's fine for landing the correctness fix, but Promise-heavy workloads can turn this into a hotspot. A head index or queue/deque structure would keep the re-entrancy fix without the per-item memmove.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@source/units/Goccia.MicrotaskQueue.pas` around lines 104 - 109, The loop currently calls FQueue.Delete(0) which shifts the whole TList each iteration (O(n²)); change the queue to use a head index to make pops O(1): add a field like FHead: Integer, push items with FQueue.Add as before, read the next task with Task := FQueue[FHead] and then increment FHead instead of calling FQueue.Delete(0); update the loop condition to while FHead < FQueue.Count do (or use Count = FQueue.Count - FHead), and when FHead = FQueue.Count reset FQueue.Clear and FHead := 0 (or periodically compact by moving remaining items to index 0 when FHead grows past a threshold). Apply these changes around the existing CheckExecutionTimeout / CheckInstructionLimit calls so behavior is unchanged but per-item memmoves are removed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/language/async-generators/yield-rejected-promise.js`:
- Around line 26-34: The current promise only handles the rejection path of the
first iter.next(), so if iter.next() unexpectedly fulfills the test will hang;
modify the first iter.next() call to handle both fulfillment and rejection: on
fulfillment immediately reject (or throw) the outer promise with a clear error
(e.g. "first iter.next() unexpectedly fulfilled"), and on rejection continue to
capture firstRejection and chain the subsequent iter.next() as before
(references: iter.next(), firstRejection, chainSettled).
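The suggested two-branch shape can be sketched as follows; `runTest` and the result object are hypothetical harness names for illustration, not the repository's actual test code:

```javascript
// Sketch of the review suggestion: register handlers on BOTH settlement
// paths of the first iter.next(), so an unexpected fulfillment fails fast
// instead of hanging the test until the runner's timeout.
function runTest(iter) {
  return new Promise((resolve, reject) => {
    iter.next().then(
      (result) => {
        // Fulfillment here means the engine regressed; fail loudly.
        reject(new Error("first iter.next() unexpectedly fulfilled: " +
          JSON.stringify(result)));
      },
      (firstRejection) => {
        // Expected path: capture the rejection, then chain the next call.
        iter.next().then((r) => resolve({ firstRejection, done: r.done }), reject);
      }
    );
  });
}
```

With this shape, a generator that rejects as expected resolves the outer promise, while one that fulfills rejects it immediately with a descriptive error.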
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 99c09c90-7f27-4d66-9f92-a8361ec63553
📒 Files selected for processing (4)
- scripts/test262_syntax_filter.py
- source/units/Goccia.MicrotaskQueue.pas
- tests/built-ins/Promise/microtask-ordering.js
- tests/language/async-generators/yield-rejected-promise.js
Benchmark Results (407 benchmarks)
Interpreted: 🟢 60 improved · 🔴 38 regressed · 309 unchanged · avg +1.3%
arraybuffer.js — Interp: 🟢 3, 11 unch. · avg +3.1% · Bytecode: 🔴 11, 3 unch. · avg -6.6%
arrays.js — Interp: 🟢 4, 🔴 1, 14 unch. · avg +1.5% · Bytecode: 🔴 14, 5 unch. · avg -5.2%
async-await.js — Interp: 🔴 3, 3 unch. · avg -1.6% · Bytecode: 🔴 3, 3 unch. · avg -8.5%
async-generators.js — Interp: 2 unch. · avg -0.2% · Bytecode: 🔴 1, 1 unch. · avg -13.0%
base64.js — Interp: 🟢 3, 🔴 2, 5 unch. · avg +0.6% · Bytecode: 🔴 5, 5 unch. · avg -3.1%
classes.js — Interp: 🟢 1, 🔴 1, 29 unch. · avg -0.3% · Bytecode: 🟢 10, 🔴 2, 19 unch. · avg +3.2%
closures.js — Interp: 🟢 1, 🔴 1, 9 unch. · avg -0.8% · Bytecode: 🔴 6, 5 unch. · avg -3.0%
collections.js — Interp: 🔴 1, 11 unch. · avg +0.0% · Bytecode: 🔴 12 · avg -12.3%
csv.js — Interp: 🟢 1, 🔴 1, 11 unch. · avg -1.1% · Bytecode: 🟢 3, 🔴 1, 9 unch. · avg +0.2%
destructuring.js — Interp: 22 unch. · avg +0.5% · Bytecode: 🟢 1, 🔴 7, 14 unch. · avg -2.1%
fibonacci.js — Interp: 8 unch. · avg +0.5% · Bytecode: 🔴 6, 2 unch. · avg -7.6%
float16array.js — Interp: 🟢 8, 🔴 3, 21 unch. · avg +1.1% · Bytecode: 🟢 4, 🔴 21, 7 unch. · avg -7.2%
for-of.js — Interp: 7 unch. · avg +1.7% · Bytecode: 🔴 6, 1 unch. · avg -9.8%
generators.js — Interp: 🔴 1, 3 unch. · avg -2.5% · Bytecode: 🔴 4 · avg -11.8%
iterators.js — Interp: 🟢 3, 🔴 4, 35 unch. · avg +0.1% · Bytecode: 🟢 15, 🔴 8, 19 unch. · avg +1.6%
json.js — Interp: 🟢 4, 16 unch. · avg +1.9% · Bytecode: 🔴 5, 15 unch. · avg -3.9%
jsx.jsx — Interp: 🟢 2, 🔴 5, 14 unch. · avg -0.9% · Bytecode: 🟢 7, 🔴 3, 11 unch. · avg +1.6%
modules.js — Interp: 9 unch. · avg -1.3% · Bytecode: 🔴 7, 2 unch. · avg -6.7%
numbers.js — Interp: 🔴 1, 10 unch. · avg -1.9% · Bytecode: 🟢 1, 🔴 4, 6 unch. · avg -3.5%
objects.js — Interp: 🔴 3, 4 unch. · avg -2.0% · Bytecode: 🟢 3, 4 unch. · avg +2.3%
promises.js — Interp: 🔴 1, 11 unch. · avg -0.9% · Bytecode: 🔴 1, 11 unch. · avg -0.2%
regexp.js — Interp: 11 unch. · avg -0.1% · Bytecode: 🟢 2, 🔴 2, 7 unch. · avg -0.9%
strings.js — Interp: 🟢 5, 14 unch. · avg +0.9% · Bytecode: 🔴 14, 5 unch. · avg -4.3%
tsv.js — Interp: 🟢 1, 🔴 1, 7 unch. · avg -0.1% · Bytecode: 🟢 7, 2 unch. · avg +5.5%
typed-arrays.js — Interp: 🟢 9, 🔴 5, 8 unch. · avg -0.6% · Bytecode: 🟢 8, 🔴 8, 6 unch. · avg -4.4%
uint8array-encoding.js — Interp: 🟢 6, 🔴 3, 9 unch. · avg +1.3% · Bytecode: 🟢 14, 🔴 1, 3 unch. · avg +49.1%
weak-collections.js — Interp: 🟢 9, 🔴 1, 5 unch. · avg +31.3% · Bytecode: 🟢 1, 🔴 13, 1 unch. · avg -28.4%
Measured on ubuntu-latest x64. Benchmark ranges compare cached main-branch min/max ops/sec with the PR run; overlapping ranges are treated as unchanged noise. Percentage deltas are secondary context.

Suite Timing
Measured on ubuntu-latest x64.
- yield-rejected-promise.js: the first `iter.next()` handler only covered the rejection path, so an engine regression that fulfilled instead would hang the test until the runner's timeout. Now both branches are registered: fulfillment fails the outer Promise with a clear error; rejection captures `firstRejection` and chains the next call as before.
- Goccia.MicrotaskQueue.pas: replace `TList.Delete(0)` (an O(n) shift per pop, O(n²) per drain) with a head index. `FHead` advances as tasks run, and the underlying list is compacted (Clear + reset) once the queue drains. `HasPending` and `ClearQueue` follow the head; the re-entrancy guarantee from the previous commit is preserved because nested drains see only tasks past `FHead`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
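The head-index scheme can be modeled in a few lines of JavaScript (a sketch of the algorithm, with assumed names; the actual implementation is the Pascal class in Goccia.MicrotaskQueue.pas):

```javascript
// Model of the head-index drain: tasks at indexes >= head are pending.
// Popping advances head instead of shifting the array, and the backing
// array is compacted only once the queue is fully drained.
class MicrotaskQueue {
  constructor() {
    this.tasks = [];
    this.head = 0;
  }
  enqueue(task) {
    this.tasks.push(task);
  }
  hasPending() {
    return this.head < this.tasks.length;
  }
  drain() {
    while (this.head < this.tasks.length) {
      const task = this.tasks[this.head];
      this.head += 1; // pop before invoking: re-entrant drains skip this task
      task();         // may enqueue more tasks or call drain() recursively
    }
    // Fully drained: compact the backing array and reset the head.
    this.tasks.length = 0;
    this.head = 0;
  }
}
```

Because the head moves past a task before the task runs, a nested `drain()` started from inside a handler sees only the remaining and newly-enqueued tasks, which is the re-entrancy guarantee the original `Delete(0)` fix established, now without the per-pop shift.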
Summary
- Fixes stack overflows from re-entrant microtask drains in `DrainQueue`: for example, an async generator that yields a rejected promise, where `iter.next()`'s rejection handler synchronously calls `iter.next()` again. The previous `TGocciaMicrotaskQueue.DrainQueue` kept already-processed tasks in the queue until the outermost drain's final `FQueue.Clear`, so any nested drain re-iterated from index 0 and re-ran every prior task. The fix pops each task with `FQueue.Delete(0)` before invoking it, so recursive drains only see remaining or newly-enqueued tasks.
- Updates `scripts/test262_syntax_filter.py` to mark `generators`, `async-generator`, `WeakMap`, `WeakSet`, and `symbols-as-weakmap-keys` as supported features (shipped via "Add generator and async generator support" #432 and "Implement WeakMap and WeakSet built-ins" #437) and removes the syntax patterns and `SKIP_PATH_SEGMENTS` entries that previously hid those tests from the suite.
- Adds regression tests for the crash patterns (`yield Promise.reject(…)`, `yield*` of an iterable whose `[Symbol.iterator]` returns null, a generator that throws synchronously, `iter.next()` from a rejection handler).
- test262 baseline (bytecode, ASI + `--compat-var`/`--compat-function`/`--unsafe-function-constructor`): all 465 previously-crashing async-generator tests now run to completion. 253 of them pass; the rest fail cleanly with assertion errors (mostly spec-compliance gaps in `AsyncGeneratorYield`'s rejection handling, tracked as separate work).

Test plan
- `./build.pas testrunner loader`: both binaries build clean.
- `./build/GocciaTestRunner tests --asi --unsafe-ffi --no-progress`: 8195 / 8200 pass (5 pre-existing FFI fixture failures, unchanged).
- The minimal reproduction (`async function* () { yield Promise.reject(0); yield 1; }` with `iter.next().catch(() => iter.next())`) no longer crashes the bytecode VM.

🤖 Generated with Claude Code
./build.pas testrunner loader— both binaries build clean./build/GocciaTestRunner tests --asi --unsafe-ffi --no-progress— 8195 / 8200 pass (5 pre-existing FFI fixture failures, unchanged)async function* () { yield Promise.reject(0); yield 1; }withiter.next().catch(() => iter.next())) no longer crashes the bytecode VM🤖 Generated with Claude Code