Add instruction counter / interrupt handler with configurable limit#349
Conversation
- Add `--max-instructions` CLI support and runtime enforcement
- Surface `InstructionLimitError` in JSON output and docs
- Cover interpreter, VM, and queue checkpoints
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
✅ Files skipped from review due to trivial changes (1)

📝 Walkthrough

Adds a per-execution instruction-count limit.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant CLI as CLI/Options
    participant Runner as ScriptLoader/Runner
    participant Limit as InstructionLimit
    participant Evaluator as Evaluator/VM
    CLI->>Runner: --max-instructions = N
    Runner->>Limit: StartInstructionLimit(N)
    Note over Limit: Set GMaxInstructions = N\nGInstructionCount = 0
    rect rgba(100,150,200,0.5)
        Runner->>Evaluator: Execute script/module
        loop execution steps
            Evaluator->>Limit: IncrementInstructionCounter()
            Note over Limit: GInstructionCount++
            Evaluator->>Limit: CheckInstructionLimit()
            alt GInstructionCount >= GMaxInstructions
                Limit-->>Evaluator: Raise TGocciaInstructionLimitError
                Evaluator-->>Runner: Propagate error / abort
            end
        end
    end
    Runner->>Limit: ClearInstructionLimit()
    Note over Limit: Reset GMaxInstructions = 0\nGInstructionCount = 0
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

🚥 Pre-merge checks: ✅ Passed checks (3 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
…rupt-counter

# Conflicts:
#	AGENTS.md
#	source/shared/CLI.Options.pas
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@source/shared/CLI.Options.pas`:
- Around line 472-473: Update the help text for the FMaxInstructions option to
be mode-neutral: locate the TGocciaIntegerOption instance named FMaxInstructions
and change its description from "Maximum bytecode instructions before aborting"
to a neutral phrase such as "Maximum instructions before aborting" or "Maximum
instruction steps before aborting" so it correctly applies to both interpreted
and bytecode modes.
In `@source/units/Goccia.InstructionLimit.pas`:
- Around line 36-45: IncrementInstructionCounter currently increments
GInstructionCount even when no limit is set and CheckInstructionLimit uses >=
which aborts at the limit instead of when it's exceeded; to fix, make
IncrementInstructionCounter no-op (skip Inc(GInstructionCount)) when
GMaxInstructions <= 0 to avoid disabled-mode overhead, and change the check
inside CheckInstructionLimit to use > (i.e., raise TGocciaInstructionLimitError
only when GInstructionCount > GMaxInstructions) referencing the
IncrementInstructionCounter and CheckInstructionLimit procedures and the
GInstructionCount/GMaxInstructions globals and TGocciaInstructionLimitError for
locating the code.
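
The fix suggested above could be sketched as follows. This is a minimal sketch, not the actual unit: the procedure and identifier names come from the review comment, but the declarations, base class, and error message are assumptions.

```pascal
uses
  SysUtils;

type
  // Error type named in the review; its actual base class is an assumption.
  TGocciaInstructionLimitError = class(Exception);

threadvar
  GMaxInstructions: Int64;   // 0 (the default) means unlimited
  GInstructionCount: Int64;

procedure IncrementInstructionCounter; inline;
begin
  // No-op while the limit is disabled: skips the threadvar write on the hot path.
  if GMaxInstructions > 0 then
    Inc(GInstructionCount);
end;

procedure CheckInstructionLimit;
begin
  // Use > rather than >= so a limit of N permits exactly N instructions
  // before aborting, instead of aborting on the Nth one.
  if (GMaxInstructions > 0) and (GInstructionCount > GMaxInstructions) then
    raise TGocciaInstructionLimitError.CreateFmt(
      'Instruction limit of %d exceeded', [GMaxInstructions]);
end;
```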
In `@source/units/Goccia.MicrotaskQueue.pas`:
- Around line 100-102: TGocciaMicrotaskQueue.DrainQueue currently calls
IncrementInstructionCounter from a shared path used by both the interpreter and
bytecode executor, causing microtask dispatch to consume an extra synthetic
instruction and breaking --max-instructions accuracy; remove or gate the
IncrementInstructionCounter call from the shared DrainQueue path and instead
perform interpreter-only instruction bookkeeping inside the interpreter-specific
code path (i.e., move or wrap IncrementInstructionCounter so it is called only
when executing in the interpreter mode, not when DrainQueue is used by the
bytecode path—refer to TGocciaMicrotaskQueue.DrainQueue,
IncrementInstructionCounter, CheckExecutionTimeout, CheckInstructionLimit and
the bytecode execution flow in Goccia.Engine.pas/Goccia.Evaluator.pas to locate
the proper interpreter-only spot).
In `@source/units/Goccia.VM.pas`:
- Around line 1695-1696: The instruction-count check is done inconsistently: the
counter is incremented at opcode fetch but CheckInstructionLimit is called
inside some handlers (OP_CONSTRUCT, OP_CALL, OP_CALL_METHOD), causing off-by-one
behavior; move enforcement to the dispatch boundary so the limit is checked just
before dispatching the next opcode (where the counter is incremented) and remove
the mid-instruction CheckInstructionLimit calls from those handlers (or adjust
the counter/check contract so handlers cannot abort the already-counted opcode).
Update the code paths that call CheckInstructionLimit (references:
CheckExecutionTimeout, CheckInstructionLimit, OP_CONSTRUCT, OP_CALL,
OP_CALL_METHOD) so the hard limit is evaluated only at the single centralized
dispatch point.
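
Centralizing enforcement at the dispatch boundary, as requested above, might look like the following fragment. This is a hypothetical shape of the loop in `Goccia.VM.pas`: the `OP_*` names and `Check*` calls are cited in the review, but the field names (`FIP`, `FCodeLength`, `FetchOpcode`, `Handle*`) are placeholders.

```pascal
// Count and check once per opcode at a single centralized point,
// instead of sprinkling CheckInstructionLimit inside call handlers.
while FIP < FCodeLength do
begin
  IncrementInstructionCounter;
  CheckInstructionLimit;   // may raise TGocciaInstructionLimitError
  CheckExecutionTimeout;

  case FetchOpcode of
    OP_CALL:        HandleCall;        // no mid-instruction limit check
    OP_CALL_METHOD: HandleCallMethod;
    OP_CONSTRUCT:   HandleConstruct;
    // ... remaining opcodes ...
  end;
end;
```

With the checks hoisted here, no handler can abort an already-counted opcode, which removes the off-by-one behavior the review describes.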
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b07bb807-8a38-4e12-beeb-23fd095de530
📒 Files selected for processing (14)
- .github/workflows/pr.yml
- AGENTS.md
- docs/build-system.md
- docs/bytecode-vm.md
- source/app/GocciaBenchmarkRunner.dpr
- source/app/GocciaScriptLoader.dpr
- source/app/GocciaTestRunner.dpr
- source/shared/CLI.Options.pas
- source/units/Goccia.Evaluator.pas
- source/units/Goccia.InstructionLimit.pas
- source/units/Goccia.MicrotaskQueue.pas
- source/units/Goccia.ScriptLoader.JSON.pas
- source/units/Goccia.VM.pas
- source/units/Goccia.Values.Primitives.pas
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
source/shared/CLI.Options.pas (1)
131-145: ⚠️ Potential issue | 🟠 Major

Use an Int64-backed option type for `--max-instructions`.

`FMaxInstructions` is wired as `TGocciaIntegerOption` (32-bit), but the runtime limit API/counter uses `Int64`. This creates an avoidable range mismatch for large limits.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@source/shared/CLI.Options.pas` around lines 131 - 145, FMaxInstructions is declared and exposed via the MaxInstructions property as TGocciaIntegerOption (32-bit) but the runtime limit API expects Int64; change the field and property to an Int64-backed option type (replace TGocciaIntegerOption with the Int64 variant, e.g., TGocciaInt64Option) for FMaxInstructions and the MaxInstructions property, update any constructor initialization, destruction and Options() registration code that creates or references FMaxInstructions to use the new TGocciaInt64Option API, and search for usages of FMaxInstructions/MaxInstructions elsewhere to ensure code that reads the option now expects Int64 values.
♻️ Duplicate comments (1)
source/shared/CLI.Options.pas (1)
474-475: ⚠️ Potential issue | 🟡 Minor

Make the help text mode-neutral.

The description says “Maximum bytecode instructions…”, but this limit is enforced in interpreted mode too. Please use neutral wording.

Suggested wording:

```diff
 FMaxInstructions := TGocciaIntegerOption.Create('max-instructions',
-  'Maximum bytecode instructions before aborting', 'Engine');
+  'Maximum instructions before aborting', 'Engine');
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@source/shared/CLI.Options.pas` around lines 474 - 475, The help text for the option created by FMaxInstructions (TGocciaIntegerOption.Create('max-instructions', ...)) is currently specific to “bytecode”; change the description string to neutral wording such as "Maximum instructions before aborting" or "Maximum execution instructions before aborting" so it applies to both interpreted and bytecode modes—update the second argument passed to TGocciaIntegerOption.Create accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@source/shared/CLI.Options.pas`:
- Around line 131-145: FMaxInstructions is declared and exposed via the
MaxInstructions property as TGocciaIntegerOption (32-bit) but the runtime limit
API expects Int64; change the field and property to an Int64-backed option type
(replace TGocciaIntegerOption with the Int64 variant, e.g., TGocciaInt64Option)
for FMaxInstructions and the MaxInstructions property, update any constructor
initialization, destruction and Options() registration code that creates or
references FMaxInstructions to use the new TGocciaInt64Option API, and search
for usages of FMaxInstructions/MaxInstructions elsewhere to ensure code that
reads the option now expects Int64 values.
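
A minimal sketch of what such an Int64-backed option could look like is below. The `TGocciaInt64Option` name is the review's own suggestion; the base class and method signatures are assumptions, since the actual option API in `CLI.Options.pas` is not shown here.

```pascal
uses
  SysUtils;

type
  // Hypothetical Int64-backed option, parallel to TGocciaIntegerOption.
  TGocciaInt64Option = class
  private
    FValue: Int64;
  public
    function Parse(const S: string): Boolean;
    property Value: Int64 read FValue;
  end;

function TGocciaInt64Option.Parse(const S: string): Boolean;
begin
  // TryStrToInt64 covers the full 64-bit range, unlike TryStrToInt,
  // so limits above 2^31-1 (~2.1B instructions) parse correctly.
  Result := TryStrToInt64(S, FValue);
end;
```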
---
Duplicate comments:
In `@source/shared/CLI.Options.pas`:
- Around line 474-475: The help text for the option created by FMaxInstructions
(TGocciaIntegerOption.Create('max-instructions', ...)) is currently specific to
“bytecode”; change the description string to neutral wording such as "Maximum
instructions before aborting" or "Maximum execution instructions before
aborting" so it applies to both interpreted and bytecode modes—update the second
argument passed to TGocciaIntegerOption.Create accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ce73c1ba-c15b-45b1-9597-59fd30933de8
📒 Files selected for processing (6)
- .github/workflows/pr.yml
- AGENTS.md
- source/app/GocciaBenchmarkRunner.dpr
- source/app/GocciaScriptLoader.dpr
- source/app/GocciaTestRunner.dpr
- source/shared/CLI.Options.pas
✅ Files skipped from review due to trivial changes (3)
- AGENTS.md
- source/app/GocciaScriptLoader.dpr
- source/app/GocciaBenchmarkRunner.dpr
🚧 Files skipped from review as they are similar to previous changes (2)
- source/app/GocciaTestRunner.dpr
- .github/workflows/pr.yml
Suite Timing
Measured on ubuntu-latest x64.

Benchmark Results — 386 benchmarks

Interpreted: 🟢 340 improved · 🔴 14 regressed · 32 unchanged · avg +9.1%

arraybuffer.js — Interp: 🟢 12, 2 unch. · avg +8.4% · Bytecode: 🟢 6, 🔴 1, 7 unch. · avg +2.0%
arrays.js — Interp: 🟢 19 · avg +12.0% · Bytecode: 🟢 11, 8 unch. · avg +3.0%
async-await.js — Interp: 🟢 5, 1 unch. · avg +11.8% · Bytecode: 🟢 4, 2 unch. · avg +4.7%
base64.js — Interp: 🟢 9, 1 unch. · avg +15.8% · Bytecode: 🟢 9, 1 unch. · avg +9.0%
classes.js — Interp: 🟢 24, 7 unch. · avg +8.1% · Bytecode: 🟢 6, 🔴 2, 23 unch. · avg +0.7%
closures.js — Interp: 🟢 11 · avg +12.3% · Bytecode: 🟢 1, 🔴 4, 6 unch. · avg -2.4%
collections.js — Interp: 🟢 12 · avg +10.5% · Bytecode: 🟢 10, 2 unch. · avg +6.5%
csv.js — Interp: 🟢 12, 1 unch. · avg +10.6% · Bytecode: 🟢 12, 1 unch. · avg +7.6%
destructuring.js — Interp: 🟢 22 · avg +11.5% · Bytecode: 🟢 4, 🔴 2, 16 unch. · avg +1.6%
fibonacci.js — Interp: 🟢 8 · avg +11.8% · Bytecode: 🔴 5, 3 unch. · avg -5.5%
float16array.js — Interp: 🟢 23, 🔴 4, 5 unch. · avg +4.9% · Bytecode: 🟢 7, 🔴 15, 10 unch. · avg -8.2%
for-of.js — Interp: 🟢 7 · avg +11.5% · Bytecode: 🟢 1, 🔴 4, 2 unch. · avg -4.7%
helpers/bench-module.js — Interp: 0 · Bytecode: 0
iterators.js — Interp: 🟢 38, 4 unch. · avg +8.2% · Bytecode: 🟢 7, 🔴 3, 32 unch. · avg +0.5%
json.js — Interp: 🟢 20 · avg +12.6% · Bytecode: 🟢 18, 2 unch. · avg +13.0%
jsx.jsx — Interp: 🟢 21 · avg +10.8% · Bytecode: 🟢 8, 13 unch. · avg +3.0%
modules.js — Interp: 🟢 9 · avg +10.7% · Bytecode: 🟢 6, 🔴 3 · avg +0.8%
numbers.js — Interp: 🟢 11 · avg +10.0% · Bytecode: 🟢 3, 🔴 1, 7 unch. · avg +1.8%
objects.js — Interp: 🟢 7 · avg +8.8% · Bytecode: 🟢 6, 1 unch. · avg +6.3%
promises.js — Interp: 🟢 8, 4 unch. · avg +6.0% · Bytecode: 🟢 5, 7 unch. · avg +4.1%
regexp.js — Interp: 🟢 11 · avg +8.4% · Bytecode: 🟢 9, 2 unch. · avg +7.7%
strings.js — Interp: 🟢 16, 🔴 1, 2 unch. · avg +7.6% · Bytecode: 🟢 6, 🔴 7, 6 unch. · avg -11.7%
tsv.js — Interp: 🟢 9 · avg +9.1% · Bytecode: 🟢 7, 2 unch. · avg +7.0%
typed-arrays.js — Interp: 🟢 15, 🔴 5, 2 unch. · avg +3.4% · Bytecode: 🟢 2, 🔴 16, 4 unch. · avg -6.4%
uint8array-encoding.js — Interp: 🟢 11, 🔴 4, 3 unch. · avg +7.3% · Bytecode: 🟢 13, 🔴 1, 4 unch. · avg +26.6%
Measured on ubuntu-latest x64. Benchmark ranges compare cached main-branch min/max ops/sec with the PR run; overlapping ranges are treated as unchanged noise. Percentage deltas are secondary context.
- Make --max-instructions help text mode-neutral
- Guard IncrementInstructionCounter to skip threadvar access when disabled
- Remove synthetic increment from shared MicrotaskQueue.DrainQueue path
- Move CheckInstructionLimit to dispatch loop top for exact enforcement, remove redundant mid-instruction checks from backward-jump/call handlers

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add TGocciaInt64Option to support values beyond the 32-bit Integer range (~2.1B). The instruction limit runtime API already uses Int64; this aligns the CLI parsing layer so users can specify limits above 2^31-1. Also document that zero (the default) means unlimited. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The instruction limit and timeout are cross-cutting engine features that work in both interpreter and bytecode modes. Move the primary documentation to embedding.md under a new "Execution Limits" section covering both mechanisms with Pascal API examples. The bytecode-vm.md section now cross-references embedding.md. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/embedding.md`:
- Line 642: Update the documentation sentence about disabling the instruction
counter to soften the "zero overhead" claim: replace the phrase that says "no
threadvar access, no comparisons, zero overhead" with wording that explains
disabling (GMaxInstructions <= 0) skips counter increments/writes but still
performs guard reads/checks (e.g., limit and counter are still read on the hot
path), and mention TGocciaInstructionLimitError remains raised when the limit is
reached; reference IncrementInstructionCounter, GMaxInstructions and the fact it
skips Inc to avoid threadvar writes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 41f4c7bb-01a7-418f-96a3-d975c6bd582f
📒 Files selected for processing (2)
- docs/bytecode-vm.md
- docs/embedding.md
🚧 Files skipped from review as they are similar to previous changes (1)
- docs/bytecode-vm.md
The embedding doc should describe the Pascal API; CLI specifics belong in build-system.md. Replace per-mechanism --flag lines with a single cross-reference to the build system docs. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
♻️ Duplicate comments (1)
docs/embedding.md (1)
640-640: ⚠️ Potential issue | 🟡 Minor

Soften the “zero overhead” claim when disabled.

The current sentence overstates behavior. Disabling skips increment writes on the hot path, but guard checks still read/evaluate limit state, so “zero overhead” is inaccurate.

Based on learnings: `IncrementInstructionCounter` skips `Inc` when `GMaxInstructions <= 0`, but the limit-check path still exists.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/embedding.md` at line 640, The sentence in docs/embedding.md overstates behavior by claiming "zero overhead" when the instruction counter is disabled; update the wording around TGocciaInstructionLimitError to state that setting GMaxInstructions to zero disables increments (IncrementInstructionCounter skips Inc) but does not eliminate the limit-check path — guard reads/evaluations still occur, so describe it as "minimal" or "no incremental writes on the hot path" instead of "zero overhead" and mention the remaining read/check cost.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@docs/embedding.md`:
- Line 640: The sentence in docs/embedding.md overstates behavior by claiming
"zero overhead" when the instruction counter is disabled; update the wording
around TGocciaInstructionLimitError to state that setting GMaxInstructions to
zero disables increments (IncrementInstructionCounter skips Inc) but does not
eliminate the limit-check path — guard reads/evaluations still occur, so
describe it as "minimal" or "no incremental writes on the hot path" instead of
"zero overhead" and mention the remaining read/check cost.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ab7a2006-1f1d-42cc-9fa3-f5808dab7c14
📒 Files selected for processing (2)
- docs/bytecode-vm.md
- docs/embedding.md
✅ Files skipped from review due to trivial changes (1)
- docs/bytecode-vm.md
When disabled, IncrementInstructionCounter and CheckInstructionLimit still read the GMaxInstructions threadvar to short-circuit. Describe the actual cost: guard read only, no increments or comparisons. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Describe the dispatch-loop mechanism without referencing the CLI flag; the CLI is a separate layer. Cross-reference embedding.md for the API. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary

- `--max-instructions=<N>` CLI flag that caps the number of instructions a script may execute before aborting with a `TGocciaInstructionLimitError`
- New unit (`Goccia.InstructionLimit.pas`) following the same pattern as `Goccia.Timeout.pas`, with zero overhead when the limit is not set

Test plan

- `./format.pas --check` passes
- `pr.yml` validates `--max-instructions` in both interpreted and bytecode modes (exit code, JSON error type, error message)

🤖 Generated with Claude Code