Report bytecode timing phases separately #198
Conversation
- Track lex, parse, compile, and execute timings in runner results
- Update benchmark, test, and script-loader reports to show the new breakdown
- Include compile timing in JSON output and PR summary tables
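The bullets above imply a shared per-phase timing record. A minimal sketch of what that record likely looks like, based on the field names quoted in the review comments below; the record name `TTimingInfo` is a hypothetical stand-in, not confirmed by the PR:

```pascal
type
  // Sketch of the shared timing record; field names come from the review
  // comments, the type name is assumed for illustration.
  TTimingInfo = record
    LexTimeNanoseconds: Int64;
    ParseTimeNanoseconds: Int64;
    CompileTimeNanoseconds: Int64;  // new in this PR: bytecode compile phase
    ExecuteTimeNanoseconds: Int64;
    TotalTimeNanoseconds: Int64;
  end;
```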
📝 Walkthrough

Introduces fine-grained timing for lex, parse, compile, and execute phases across benchmark, test, script-loading, and reporting code; updates JSON output and console formatting; and adjusts the GitHub Actions PR comment to display the new per-phase timing metrics.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
ScriptLoader.dpr (1)
Lines 272-276: ⚠️ Potential issue | 🔴 Critical

Missing `CompileTimeNanoseconds` assignment in interpreted mode causes uninitialized data in JSON output.

The `ExecuteInterpreted` function assigns all timing fields except `CompileTimeNanoseconds`. In Free Pascal, local record fields are not automatically zero-initialized, so `Result.Timing.CompileTimeNanoseconds` will contain garbage from the stack. This will cause incorrect JSON output (e.g., `"compile_ms": <random large number>`) when running in interpreted mode.

🐛 Proposed fix: initialize `CompileTimeNanoseconds` to 0

```diff
  Result.ResultValue := ScriptResult.Result;
  Result.Timing.LexTimeNanoseconds := ScriptResult.LexTimeNanoseconds;
  Result.Timing.ParseTimeNanoseconds := ScriptResult.ParseTimeNanoseconds;
+ Result.Timing.CompileTimeNanoseconds := 0;
  Result.Timing.ExecuteTimeNanoseconds := ScriptResult.ExecuteTimeNanoseconds;
  Result.Timing.TotalTimeNanoseconds := ScriptResult.TotalTimeNanoseconds;
```
TestRunner.dpr (1)
Lines 144-152: ⚠️ Potential issue | 🔴 Critical

Initialize `CompileTimeNanoseconds` in `TGocciaEngine.Execute`.

The `Execute` function fails to initialize `CompileTimeNanoseconds` in the result record. It sets `LexTimeNanoseconds`, `ParseTimeNanoseconds`, `ExecuteTimeNanoseconds`, and `TotalTimeNanoseconds`, but leaves `CompileTimeNanoseconds` uninitialized. When `Result.Timing := EngineResult` copies the entire record in TestRunner.dpr (line 148), the garbage value propagates to the test result. The exception path avoids this by calling `MakeEmptyTestResult`, which properly initializes all fields to zero.

Add initialization after line 813 in units/Goccia.Engine.pas:

```diff
  Result.FileName := FFileName;
+ Result.CompileTimeNanoseconds := 0;
  StartTime := GetNanoseconds;
```
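A hedged alternative to zeroing fields one by one: clear the whole result record up front, so any field added later (such as `CompileTimeNanoseconds`) starts at zero automatically. `TEngineResult` is a stand-in name for the actual result record type; the `Default()` intrinsic requires FPC 3.x or Delphi XE7+:

```pascal
// Sketch only: zero the entire record before assigning measured phases.
// Fields not measured on a given path (e.g. compile time in interpreted
// mode) then stay at 0 without per-path assignments.
function Execute: TEngineResult;
begin
  Result := Default(TEngineResult);  // all timing fields start at 0
  // ... measure and assign only the phases that actually run ...
end;
```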
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 80e51cee-5f0a-4b3e-9706-c877fa1286f6
📒 Files selected for processing (7)
- .github/workflows/pr.yml
- BenchmarkRunner.dpr
- ScriptLoader.dpr
- TestRunner.dpr
- units/Goccia.Benchmark.Reporter.pas
- units/Goccia.Engine.pas
- units/Goccia.ScriptLoader.JSON.pas
Suite Timing
Measured on ubuntu-latest x64.
Benchmark Results (274 benchmarks)

Interpreted: 🟢 14 improved · 🔴 236 regressed · 24 unchanged · avg -4.1%

arraybuffer.js — Interp: 🟢 6, 🔴 6, 2 unch. · avg -1.1% · Bytecode: 🟢 2, 🔴 4, 8 unch. · avg -0.4%
arrays.js — Interp: 🔴 18, 1 unch. · avg -4.0% · Bytecode: 🟢 9, 🔴 4, 6 unch. · avg +1.5%
async-await.js — Interp: 🔴 6 · avg -7.5% · Bytecode: 🟢 2, 4 unch. · avg +0.5%
classes.js — Interp: 🔴 31 · avg -5.6% · Bytecode: 🔴 14, 17 unch. · avg -2.6%
closures.js — Interp: 🔴 11 · avg -5.2% · Bytecode: 🔴 3, 8 unch. · avg -1.5%
collections.js — Interp: 🔴 12 · avg -3.3% · Bytecode: 🔴 7, 5 unch. · avg -2.0%
destructuring.js — Interp: 🔴 21, 1 unch. · avg -5.0% · Bytecode: 🔴 9, 13 unch. · avg -1.5%
fibonacci.js — Interp: 🔴 8 · avg -3.8% · Bytecode: 🔴 6, 2 unch. · avg -1.9%
for-of.js — Interp: 🔴 7 · avg -5.7% · Bytecode: 🔴 7 · avg -2.9%
helpers/bench-module.js — Interp: 0 · Bytecode: 0
iterators.js — Interp: 🔴 19, 1 unch. · avg -4.1% · Bytecode: 🟢 8, 🔴 4, 8 unch. · avg +0.8%
json.js — Interp: 🔴 13, 7 unch. · avg -2.3% · Bytecode: 🔴 11, 9 unch. · avg -1.8%
jsx.jsx — Interp: 🔴 21 · avg -5.8% · Bytecode: 🔴 8, 13 unch. · avg -1.7%
modules.js — Interp: 🟢 2, 🔴 3, 4 unch. · avg -0.6% · Bytecode: 🟢 1, 🔴 3, 5 unch. · avg -0.2%
numbers.js — Interp: 🟢 1, 🔴 10 · avg -4.0% · Bytecode: 🟢 1, 🔴 4, 6 unch. · avg +0.6%
objects.js — Interp: 🔴 6, 1 unch. · avg -2.7% · Bytecode: 🔴 2, 5 unch. · avg -2.1%
promises.js — Interp: 🔴 10, 2 unch. · avg -4.3% · Bytecode: 🔴 6, 6 unch. · avg -1.9%
regexp.js — Interp: 🟢 3, 🔴 8 · avg -1.6% · Bytecode: 🔴 4, 7 unch. · avg -1.5%
strings.js — Interp: 🟢 1, 🔴 10 · avg -5.3% · Bytecode: 🔴 5, 6 unch. · avg -1.5%
typed-arrays.js — Interp: 🟢 1, 🔴 16, 5 unch. · avg -3.6% · Bytecode: 🟢 3, 🔴 13, 6 unch. · avg -0.1%
Measured on ubuntu-latest x64. Benchmark ranges compare cached main-branch min/max ops/sec with the PR run; overlapping ranges are treated as unchanged noise. Percentage deltas are secondary context.
- Set `CompileTimeNanoseconds` to `0` for interpreted execution paths
- Keep timing records consistent in `ScriptLoader` and engine results
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
ScriptLoader.dpr (1)
Lines 354-357: ⚠️ Potential issue | 🟡 Minor

Missing `CompileTimeNanoseconds := 0` initialization.

`ExecuteBytecodeFromFile` sets all timing fields except `CompileTimeNanoseconds`. In Object Pascal, record result fields are not auto-initialized, so this may produce garbage values in JSON output (`compile_ms`) or console display. Since precompiled `.gbc` files have no compilation phase at load time, this should be explicitly set to 0 for consistency with `ExecuteInterpreted`.

Proposed fix

```diff
  Result.Timing.LexTimeNanoseconds := 0;
  Result.Timing.ParseTimeNanoseconds := 0;
+ Result.Timing.CompileTimeNanoseconds := 0;
  Result.Timing.ExecuteTimeNanoseconds := ExecEnd - LoadEnd;
  Result.Timing.TotalTimeNanoseconds := ExecEnd - StartTime;
```
🧹 Nitpick comments (1)
ScriptLoader.dpr (1)
Lines 474-480: Consider accounting for `CompileTimeNanoseconds` in error-path timing.

The exception handler computes `ExecuteTimeNanoseconds` as Total - Lex - Parse, which would now include compile time if an error occurs during or after execution. For consistency with the new phase separation:

Proposed fix

```diff
  if Report.Timing.TotalTimeNanoseconds >
-    Report.Timing.LexTimeNanoseconds + Report.Timing.ParseTimeNanoseconds then
+    Report.Timing.LexTimeNanoseconds + Report.Timing.ParseTimeNanoseconds +
+    Report.Timing.CompileTimeNanoseconds then
     Report.Timing.ExecuteTimeNanoseconds :=
       Report.Timing.TotalTimeNanoseconds -
       Report.Timing.LexTimeNanoseconds -
-      Report.Timing.ParseTimeNanoseconds;
+      Report.Timing.ParseTimeNanoseconds -
+      Report.Timing.CompileTimeNanoseconds;
```
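The subtraction above only yields a correct remainder if each earlier phase was stamped before execution began. A sketch of the phase-stamping pattern this implies; `GetNanoseconds` is used in the review comments, while `Lexer`, `Parser`, `Compiler`, and `VM` are hypothetical names for illustration:

```pascal
// Hedged sketch: stamp a timestamp after each front-end phase so that
// execute time can be derived as the remainder on the error path.
StartTime := GetNanoseconds;
Tokens := Lexer.Tokenize(Source);
LexEnd := GetNanoseconds;        // lex     = LexEnd - StartTime
Ast := Parser.Parse(Tokens);
ParseEnd := GetNanoseconds;      // parse   = ParseEnd - LexEnd
Code := Compiler.Compile(Ast);
CompileEnd := GetNanoseconds;    // compile = CompileEnd - ParseEnd
VM.Run(Code);
ExecEnd := GetNanoseconds;       // execute = ExecEnd - CompileEnd
                                 // total   = ExecEnd - StartTime
```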
📒 Files selected for processing (2)
- ScriptLoader.dpr
- units/Goccia.Engine.pas
🚧 Files skipped from review as they are similar to previous changes (1)
- units/Goccia.Engine.pas
- zero `CompileTimeNanoseconds` in the load path when compilation is skipped
- subtract compile time when deriving execute time for error reports
Summary

- Added `CompileTimeNanoseconds` to shared timing records and surfaced it in JSON output and human-readable reports.

Testing

- `./build.pas testrunner && ./build/TestRunner tests`
- `./build.pas benchmarkrunner && ./build/BenchmarkRunner benchmarks --format=json` to verify the new timing fields.
- `./build.pas loader && ./build/ScriptLoader example.js --mode=bytecode` to confirm bytecode timing output includes lex/parse/compile/execute phases.
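The JSON output verified above might look like the following. Only `compile_ms` is named in the review comments; the other keys and all values are illustrative assumptions, not the confirmed schema:

```json
{
  "timing": {
    "lex_ms": 0.41,
    "parse_ms": 1.30,
    "compile_ms": 0.72,
    "execute_ms": 12.85,
    "total_ms": 15.28
  }
}
```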