feat(reexecution/c): add metervm metrics #4369
Conversation
Pull Request Overview
Adds metervm metrics to the re-execution test to provide a more precise performance assessment by tracking the time spent in block parsing/verification/acceptance. Changes the metric unit from mGas/s to ms/gGas for better metric comparison.
- Integrates metervm to track block processing phases (a sketch of the wrapping idea follows this list)
- Changes metric reporting from throughput to time-per-unit format
- Moves metrics handling to a dedicated file for better organization
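For readers unfamiliar with metervm: it wraps a VM and times each block processing phase. Below is a minimal sketch of that wrapping idea only; it is not avalanchego's actual metervm API, and all type and field names here are illustrative.

```go
package reexec

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// blockVM is a hypothetical stand-in for the VM interface whose phases we time.
type blockVM interface {
	ParseBlock(bytes []byte) (block, error)
}

// block is a hypothetical block exposing the verify/accept phases.
type block interface {
	Verify() error
	Accept() error
}

// meteredVM wraps a blockVM and accumulates the time spent parsing blocks,
// mirroring what a metering wrapper like metervm does for each phase.
type meteredVM struct {
	blockVM
	parseTime prometheus.Counter // cumulative nanoseconds spent in ParseBlock
}

func (m *meteredVM) ParseBlock(bytes []byte) (block, error) {
	start := time.Now()
	blk, err := m.blockVM.ParseBlock(bytes)
	m.parseTime.Add(float64(time.Since(start))) // Duration is int64 nanoseconds
	return blk, err
}
```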
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| tests/reexecute/c/vm_reexecute_test.go | Adds metervm integration and removes old metrics functions |
| tests/reexecute/c/metrics.go | New file containing metrics collection and reporting logic (see the hedged sketch below) |
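Based on the file description above, here is a hedged sketch of what a metrics.go reporting helper might look like. The registry, the metric-name filter, the nanosecond convention, and gGasExecuted are all assumptions, not the PR's actual code.

```go
package reexec

import (
	"strings"
	"testing"

	"github.com/prometheus/client_golang/prometheus"
)

// reportPhaseMetrics is a hypothetical helper: it gathers the metering
// wrapper's timers from a prometheus registry and reports each phase as
// ms/gGas via the benchmark. It assumes timer values are cumulative
// nanoseconds exposed as gauges, which may differ from the PR's code.
func reportPhaseMetrics(b *testing.B, reg *prometheus.Registry, gGasExecuted float64) {
	families, err := reg.Gather()
	if err != nil {
		b.Fatal(err)
	}
	for _, mf := range families {
		if !strings.Contains(mf.GetName(), "metervm") {
			continue
		}
		for _, m := range mf.GetMetric() {
			nanos := m.GetGauge().GetValue()
			ms := nanos / 1e6
			b.ReportMetric(ms/gGasExecuted, mf.GetName()+"_ms/gGas")
		}
	}
}
```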
Can we use mgas/s and ggas/s rather than camel case? That's how I've typically seen it, and it follows the current use of
Note: need to ensure that the newly added metrics show up as desired in the benchmarks page.
This reverts commit 613219c.
Why this should be merged
As mentioned in #4282, it would be great if we could have a more precise assessment for the re-execution test rather than printing the average mGas/s for the entire benchmark. The starting point for increasing the precision of the re-execution test is to record the time spent in block parsing/verification/acceptance (whose sum should equal the total re-execution benchmark time).
How this works
Integrates metervm to record the time spent in block parsing/verification/acceptance and reports these timings in ms/gGas to allow for a more straightforward comparison between metrics.
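To make the unit change concrete: since 1 gGas = 1000 mGas, a throughput of r mGas/s corresponds to 1e6/r ms/gGas (e.g. 100 mGas/s is 10,000 ms/gGas). A tiny illustrative helper, not code from the PR:

```go
// msPerGGas converts a throughput in mGas/s to time-per-unit in ms/gGas.
// Processing 1 gGas (= 1000 mGas) at r mGas/s takes 1000/r seconds,
// i.e. 1e6/r milliseconds.
func msPerGGas(mGasPerSec float64) float64 {
	return 1e6 / mGasPerSec
}
```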
How this was tested
CI + ran the re-execution test locally to verify that the sum of block parsing/verification/acceptance time is approximately equal to the total re-execution benchmark time.
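A rough sketch of what that local sanity check could look like; the function and its 5% tolerance are assumptions, not the PR's actual test code.

```go
import (
	"math"
	"testing"
)

// checkPhaseSum is a hypothetical sanity check: the per-phase times should
// sum to roughly the total benchmark time; a large gap would indicate
// unaccounted-for work between phases.
func checkPhaseSum(b *testing.B, parseMS, verifyMS, acceptMS, totalMS float64) {
	phaseSum := parseMS + verifyMS + acceptMS
	if diff := math.Abs(phaseSum-totalMS) / totalMS; diff > 0.05 {
		b.Logf("phase sum %.0f ms differs from total %.0f ms by %.1f%%",
			phaseSum, totalMS, diff*100)
	}
}
```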
Need to be documented in RELEASES.md?
No