While there have certainly been similar feature requests, all of which were shot down (#135, #153, #170, etc.), I believe this one is materially different. It is also somewhat related to #660, but again distinct.
While hyperfine has rudimentary support for measuring detailed process times (user/system time) and displaying them alongside the wall-clock time in the CLI report, they are not the primary metric (the one whose mean and standard deviation are reported, and the one hyperfine feeds into its statistical analysis and, presumably, outlier detection).
I propose implementing a fixed set of additional metrics, any one of which could be selected instead of wall-clock time as the "benchmark target", i.e. the primary measurement that is fed into all of hyperfine's machinery.
Obvious candidates are total CPU time, user time, and system time (e.g. to measure multiprocessing overhead, I would run hyperfine with the parallelism factor as the parameter and total CPU time as the metric), as sketched below.
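For concreteness, here is what such an invocation might look like. The `--metric` option is purely hypothetical (it does not exist in hyperfine today, and the name is just a placeholder for whatever interface gets chosen); `--parameter-scan` and the `{threads}` substitution are existing hyperfine features, and the benchmarked command is a stand-in:

```sh
# Hypothetical: `--metric cpu-time` would select total CPU time as the
# primary measurement; this option does not exist in hyperfine today.
# `--parameter-scan` and `{threads}` are existing hyperfine features.
hyperfine --metric cpu-time \
    --parameter-scan threads 1 8 \
    './my-parallel-workload --jobs {threads}'
```

With wall-clock time as the metric, such a scan mostly shows the speedup from parallelism; with total CPU time, it would instead show how much extra work the added parallelism costs.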
Less obvious candidates are hardware performance counters as measured by perf: for instance, instruction count, cycle count, or perhaps arbitrary perf expressions. Naturally, these would be Linux-only. The instruction count in particular is the metric the Rust compiler team uses to benchmark rustc (ref.), so there is certainly value in adopting it or something similar.
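For reference, collecting these counters for a single run is already possible with perf alone (`perf stat` and the `instructions`/`cycles` event names are standard on Linux; the command under test is a placeholder). The proposal is essentially to fold measurements like this into hyperfine's repeated-run statistics:

```sh
# Count retired instructions and CPU cycles for one run of the command.
# `./my-benchmark` is a placeholder for the command under test.
perf stat -e instructions,cycles -- ./my-benchmark
```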