Some operations that you want to benchmark involve unavoidable overhead, such as extra method calls.
This is particularly problematic in nano-benchmarks, where a single method call of overhead can ruin your results, but it can also affect micro-benchmarks and other situations.
I ran into this when comparing the cost of `dynamic` variant casting to explicit variant casting. With overhead, `dynamic` is 2.75x faster, but without overhead, `dynamic` is 4.85x faster.
I suggest adding the option to remove overhead from results prior to scaling.
One way to achieve this would be to identify one of the benchmarked methods as a special baseline using an attribute and then to add a new normalized column to the output that is the mean minus the mean of the special baseline. The scaled column would then apply to the normalized results instead of the mean.
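To make the proposed arithmetic concrete, here is a minimal sketch of how the two baselines would combine. The benchmark names and timing values are hypothetical, purely for illustration; this is not BenchmarkDotNet code, just the column math the proposal describes.

```python
# Hypothetical mean timings in nanoseconds (made-up values for illustration).
means = {"Direct": 1.0, "NormalCast": 10.0, "DynamicCast": 30.0}

# Additive baseline: subtract its mean from every benchmark's mean.
overhead = means["Direct"]
normalized = {name: m - overhead for name, m in means.items()}

# Multiplicative baseline (the existing Baseline): divide the normalized
# values by the normalized mean of the multiplicative baseline.
base = normalized["NormalCast"]
scaled = {name: n / base for name, n in normalized.items()}

print(normalized)  # Direct becomes 0.0 after overhead removal
print(scaled)      # NormalCast becomes 1.0, the scaling reference
```

The key point is the order of operations: subtraction first, division second, so the ratio compares only the work being measured rather than work plus overhead.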
I propose calling this special baseline an additive baseline, in contrast to the normal baseline, which I'm referring to as a multiplicative baseline. However, it could also be named `Overhead`, `Normalizer`, or something similar.
The results, with `Direct` being an additive baseline and `NormalCast` being the multiplicative baseline (the existing baseline), might look something like this:
I like this idea! And it should be easy to implement.
The main point of discussion here is the API and how it will look. If we keep the "Scaled" title, it could confuse people who know that "Scaled" means "Scaled Mean". Now that we have column legends, we will be able to explain the new column, but I still think we should modify the title somehow.
@AndreyAkinshin @danielcrabtree I'd like to work on this issue.
My understanding is that this involves renaming the column `Scaled` to `Scaled Mean`.
Please let me know if I need to make any other changes.