Feature Request: @benchmark f() g()
#239
Comments
Take a look at https://juliaci.github.io/BenchmarkTools.jl/dev/manual/#Handling-benchmark-results, especially the part on comparing results.
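For concreteness, the workflow that manual section describes can be sketched like this (the functions being compared and the use of `median` as the estimator are illustrative choices, not from the thread):

```julia
using BenchmarkTools

# Benchmark each candidate separately; setup creates fresh input per sample.
old = @benchmark sum(x)      setup=(x = rand(1000))
new = @benchmark foldl(+, x) setup=(x = rand(1000))

# judge compares the target (first argument) against the baseline
# (second argument) and classifies the difference as an improvement,
# a regression, or invariant within a default tolerance.
judge(median(new), median(old))
```

Two separate `@benchmark` runs plus `judge` is the existing way to do a pairwise comparison; the feature request below is essentially asking for a one-line shortcut for this pattern.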
Perhaps this workflow is common enough to warrant direct support. Additionally, is there some way to take advantage of knowing that the primary goal of a benchmark is to compare two functions, for example by randomly alternating samples or blocks of samples?
I'd rather not overcomplicate the interface.
Hm, not currently. I don't know whether that would help or hurt; the branch predictor would learn that pattern.
Perhaps a documentation solution, then? When I opened this issue I had already loosely read / closely skimmed the (mercifully short!) manual cover to cover, but I found the BenchmarkGroups and judge sections a bit intimidating and didn't put together how to apply them to this workflow.
Improving the docs would be fantastic! If you have the time, maybe you could take a stab at it?
Maybe there's some inspiration we can take from …
I think #256 would be a reasonable solution |
viraltux commented:
And I concur. When benchmarking and optimizing a function, I often define function_old() and function_new() and check whether changes to function_new() have the runtime impact I expect. Ideally, a benchmarking package lets me perform that comparison correctly, easily, quickly, and precisely. A well-crafted varargs @benchmark supporting @benchmark function_old() function_new() would be ideal.
This extension also has the potential to help users like me avoid common benchmark-comparison pitfalls, like those discussed in the linked Discourse thread.
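A rough sketch of what such a comparison shortcut could look like on top of the existing judge/median machinery (the name compare and its signature are hypothetical, not an actual or proposed BenchmarkTools API):

```julia
using BenchmarkTools

# Hypothetical helper, NOT a BenchmarkTools API: benchmark two
# zero-argument functions and judge the first against the second.
function compare(f, g; estimator = median)
    # $-interpolation benchmarks the call itself rather than the
    # cost of looking up f and g as non-constant bindings.
    trial_f = @benchmark $f()
    trial_g = @benchmark $g()
    judge(estimator(trial_f), estimator(trial_g))
end

# Usage, e.g. comparing two summation strategies:
# compare(() -> sum(rand(1000)), () -> foldl(+, rand(1000)))
```

A varargs `@benchmark` macro could lower to essentially this, though interleaving or randomizing the two runs (as discussed above) would require deeper changes to the sampling loop.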