Automatic interpolation to avoid global variable issues #65
Or, to summarize all of the above: "transforming […]"
I'm worried the payoff here wouldn't be worth the pain. This is what some older iterations of BenchmarkTools (Benchmarks.jl, BenchmarkTrackers.jl, etc.) tried to do, and I recall it being quite tricky to get right. Note also that the primary reason interpolation exists isn't necessarily performance, but transporting locally-scoped variables into benchmark scope (which is always top-level, on purpose):

```julia
julia> using BenchmarkTools

julia> for i in 1:3
           @benchmark println(i) evals=1 samples=1
       end
ERROR: UndefVarError: i not defined

julia> for i in 1:3
           @benchmark println($i) evals=1 samples=1
       end
1
1
2
2
3
3
```

From my experience teaching people BenchmarkTools (or correcting their usage), the interpolation feature isn't hard to use or understand: as soon as users know about it, they pick it up easily. The problem is that people just don't know about it, because they don't read the docs. A less heavy-handed solution might be to put a "Quick Start" example in the README and have it use interpolation everywhere. Or we could print a warning on package load (easily disabled via a `.juliarc.jl` flag).

Also, I actually use the interpolation feature pretty often to measure the effects of "toggling" the global-ness of specific variables. This is more useful for package authors than end-users, but if we did automatic interpolation, we'd need some per-variable way of toggling it (maybe just via `$`).
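The scope behavior in the REPL session above can be reproduced without BenchmarkTools at all: the benchmark expression is evaluated at top level, so a plain top-level `eval` (used here purely as an illustration, not how BenchmarkTools is implemented) hits the same `UndefVarError` unless the loop's value is interpolated into the expression:

```julia
# Illustration only: mimic how a benchmark expression is evaluated at top
# level, outside the local scope of the surrounding loop.
for i in 1:3
    # Without interpolation, the expression refers to a global `i`,
    # which does not exist at top level:
    err = try
        Core.eval(Main, :(println(i)))
        nothing
    catch e
        e
    end
    println(err isa UndefVarError ? "ERROR: UndefVarError" : "no error")

    # Interpolating splices the loop's current value directly into the
    # expression before it is evaluated:
    Core.eval(Main, :(println($i)))
end
```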
Ok, that all makes sense, thanks. I still want this feature for my own work, so I may try implementing it in a new package (which would use BenchmarkTools under the hood).
After approximately the zillionth time seeing people get confusing or incorrect benchmark results because they did `@benchmark f(x)` instead of `@benchmark f($x)`, I started wondering if maybe we could do something to avoid forcing this cognitive burden on users.
As inspiration, I've used the following macro in the unit tests to measure "real" allocations from a single execution of a function: `@wrappedallocs f(x)` turns `@allocated f(x)` into a call to a generated wrapper function, which does the same computation but measures the allocations inside the wrapped function instead of at global scope.
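The macro definition itself didn't survive in the text above; a sketch consistent with that description (my reconstruction, not necessarily the exact implementation) looks like:

```julia
# Sketch of a @wrappedallocs-style macro: rebuild the call `f(x)` inside a
# freshly generated function, so that @allocated measures the allocations of
# the call itself rather than allocations caused by global-scope access.
macro wrappedallocs(expr)
    # One fresh argument name per piece of the call, including the callee
    # itself (callable objects must be passed in as arguments too).
    argnames = [gensym() for _ in expr.args]
    quote
        function g($(argnames...))
            @allocated $(Expr(expr.head, argnames...))
        end
        $(Expr(:call, :g, [esc(a) for a in expr.args]...))
    end
end
```

For example, `@wrappedallocs sum(v)` expands to roughly `g(sum, v)` with `g(_f, _x) = @allocated _f(_x)`. Note that for `@wrappedallocs f(g(x))`, the inner call `g(x)` is evaluated at the call site and only its result is passed into the wrapper, which is exactly the limitation described below.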
It might be possible to do something like this for benchmarking. This particular implementation is wrong, because `@wrappedallocs f(g(x))` will only measure the allocations of `f()`, not `g()`, but a similar approach, involving walking the expression to collect all the symbols and then passing those symbols through a new outer function, might work. The result would be that
`@benchmark f(g(x))` would turn into something like a call to a generated outer function that takes the collected symbols as arguments, where `@_benchmark` does basically what the regular `@benchmark` does right now. Passing `_f` and `_g` as arguments is not necessary if they're regular functions, but it is necessary if they're arbitrary callable objects.

The question is: is this a good idea? It makes BenchmarkTools more complicated, and might involve too much magic. I also haven't thought through how to integrate this with the `setup` arguments. I'm mostly just interested in seeing if this is something that's worth spending time on.

One particular concern I have is that if the user tries to benchmark a big block of code, we may end up with the wrapper function taking a ridiculous number of arguments, which I suspect is likely to be handled badly by Julia. Fortunately, the macro can at least detect that case and demand that the user manually splice in their arguments.
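The symbol-collection step described above could be sketched as follows (hypothetical helper names, not part of BenchmarkTools; it naively treats every symbol in the expression as something to pass through):

```julia
# Walk an expression tree and collect every Symbol, i.e. the names (functions
# and variables alike) that an auto-interpolating macro would need to pass as
# arguments to a generated outer function.
function collect_symbols!(syms::Vector{Symbol}, ex)
    if ex isa Symbol
        push!(syms, ex)
    elseif ex isa Expr
        for arg in ex.args          # for a :call, args[1] is the callee
            collect_symbols!(syms, arg)
        end
    end
    return syms
end

collect_symbols(ex) = unique!(collect_symbols!(Symbol[], ex))
```

Here `collect_symbols(:(f(g(x))))` yields `[:f, :g, :x]`; a hypothetical auto-interpolating macro would then generate an outer function taking `_f`, `_g`, and `_x` as arguments and invoke `@_benchmark` inside it. The argument-count concern above applies directly: every distinct symbol in a large benchmarked block becomes one more wrapper argument.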