Automatically select a backend #164
DifferentiationInterfaceTest.jl already provides the necessary utilities for users to compare and benchmark backends.
Selection only happens once, before the optimization starts, so for big optimizations the time cost is negligible.
My reasoning was: it's very costly and it's a two-liner, so we'd better let the user do it themselves. However, I guess we could expose an interface of the form:

```julia
function fastest_backend(backends, scenario)
    results = benchmark_differentiation(backends, scenario)
    best_trial = argmin(trial -> trial.time, results)
    return best_trial.backend  # currently doesn't work but only needs minor modifications
end
```

Is that similar to what you had in mind?
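For concreteness, a hypothetical usage sketch of the `fastest_backend` helper above. The backend constructors come from ADTypes.jl; `GradientScenario` is the constructor mentioned later in this thread, and the exact DIT scenario API may differ:

```julia
using DifferentiationInterface, DifferentiationInterfaceTest
using ADTypes: AutoForwardDiff, AutoReverseDiff
using ForwardDiff, ReverseDiff

f(x) = sum(abs2, x)
x = rand(100)

# candidate backend objects from ADTypes.jl
backends = [AutoForwardDiff(), AutoReverseDiff()]

# scenario describing the operator we care about (constructor name taken from this thread)
scenario = GradientScenario(f; x=x)

backend = fastest_backend(backends, scenario)  # hypothetical helper sketched above
grad = gradient(f, backend, x)
```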
As for a heuristic to select backends, I think benchmarking is indeed more reasonable. We do have internal traits to check whether mutation is supported, though; we could expose them.
As for listing the available backends, I thought about it some more and it's not obvious what the right method is. I can check whether ForwardDiff.jl is loaded, but then what is my "prototypical" ForwardDiff backend object: how many chunks does it have? The same goes for ReverseDiff and its compiled tape.
I see how this would be useful for
Both Guillaume and I favor explicitness in DI and try to avoid macros and generated code. The following would allow you to take the human reading a table of benchmarks out of the loop:

```julia
backend = autobackend(GradientScenario(f; x=x))
for _ in 1:100000
    grad = gradient(f, backend, x)
    # ...
end
```

This
I see I wrote too slowly. 😄
I personally would be OK with such a high-level function being "suboptimal", as long as advanced users can manually request benchmarking of several ForwardDiff backends with different chunk sizes.
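As an illustration, a hedged sketch of what benchmarking several ForwardDiff backends with different chunk sizes might look like, assuming ADTypes' `AutoForwardDiff(; chunksize)` constructor and the hypothetical `fastest_backend` helper from above:

```julia
using ADTypes: AutoForwardDiff

# same AD package, several candidate backends differing only in chunk size
forwarddiff_candidates = [AutoForwardDiff(; chunksize=c) for c in (1, 4, 8, 16)]

# hypothetical: hand them to the selection helper alongside any other backends
# best = fastest_backend(forwarddiff_candidates, scenario)
```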
I'm sure it would be helpful for new AD users, but what I wish for is a
I think implementing this would make more sense for downstream packages that take DI as a dependency, since this automatic selection is heavily influenced by the problem type you have.
Adrian and I are both against magic tricks like memoization, so if we do offer this functionality it will be a separate choice function, not a backend object with a hidden mechanism. But at the moment it doesn't fit well within our benchmark framework, so I would leave it to downstream users.
Closing this issue since the latest version of DIT (since #257) includes the specification of benchmarking results in its public API. Users are free to write three lines of code to select the best backend according to their own criteria.
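A rough sketch of what those few lines could look like, assuming the benchmark results can be converted to a table with `backend` and `time` fields (the column names and the table conversion are assumptions, not the confirmed DIT API):

```julia
using DifferentiationInterfaceTest
using DataFrames  # assuming the benchmark results are table-compatible

results = DataFrame(benchmark_differentiation(backends, scenarios))  # one row per backend/scenario/operator
best = argmin(row -> row.time, eachrow(results))                     # or rank by allocations, calls, ...
chosen_backend = best.backend
```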
It would be great to have an API to automatically select a backend. It could be a function that tries each backend and returns the fastest workable one.
One possible use is in https://github.com/SciML/SciMLSensitivity.jl/blob/master/src/concrete_solve.jl
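A hedged sketch of the kind of helper being requested here, written against DI's `gradient` entry point; the timing is deliberately naive (the first call includes compilation) and any backend that errors is simply skipped:

```julia
using DifferentiationInterface

function fastest_workable_backend(backends, f, x)
    best, best_time = nothing, Inf
    for backend in backends
        t = try
            @elapsed gradient(f, backend, x)   # naive timing: includes compilation on the first call
        catch
            continue                            # backend can't handle this problem: skip it
        end
        if t < best_time
            best, best_time = backend, t
        end
    end
    return best
end
```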