Avoid specializing all of ForwardDiff on every equation #37
ForwardDiff quite aggressively specializes most of its functions on the
concrete input function type. This gives a slight performance improvement,
but it also means that a significant chunk of code has to be compiled for
every call into ForwardDiff with a new function.

Previously, for every equation in a model we would call
`ForwardDiff.gradient` with the Julia function corresponding to that
equation. This would then compile the ForwardDiff functions for all of
these Julia functions.
Looking at the specializations generated by a model, we see:
which are all identical methods compiled for different equations.
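This kind of per-equation specialization can be reproduced on a small scale. The sketch below uses made-up equation functions and inspects the compiled instances with MethodAnalysis.jl (one possible tool for this, not necessarily how the listing above was produced):

```julia
using ForwardDiff, MethodAnalysis

# Two distinct equation functions, each with its own concrete type
eq1(x) = x[1]^2 + x[2]
eq2(x) = sin(x[1]) * x[2]

x = [1.0, 2.0]
ForwardDiff.gradient(eq1, x)   # compiles gradient(::typeof(eq1), ::Vector{Float64})
ForwardDiff.gradient(eq2, x)   # compiles gradient(::typeof(eq2), ::Vector{Float64})

# Lists one MethodInstance of `gradient` per equation function
foreach(println, methodinstances(ForwardDiff.gradient))
```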
In this PR, we instead "hide" the concrete functions for all equations
behind a common "wrapper function". This means that only one
specialization of the ForwardDiff functions gets compiled.
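A minimal sketch of the general idea (the `EquationWrapper` type here is hypothetical and not necessarily the exact mechanism used in this PR):

```julia
using ForwardDiff

# A single concrete wrapper type hides the concrete equation functions, so
# ForwardDiff only ever specializes on `EquationWrapper`, at the cost of a
# dynamic dispatch through the abstract `f::Function` field on every call.
struct EquationWrapper
    f::Function
end
(w::EquationWrapper)(x) = w.f(x)

eq1(x) = x[1]^2 + x[2]
eq2(x) = sin(x[1]) * x[2]

x = [1.0, 2.0]
# Both calls reuse the same specialization of ForwardDiff.gradient
ForwardDiff.gradient(EquationWrapper(eq1), x)
ForwardDiff.gradient(EquationWrapper(eq2), x)
```

That dynamic dispatch is what trades a small amount of runtime for a large reduction in compilation.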
Using the following benchmark script:
This PR has the following changes:
- `eval_RJ` first call (latency): 11.47s -> 4.97s
- `eval_RJ` runtime: 550μs -> 590μs

So there seems to be about a 10% runtime performance regression in the
`eval_RJ` call, but the latency is drastically reduced.
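For reference, first-call compilation latency and steady-state runtime of this kind can be measured separately with BenchmarkTools. This is a hypothetical sketch (not the benchmark script referenced above), reusing the illustrative `EquationWrapper` from the earlier sketch:

```julia
using BenchmarkTools, ForwardDiff

struct EquationWrapper
    f::Function
end
(w::EquationWrapper)(x) = w.f(x)

eq(x) = x[1]^2 + sin(x[2]) * x[3]
x = rand(3)

# First call: dominated by compilation (latency)
@time ForwardDiff.gradient(EquationWrapper(eq), x)

# Steady-state runtime, wrapped vs. direct
@btime ForwardDiff.gradient($(EquationWrapper(eq)), $x)
@btime ForwardDiff.gradient($eq, $x)
```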