Large allocation count from VariableRateJump #969
Did you try …?
Good thought, I added that and edited the above, and the allocations unfortunately still occur. In the MWE above, I was already doing …. In my actual system I need saving of course, but it's weird that the allocations still happen with all saving disabled.
My current best guess is that the allocations are coming from this stacktrace in the profile:

but this isn't allocating when we're using normal …. I'll try to investigate more.
I located where the allocations are coming from. It seems like the 16B allocations are coming from calls that look like this, in any of the ODE solvers:
When the problem uses an ExtendedJumpArray, …. Going one step deeper, it seems like we are falling back to this definition:
e.g. if I drop a …. Doing a nice …
Is this something that could be fixed with more broadcast overloads? I'll see if I can figure this out and open a PR, but I would appreciate suggestions.
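For intuition, here is a minimal, self-contained sketch (all names hypothetical, Base Julia only) of the kind of fix under discussion: a wrapper array that hits a generic RMS-norm fallback through element-by-element iteration, next to a specialized dispatch that reduces over the underlying dense parts directly, which is the shape an `ODE_DEFAULT_NORM` overload for `ExtendedJumpArray` would take:

```julia
# Hypothetical stand-in for ExtendedJumpArray: an ODE part plus a jump part.
struct MiniExtendedArray{T} <: AbstractVector{T}
    u::Vector{T}       # "ODE" state
    jump_u::Vector{T}  # "jump" state
end
Base.size(a::MiniExtendedArray) = (length(a.u) + length(a.jump_u),)
Base.getindex(a::MiniExtendedArray, i::Int) =
    i <= length(a.u) ? a.u[i] : a.jump_u[i - length(a.u)]

# Generic fallback, shaped like DiffEq's default RMS error norm: it only
# knows how to iterate the wrapper element by element.
generic_norm(u, t) = sqrt(sum(abs2, u) / length(u))

# Specialized dispatch: bypass the wrapper and reduce over the two dense
# vectors directly.
function fast_norm(a::MiniExtendedArray, t)
    s = sum(abs2, a.u) + sum(abs2, a.jump_u)
    return sqrt(s / length(a))
end

a = MiniExtendedArray([3.0, 4.0], [0.0])
println(generic_norm(a, 0.0) ≈ fast_norm(a, 0.0))  # same value, cheaper path
```

Both methods compute the same RMS value; the specialized one just avoids going through the wrapper's generic indexing path.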
You could probably just define a dispatch of it in the JumpProcesses src/extended_jump_array.jl file:

```julia
@inline function DiffEqBase.ODE_DEFAULT_NORM(u::ExtendedJumpArray, t)
    ...
end
```
If you want to make a JumpProcesses PR with a custom dispatch, that would be great (please add some tests too in the appropriate test file!). Thanks!
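A stdlib-only sketch of the allocation check such a test could build on (`mynorm` and `allocs_per_call` are hypothetical stand-ins; a real JumpProcesses test would call `DiffEqBase.ODE_DEFAULT_NORM` on an `ExtendedJumpArray` instead):

```julia
# Hypothetical stand-in for the specialized norm under test.
mynorm(v) = sqrt(sum(abs2, v) / length(v))

# Measure inside a function: in global scope the untyped binding would box
# the Float64 return value, showing a spurious 16-byte allocation (the same
# size reported in this issue's profiles).
function allocs_per_call(v)
    mynorm(v)                    # warm up so compilation isn't measured
    return @allocated mynorm(v)
end

v = rand(100)
println("bytes allocated per call: ", allocs_per_call(v))
```

The warm-up call plus measuring inside a typed function is the standard way to get a stable zero-allocation assertion out of `@allocated`.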
I have a system that I would like to simulate, containing a core ODE model combined with discrete stochastic equations driven by VariableRateJumps, in addition to a few ContinuousCallbacks. I've been observing large numbers of allocations even when all saving is turned off (around 1M allocations) despite optimization, and weird changes like reducing the root finder tolerance can cause the allocation count to drop.
I've reduced this system down to the following minimum working (failing?) example which doesn't save anything and has callbacks and jumps that never actually occur (but should still trigger the callback checks):
I expect that the ContinuousCallback should be useless (because it never crosses zero), and similarly expect the JumpProblem jumps to be useless, since their rate is uniformly zero. However, these four problems show a dramatic difference in the number of allocations (and also differ by an order of magnitude in terms of runtime). In my real system, I'm seeing a solution take ~40 seconds vs ~4 seconds if I try to minimally remove VariableRateJumps (though this makes it a different system than I'd like to actually simulate).
Things I have tried

- Setting dtmax just to get enough iterations to show a difference in the MWE.
- --track-allocations and Profile.Allocs.@profile. The first doesn't show anything particularly interesting in the user code, and the new allocation profiler just returns Profile.Allocs.UnknownType for >98% of the allocations, with a stacktrace that I can't make heads or tails of.
- Looking at DiffEqBase/src/internal_falsi.jl and OrdinaryDiffEq/src/dense/generic_dense.jl. I've attached the profile generated by the above MWE here (profile.txt). I looked at the relevant lines, and these functions seem to be non-allocating.

Relevant environment