I was wondering about the implementation of vectorised solutions - is it intended to solve functions within the same ODE simultaneously? Or does it allow for vectorising solutions of the same ODE with different input parameters, a la heyoka's batch mode?
I need to solve many ODEs with different input parameters very quickly as part of some discrete-time simulations (between 100 and 10000 solutions every time step for ~75000 time steps), but I can only use a single CPU core to solve these (owing to other levels of parallelisation running each of these discrete-time simulations with different inputs). I have already noticed an improvement using Ascent vs odeint, which is very nice, but I think vectorising solutions over inputs, rather than solving in a for loop as I am doing, would be faster. Do you know if Ascent can help with this?
Cheers,
Nick
As long as your set (possibly decoupled) or system of ODEs is evaluated with the same independent variable (denoted as time in Ascent), you can gain vectorization performance.
Using asc::Param as shown in this example (modular spring damper) allows you to solve disparate ODEs while sharing underlying vectors, a la heyoka's batch mode. The integrators for this approach are currently limited and live in integrators_direct. However, I plan to merge integrators and integrators_direct so that the classes will handle both use cases.
Note that asc::Param is only required if you are not describing your ODEs in state space syntax. If you can describe your ODEs in state space syntax, then you should get the best performance by expanding your ODE set across your input space.
Also, do you have to solve your systems with a fixed step size? And what numerical integration method are you currently using? Depending on your application, you may be able to gain significant performance by looking at different integration methods (especially predictor-correctors if you are working with continuous systems).
If you can present a toy problem with what you are trying to do I can transform it into a vectorized example for you.