This repository has been archived by the owner on May 14, 2020. It is now read-only.

Callback Interface #10

Closed · ChrisRackauckas opened this issue Nov 12, 2016 · 60 comments

@ChrisRackauckas (Member)

This is to discuss the callback interface separate from the clutter of #5. In there we started discussing a little about coming up with a common callback interface, but didn't get very far.

@mauro3 mentioned this issue Nov 12, 2016
@ChrisRackauckas (Member Author) commented Nov 12, 2016

For reference, here was @mauro3 's suggestion:

callback_step(t1, t2, u1, u2) = ... # simplest callback, no dense output needed
callback_dense(u_dense::Callable) = ... # callback when there is dense output
callback_free_form(alg::Alg,...) # anything else, dispatch on the Algorithm could be useful

I suggested a DSL for the most advanced features. Let's find a middle ground that combines them.

@ChrisRackauckas (Member Author)

Here are the necessary details on my callbacks. My standard callback looks like:

@inline function ODE_DEFAULT_CALLBACK(alg,f,t,u,k,tprev,uprev,kprev,ts,timeseries,ks,dtprev,dt,
  saveat,cursaveat,saveiter,iter,save_timeseries,timeseries_steps,uEltype,ksEltype,
  dense,kshortsize,issimple_dense,fsal,fsalfirst,cache,calck,T,Ts)
  @ode_savevalues
  reeval_fsal = false
  cursaveat,saveiter,dt,t,T,reeval_fsal
end

Essentially, it passes a bunch of state needed to run the saving routine. The saving routine implements the full common interface, so it is quite complex:

@def ode_savevalues begin
  if !isempty(saveat) # Perform saveat
    while cursaveat <= length(saveat) && saveat[cursaveat]<= t
      if saveat[cursaveat]<t # If we already saved at the point, ignore it
        saveiter += 1
        curt = saveat[cursaveat]
        ode_addsteps!(k,tprev,uprev,dt,alg,f)
        Θ = (curt - tprev)/dt
        val = ode_interpolant(Θ,dt,uprev,u,kprev,k,alg)
        copyat_or_push!(ts,saveiter,curt)
        copyat_or_push!(timeseries,saveiter,val)
      end
      cursaveat+=1
    end
  end
  if save_timeseries && iter%timeseries_steps==0
    saveiter += 1
    copyat_or_push!(timeseries,saveiter,u)
    copyat_or_push!(ts,saveiter,t)
    if dense
      copyat_or_push!(ks,saveiter,k)
    end
  end
  if !issimple_dense
    resize!(k,kshortsize)
  end
end

The other thing to note is that the interpolations are decomposed since there's no good way to handle the case where they hold immutable values, other than putting them in a big mutable wrapper type which would slow down the routine pretty considerably. So a lot of those values are passed for a reason.

The saving is done in the callback routine because as noted before, event handling has to hijack the saving routine in order to be both correct and fast. So simply removing it is not an option.

I have tried to think about something like "if the method signature is callback_step(t1, t2, u1, u2), then save outside of the callback", but then what is the callback useful for? You wouldn't want to (or shouldn't want to) do events like this. Even in the form callback_dense(u_dense::Callable) (which, because of the decomposed nature of the interpolations, would be tough for me, and for ODEInterface/Sundials, to conform to in the first place), you won't be able to (or shouldn't want to) do events. So what is the short form useful for? Printing intermediate values? Maybe I'm missing something.

@ChrisRackauckas (Member Author)

One user brought this up as "logging". So I guess some people are interested in printing intermediate values. Should the common interface just be enhanced for standard logging procedures?

  • verbose: Print the values at each verbose_steps steps.
  • verbose_steps: How many steps between each print.

If this was included, would there be a need for anything other than the more specific callbacks (for dealing with event handling)?

@ChrisRackauckas (Member Author)

@tshort might be interested in helping us figure this out.

Maybe the answer could be that callbacks have to be algorithm-specific, but event interfaces and logging don't have to be. Here's what I've been doing. If you check the progress monitoring

https://juliadiffeq.github.io/DiffEqDocs.jl/latest/basics/common_solver_opts.html#Progress-Monitoring-1

you'll see there's a new option: progress_message. The reason for this originally was that Juno added the ability to have a message with the progress bar (when you scroll over it), so I had it show dt, t, and u by default. Now with Juno what we're doing is making this fall back to ProgressMeter.jl when Juno isn't present, displaying the progress meter and message in the console. Since this can show the current times and values, is a simple callback like callback_step(t1, t2, u1, u2) then even necessary?

If that's not needed, then the only other common thing which callbacks are used for is event handling. But we can make that simply have the options

  • event_f : the event function. Positive function with event at 0.
  • apply_event!: what to do when an event occurs
  • rootfind_event_loc: whether to rootfind the event time
  • interp_points: how many interpolated points to check the event trigger
  • dt_safety : factor to knock down dt after an event occurs. Useful for discontinuities

For the functions, we should take something like:

  • event_f(t,u,du,event_cache) : Is there another piece of information which people use to place events? Note that call overloading can be used to put user caches in here (like pre-allocations). Event cache would be some mutable type <: EventCache which a user can make. Explained below.

  • apply_event!(t,u,du,event_cache,cache=[]) : This is used to mutate u (and du for DAEs?). event_cache is mutated in the event_f. cache is optionally an array with pointers to the caches in the ODE/SDE/DAE solver. This allows one to resize the problem on events (see the cell example from here:

https://juliadiffeq.github.io/DiffEqDocs.jl/latest/features/callback_functions.html#Example-2:-Growing-Cell-Population-1

).
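
To make the proposal concrete, here is a hypothetical sketch of how these two functions and the keyword options above might look for a bouncing ball. The problem setup (prob, alg) and the exact keyword names are assumptions taken from the lists above, not a finalized API:

# Event when the height u[1] hits zero; positive away from the event.
event_f(t,u,du,event_cache) = u[1]

# On the event, reverse (and damp) the velocity u[2].
function apply_event!(t,u,du,event_cache,cache=[])
  u[2] = -0.9*u[2]
end

# Passing everything through the proposed common solver options:
sol = solve(prob, alg;
            event_f = event_f,
            apply_event! = apply_event!,
            rootfind_event_loc = true,
            interp_points = 10,
            dt_safety = 0.1)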

With full-featured monitoring and events as part of the common interface, the vast majority of callback usage is then done in the common interface. However, there will still be the option to pass a callback function callback = ... which can be algorithm specific to do whatever crazy specific thing you want with a given algorithm.

Opinions on this?

@ChrisRackauckas (Member Author) commented Dec 9, 2016

I did find a problem which needed more than just the events and progress monitoring. However, this would easily be doable if we standardized just the beginning arguments to the callbacks.

SciML/DifferentialEquations.jl#100

What about something like:

callback_free_form(alg::Alg,t,u,du,args...)

where the extra args can be algorithm-specific?

@ChrisRackauckas (Member Author)

I have found that being able to control the cache in callbacks is necessary in many cases, like hybrid dynamical systems, so I'd suggest adding it next:

SciML/DifferentialEquations.jl#117

callback_free_form(alg::Alg,t,u,du,cache,args...)

@mauro3 (Contributor) commented Dec 15, 2016

Could you use a ParameterizedFunction-field for a cache instead?

@ChrisRackauckas (Member Author)

No, let me describe a little what the cache is. Take a look at RK4:

https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/integrators/fixed_timestep_integrators.jl#L115

The cache is found here:

https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/integrators/fixed_timestep_integrators.jl#L128

cache = (u,tmp,k₁,k₂,k₃,k₄,kprev,uprev)

It's a collection which holds the pointers to every temporary array in the algorithm. Now, this is very non-standard, so its presence should be justified. The reason it's there is that it allows you to "fully mutate". Two examples. The first is an ODE which changes size:

https://juliadiffeq.github.io/DiffEqDocs.jl/latest/features/callback_functions.html#Example-2:-Growing-Cell-Population-1

Notice that in order to have the right size on everything, you have to go through all of the cache arrays and resize all of them in addition to the problem, which is why the cache is necessary here. The other example is a hybrid dynamical system, where one makes a type in which the continuous variables are indexed and the discontinuous variables are plain fields of the type. In order to fully update so that the discontinuous changes are seen in all parts of the method, you have to mutate every cache variable as well.

So it is very powerful, but maybe it should be a cache object to handle even more cases nicely via dispatch. Then resize!(cache) would be easier than writing the loop. More importantly, we can upgrade it to handle resizing Jacobians. This is more difficult: if you resize a problem how do you resize the cache for the Jacobian? It's not the same as an array, and it's a difficult calculation when the Jacobian is actually for an N-dimensional array, so it would be nice if resize!(cache) handled that as well.
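
For illustration, a rough sketch of what this means in practice, assuming cache is the tuple shown above and new_len is the new problem size (both names are just placeholders):

# What a resizing event currently has to do by hand: walk the cache tuple
# and resize every temporary array to match the new problem size.
for c in cache
  resize!(c, new_len)
end

# What a dedicated cache object could offer instead (proposed, not yet real):
# resize!(cache, new_len)   # would also know how to resize Jacobian storage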

@mauro3 (Contributor) commented Dec 15, 2016

I see. Note that this cache is essentially the state variable in PR49's iteration setup.

@ChrisRackauckas (Member Author)

Something that came up in this thread was the possibility for multiple events:

SciML/OrdinaryDiffEq.jl#16

The idea is just to use an array of functions for event_f and apply_event!, and apply the events sequentially. The implementations that I have all seem to make this pretty easy, so there's no problem with implementation or efficiency.

@ChrisRackauckas (Member Author) commented Dec 19, 2016

Intermediate Summary

Since I just posted this on Discourse and want to see if I can get @tshort 's opinion (I know you need events in DASKR.jl) and @ronisbr 's opinion, here's a summary of where the proposal is at now.

Events are split from callbacks. The event framework is:

  • event_f : the event function. Positive function with event at 0.
  • apply_event!: what to do when an event occurs
  • rootfind_event_loc: whether to rootfind the event time
  • interp_points: how many interpolated points to check the event trigger
  • dt_safety : factor to knock down dt after an event occurs. Useful for discontinuities
  • save_positions: tuple of 2 booleans for saving before/after event application

The function signatures are:

event_f(t,u,du,event_cache,f)
apply_event!(t,u,du,event_cache,f,cache=nothing)

cache is described in a previous comment:

#10 (comment)

event_cache lets you calculate something in event_f and use it in the apply_event! (is this necessary?). Additionally, these can be set as tuples for the case where you have multiple events, and those events will be applied sequentially (it would be a tuple because a vector of functions isn't strictly typed, while a tuple of functions is).
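
For what it's worth, here's a hypothetical sketch of the event_cache idea under this proposal. The EventCache abstract type is from the earlier comment; the concrete type, fields, and conditions below are made up for illustration:

type MyEventCache <: EventCache
  first_condition_met::Bool
end

# Record in event_f which condition triggered...
function event_f(t,u,du,event_cache::MyEventCache,f)
  event_cache.first_condition_met = u[1] > u[2]
  u[1]*u[2]   # event when this crosses zero
end

# ...and use that information in apply_event! to decide what to do.
function apply_event!(t,u,du,event_cache::MyEventCache,f,cache=nothing)
  if event_cache.first_condition_met
    u[1] = zero(eltype(u))
  else
    u[2] = zero(eltype(u))
  end
end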

Callbacks would have a generic part and an algorithm-specific part. The interface would be:

callback(alg::Alg,t,u,du,f,cache,args...)

This is simple enough to handle most cases where events won't apply, and if anything really specific is needed, the solvers can supply a bunch of args... (documented per solver) from which users can inject sufficiently arbitrary code to their heart's content. However, for most purposes users should be using events, since they're at a higher level and so, once all of the solvers implement events, they can act pretty uniformly across the ecosystem, whereas I'm convinced callbacks never can.

A few points of contention remain:

  1. What if the user wants to mutate something like t, u, or du, and those values are immutable? t is usually immutable, so this presents a problem. We could make it part of the interface that solvers have to accept mutation of these values, and if they are immutables, wrap them in a size-0 array. Or we could have a separate setup where t is always returned and, if u is immutable, u and du are returned as well. This is more similar to how the ODE solvers themselves work, but the caveat is that it increases user burden. I'm not quite sure about the performance implications either: using immutables directly in the solver is much faster than size-0 arrays, but callbacks are called only at the end of each iteration, so it may not matter as much.

  2. What about solvers which cannot implement parts of this interface? I know that something like Sundials.jl will have trouble properly implementing cache because you'd actually have to change the internal caches of Sundials to make this work properly (is this even possible?). But I wouldn't want to leave cache out of the interface because it's required for non-standard Julia-defined arrays and for changing problem sizes. These things will probably never be supported by Sundials.jl for this reason, but is it fine to still have it as part of the common interface? Just make a note that cache is only usable in native Julia solvers, and even then only in those which implement the argument properly? Or does it make sense as a keyword argument? Requiring that users take kwargs... and parse them is not nice, though.

  3. If the functions are in there, is there a need for the du? The reason why I ask is because most methods will not have evaluated this derivative yet so it would incur an extra function evaluation (on any non-FSAL algorithm), and the user can explicitly calculate it themselves using f. Normally this won't be a problem because it can then be used in the next interval, but that's not the case if there's a discontinuity. If there's a discontinuity, then that first du calculation would have to be thrown away, so it would incur an extra function cost without a gain if du is not used in the event handling. However, this does hurt the FSAL case where this du is already calculated so it would be free.

@ronisbr commented Dec 19, 2016

Hi @ChrisRackauckas,

The way you proposed the events seems to handle every single case I can think of in my satellite simulator.

event_cache lets you calculate something in event_f and use it in the apply_event! (is this necessary?).

Yes, this is necessary for me. There are two events such that, if one happens under a set of conditions, the other must not be executed. Hence, in event_f I can check whether these conditions are met and send the information to apply_event! so that the second is not executed. As of now, I am using global variables to do that.

Unfortunately, I do not have the right expertise to comment on your points. But I have one question about callback_free_form: if something happens inside this callback and we need to execute a new iteration of the solver using a very, very small dt, then the user will need to do all the work like saving variables, calling the solver, etc., right?

@ChrisRackauckas (Member Author)

If something happens inside this callback and we need to execute a new iteration of the solver using a very very small dt, then the user will need to do all the work like saving variables, calling the solver, etc. right?

If you have an event with dt_safety being small, that will force the dt after the event to be very small. For callbacks, I can't think of a good general way to handle it because callbacks are at such a low level.

@ronisbr commented Dec 19, 2016

If you have an event with dt_safety being small, that will force the dt after the event to be very small. For callbacks, I can't think of a good general way to handle it because callbacks are at such a low level.

Oh I see! If I need something like this, then I can just use events and everything will be fine 👍

@ChrisRackauckas (Member Author)

Oh I see! If I need something like this, then I can just use events and everything will be fine 👍

Yes, the goal is for most things to be covered by events since it's a high-level API and so it's easy to standardize among all of the different solvers. Callbacks are injecting almost arbitrary code, so you'll have to handle the different ways the different solvers interpolate, save, etc. Thus I am hoping the events API is broad enough to cover almost all use cases, with callbacks being the fallback if you have very specific needs.

Actually, to make it slightly more generic, I propose one more argument to go with the events:

save_positions (needs a better name?). A tuple for saving before and after event application. Default is (true,true) which means you do the sequence:

  • Save
  • Apply Event
  • Save

which is standard (two saves are necessary to handle a discontinuity). However, you may want to use an event to just make a change after each iteration. In that case, you can use the trivial event event_f(args...) = true with (false,true), and then apply_event!(...) will make a change at the end of each iteration, and if that timestep is saved, the change will be saved as well. One use for this is in uncertainty quantification, where a random number is added at the end of each timestep.
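
For example, the per-step perturbation just mentioned might look like this. This is only a sketch under the current proposal's signatures, and the noise magnitude is arbitrary:

event_f(t,u,du,event_cache,f) = true   # trivial event: "fires" every step

# With save_positions = (false,true), only the post-event value is saved.
function apply_event!(t,u,du,event_cache,f,cache=nothing)
  for i in eachindex(u)
    u[i] += 1e-6*randn()   # add a small random kick at the end of each step
  end
end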

Are both booleans needed, or just the one for the pre-event save?

The confusing thing though is that we want to override this for chained events, because we never want two consecutive saves:

  • Save
  • Apply Event 1
  • Save
  • Save
  • Apply Event 2
  • Save

It would have to be up to the solvers to catch the double saving.

@jtravs commented Dec 19, 2016

It would be useful to be able to modify f in the event handlers (or is this already possible?). I need to be able to adjust the system being solved at events too.

@ChrisRackauckas (Member Author) commented Dec 19, 2016

It would be useful to be able to modify f in the event handlers (or is this already possible?). I need to be able to adjust the system being solved at events too.

Yes, this indeed sounds very useful. If we extended the signatures once more:

event_f(t,u,du,event_cache,funcs...)
apply_event!(t,u,du,event_cache,cache,funcs...)

(Note I use funcs... because this catches all of the available functions. An ODE would only have one function here, so you could replace that with f, but an SDE is defined by two functions, so you'd need f,g here, etc.) This would allow you to affect the fields of a function. This means you could adjust the parameters in a ParameterizedFunction, or build a "multi-f" like:

# Declare the type to overload (mutable so the state can be changed in events)
type MyFunction
  state::Int
end
f = MyFunction(0)

# Make the overload
function (p::MyFunction)(t,u,du)
  if p.state == 0
    # one set of ODEs
  elseif p.state == 1
    # another set of ODEs
  else
    # a third set of ODEs
  end
end

Then in apply_event!(t,u,du,event_cache,cache,f) (for an ODE) you could do f.state = 2 to change the state (or parameters, etc.).

Does that sound good? Or is there a case this can't handle?

If the functions are in there, is there a need for the du? The reason why I ask is because most methods will not have evaluated this derivative yet so it would incur an extra function evaluation (on any non-FSAL algorithm), and the user can explicitly calculate it themselves using f. Normally this won't be a problem because it can then be used in the next interval, but that's not the case if there's a discontinuity. If there's a discontinuity, then that first du calculation would have to be thrown away, so it would incur an extra function cost without a gain if du is not used in the event handling. However, this does hurt the FSAL case where this du is already calculated so it would be free.

@tshort commented Dec 19, 2016 via email

@tshort commented Dec 20, 2016 via email

@ChrisRackauckas (Member Author)

A "pre-draft" would be OrdinaryDiffEq.jl's callbacks/events, but I will update it to this proposal soon (lets say by the end of the week?). It's missing a few things and is in a macro form (so it's hard to map in its current form over to other packages), but it lets you generally explore what's possible with the framework.

That said, I'll update OrdinaryDiffEq.jl, and I am solving the remaining issues as follows:

  1. Immutables will be passed by reference to apply_event! so that t and u (when immutable) can be mutated and saved. This means they have to be accessed via t[]. Benchmarks show that this is easy to use without a problem, and it means we don't have to specify returns.

https://discourse.julialang.org/t/way-to-have-a-function-mutate-an-immutable-without-much-performance-loss/1050

  2. du should be passed in when possible because it's definitely more performant to use it, since the solvers can provide it by interpolation. Every explicit solver can do this without a performance hit (though interpolations of derivatives need to be implemented!), and implicit solvers can easily get this as well (since they can be written as DAE solvers, so this would show up anyway). But if they can't, that mixes with...

  3. If solvers cannot implement part of the interface, they just pass nothing. I suspect some will have to pass cache=nothing, and others, at least early on, will have du=nothing. This will throw an error if a user tries to use it. This is nicer than cache=[] because with that, for c in cache would silently do nothing, and I think it's better to error here. The nicest thing would be a specific error message when you use the value, like "This solver doesn't implement usage of cache", but I am not sure how to do that, so getting a generic error instead is a happy medium.

I'll wait and see if @BenLauwens has any comments and then get this up and running.

@ronisbr commented Dec 21, 2016

Hi @ChrisRackauckas

I did not understand something (maybe it is already explained, but I was not able to infer it). For another application, I am using a simulator totally written in Julia that sends TCP packets to be shown in a GUI coded in C++/Qt. As of now, I am using my own integrator, but I will be very happy if I can use this package. However, I have a very important requirement: I need to be able to change the tstops outside the event/callback and/or change the integration interval if I am using a fixed-step algorithm.

It works like this:

  1. The user starts the simulation, which integrates the equation with dt = 20s and sends TCP packets every integration step.
  2. If the user wants to decrease the simulation execution time, then they can ask the core to send TCP packets every n steps, where n ∈ {1, 2, 3, 4, 5, ...}.
  3. The user can also ask the core to change the internal integration step to, for example, dt = 40s.

In order to use this package, I am thinking of using an event to send the TCP packets. However, the tstops must be changed dynamically according to the user preferences (the commands are received by Julia in another thread). Do you think this is feasible under this new implementation of callbacks and events?

@ChrisRackauckas (Member Author) commented Dec 21, 2016

Do you think this is feasible under this new implementation of callbacks and events?

You beat me to it by just a few seconds! I actually just had this thought, albeit for very different reasons. My hybrid discrete+continuous simulations of biological organisms need this in order to properly do Gillespie SSA / tau-leaping. Doing this in the current OrdinaryDiffEq.jl callbacks can be done by changing the variable Ts; this is actually how the (undocumented) @ode_terminate macro for terminating an ODE upon an event works.

So this new draft is missing two important things:

  1. Using events to terminate ODEs
  2. Mutating tstops

Let me explain a little bit about current implementations to set the grounds. The way that tstops is actually implemented in every solver which supports it is something like this:

@inbounds for T in tstops # Actually, internally I name it Ts, but it's essentially tstops
  while t < T
    # inner loop
  end
end

(Interop packages like Sundials have issues, more with how to pass this information along as tstops rather than as saveat-style behavior. This can be fixed in the future, but it means interop packages will be late in getting proper tstops behavior.)

What this means is that adding new stopping locations can be done by just mutating tstops, EXCEPT within the current interval. If you have tstops = [0;20;40;60;80] and you are between 0 and 60, adding 70 to tstops (in its sorted position) will work as you'd expect, but if you're past 60 it won't actually make a difference, because what's actually used at that point is T=80, so you would need to set T=70 instead. This also makes termination a little nasty:

@def ode_terminate begin
  T = t
  while length(tstops)>1
    pop!(tstops)
  end
end

is how I made it work.


So any ideas on a better way to handle this? If you test:

tstops = [20;40]
for i in eachindex(tstops)
  if tstops[i] == 40
    push!(tstops,60)
  end
  @show i
end

then this can't handle the growing tstops. So the naive plan of:

@inbounds for i in eachindex(tstops) # Actually, internally I name it Ts, but it's essentially tstops
  while t < tstops[i]
    # inner loop
  end
end

doesn't work. The other thing we want to handle is adding values to tstops which are out of order. If you set up your ODE on (0.0,Inf) and then want to add discrete times, then you start with tstops == [0.0,Inf], and push!(tstops,20.0) makes tstops == [0.0,Inf,20.0], so it will never actually stop at 20. Maybe we should use a special data structure which stays sorted here, but then we would still need some way to make sure that the number of iterations through tstops automatically adjusts when the size of the iterator changes.

Edit

Here is a minimal example we want to solve. We want something that's kind of like this:

tstops = [20;40]
for i in eachindex(tstops)
  if tstops[i] == 20
    push!(tstops,70)
  end
  if tstops[i] == 40
    push!(tstops,60)
  end
  if tstops[i] == 70
     tstops[end] = 100
  end
  @show tstops[i]
end

but actually shows 20, 40, 60, 70, 100 (in that order). If we can get this to work, then the rest follows.

Edit 2

I also posted it to SO to see what that produces.


Edit 3:

Even with all of this side business of a more robust implementation for tstops mutation, what's a good way to do termination? And what's a good API for it? Is:

function apply_event!(t,u,du,event_cache,cache,funcs...)
  @ode_terminate t tstops
end

(for whatever implementation we find works) a good API? Or is there a good way to do this without resorting to a macro within the event, say some kind of boolean switch the user sets?

@ChrisRackauckas (Member Author)

Hey guys, I am very happy to announce a solution here! Here's what I've got. It solves all of the standing issues, I think it's safe to say it's "infinitely customizable", and (this is what took a long time) there are no performance regressions from implementing it. Here's the gist; it will be fleshed out more in documentation.

Everything is folded into a type: integrator. The integrator at least has the fields:

Integrator must have:

  • t
  • u
  • userdata
  • opts
  • alg
  • f
  • sol

opts is the common solver options in a type form (with the standard names), sol is the solution it's building, userdata is a user-supplied type for caching purposes (there is a case where call overloaded types will not work for this). An integrator can also have:

  • tprev
  • uprev
  • a current interpolation function

Essentially, the integrator type holds the current state and is building its sol type. The interpolation function is current: integrator(t) uses the data from the current interval to either interpolate or extrapolate values. This is great for intermediate/interactive plotting, has no memory overhead in standard methods, is already implemented in most C/Fortran libraries (so we just have to bind it), and will make it easy to build good implicit methods (since to do that, you need to extrapolate initial conditions); if you need the one with full history, just use integrator.sol(t). Lastly, the integrator interface can be extended declaratively by using dispatches on functions. The standard parts of the interface which I have set out to implement are here:

https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/master/src/integrator_interface.jl#L1

Most of what these do should be understood from the name. Once you have that, two things happen. First of all, putting an iterator interface on it is trivial (see the following for a sneak peek:

SciML/OrdinaryDiffEq.jl#19

). This gives a first form of a "callback" by allowing you to solve step by step, and you can see from the tests some sample usage: https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/test/iterator_tests.jl
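
For a taste of what that looks like, here's a rough sketch based on the linked tests, assuming an init-style entry point (names and details may differ from the final API):

integrator = init(prob, Tsit5(); dt = 1//2^4)
for i in integrator
  # each pass through the loop takes one step; everything on the integrator
  # (t, u, opts, ...) can be inspected or mutated between steps
  @show integrator.t
end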

The interactivity is cool and fun to play with, but having everything in this "god-type" is super powerful. It means that the rest of the interface can be built using dispatch on top of this type. This means that the same tools for writing the internal methods are now the same tools in the iterators and then, by extension, the same tools in the callbacks. (It also means that it's trivial to build a "multi-algorithm" which switches between stiff and nonstiff solvers... 👍 you know where my next weekend is going to be.)

So what are the callbacks? Well, using the iterator interface directly can be bad in some cases. For example, FSAL algorithms need to re-evaluate the beginning function only if u is modified, and it would be a performance hazard to always assume it is modified. In addition, for doing event handling, you need to rootfind to events. While this is possible by calling u_modified!(integrator) to tell it to re-evaluate FSAL (or reinstantiate the cache for something like Sundials), and then writing your own rootfinding routine, callbacks are a simple way to always "get this right".

By making the callbacks very free, there's no reason to then distinguish them from event handling. Instead what we have are conditional callbacks. They are held in a type which is defined here:

https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/master/src/callbacks.jl

It goes like this:

  • condition(t,u,integrator) designates an event at 0 (and should be continuous for rootfinding)
  • affect!(integrator) is the affect one does to the integrator
  • rootfind is a boolean for whether to rootfind the event location
  • interp_points is the number of interpolation points to check for events within a step. Think of checking whether something like cos(t)+0.8 hits zero.
  • save_positions is where to save. Tuple for before and after.

Here's how you map previous ideas to this. A "callback" is simply condition(t,u,integrator)=0 where you then write the affect!(integrator) function, with rootfind=false and interp_points=0. You can choose whether to save before or after your callback with this setup.

Event handling is a continuous condition which does affect!, normally has rootfind=true with some interp_points, and save_positions=(true,true) (you need to save once before and once after to handle the discontinuity).

So yay, those are now the same thing, and they don't need to be split because everything about the method can be changed in affect! since the integrator knows all. But we get even more added bonuses. First of all, since the interface is all on using the integrator, this callback interface has the same API as the iterator interface, meaning that it will be easy to just know both. In addition, these callbacks are a type, which means you can build them and compose them. I plan on making DiffEqCallbacks.jl which has a bunch of standard callbacks.

If you want to give this all a try, this issue should be really simple to implement: SciML/DifferentialEquations.jl#118 . condition(t,u,integrator)=false, interp_points=0, rootfind=false, save_positions=(true,false) (a small detail: saving turns off when callbacks are active, since otherwise you'd save just before/after events). Then what you do for affect! is make it a call-overloaded type which holds the maximum value that has been encountered. For example:

type AutoAbstol{T}
  cur_max::T
end
# Now make `affect!` for this:
function (p::AutoAbstol)(integrator)
  p.cur_max = max(p.cur_max, maximum(integrator.u))
  integrator.opts.abstol = p.cur_max * integrator.opts.reltol
end

Now you build the type from these, and that type could be passed into any solver.
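
Hypothetically, building and passing it might look like this, assuming a Callback constructor that takes the fields in the order listed above (the exact constructor is not settled, so treat this as a sketch):

# Per the issue text: condition false, no rootfinding, no interp points.
condition(t,u,integrator) = false
cb = Callback(condition, AutoAbstol(0.0), false, 0, (true,false))
sol = solve(prob, Tsit5(), callback = cb)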

Lastly, these can then be composed. You're allowed to specify, in order, multiple conditional callbacks. So:

solve(prob,Tsit5(),callback=(c1,c2))

will first apply c1, and then c2, then hit the iterator part of the interface (if you did it in the iterator form), then go up to the top. This means that you can use the library for tolerance controls along with your own callbacks, and so this makes the ecosystem "infinitely extendable".

Given all of this together, here are some things which are easy to do using the callbacks:

  • DDE solvers. To turn any method into a DDE solver, you need to be able to get previous values and propagate discontinuities. integrator(t) and integrator.sol(t) let you interpolate back, meaning that you can do the step-wise history initialization, and then to force the solver to hit the discontinuities, you either push! the t for the new discontinuities onto integrator.opts.tstops (this is for restricted DDEs, but is faster), or rootfind the discontinuity locations. Either way, this means that every method which implements this interface (all of OrdinaryDiffEq.jl) is, after DelayDiffEq.jl is made, also a DDE solver.

  • Uncertainty quantification. This is actually really trivial to add to any algorithm now. Just make a Callback type which does what's outlined here: Develop and wrap a PODES Implementation DifferentialEquations.jl#100 (comment) .

  • Discrete stochastic simulations. Want to do a Gillespie method? All you need to do that is know the rates, and push the values for changes into tstops. Because of how useful these events are, I am going to make a Discrete() algorithm which is order 0 and is just a pass-through.

  • Composite algorithms. Already mentioned: now you can easily affect anything in the integrator, so why not make it switch between different types of methods? This gives an interesting way to implement some methods like BDF methods as well.

And I can keep going. Loads of credit go to @mauro3, since a lot of this stems from his overarching design ideas, which I thought were wrong at first, only to be proven wrong myself (I thought I couldn't get enough performance out of it; turns out there are ways). Thanks!

@ChrisRackauckas (Member Author)

Hitting a few other points:

  • Termination isn't an issue. Since it's a function terminate!(integrator), each package just writes a dispatch, and this can act uniformly throughout the ecosystem now.
  • For tstops and saveat, just change them into a DataStructures.jl BinaryHeap, with the ordering matching the time direction. Then push!, top, etc. are always in the right order, so tstops then "just works" (see the sketch after this list).
  • get_du(integrator) is a method, so it can be supplied by libraries which support it (via dispatch again), or an error is thrown.
  • Immutables are not a problem since integrator.u = ... lets you change it.
  • Again, dispatches solve the other problems. caches(integrator) for an iterator over the cache objects, resize! to resize all of the cache, etc. By using dispatches, it will work uniformly across the ecosystem, or an appropriate error is thrown to say that a package can't do a specific option.
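
A quick sketch of the heap idea for tstops, using the DataStructures.jl API of the time (constructor names may differ in newer versions):

using DataStructures

tstops = binary_minheap(Float64)   # min-heap for forward-time integration
push!(tstops, Inf)
push!(tstops, 20.0)                # added out of order, e.g. from a callback

top(tstops)   # 20.0: always the next stop, regardless of insertion order
pop!(tstops)  # remove it once the integrator reaches it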

So in the end, I think this is a very easy to understand, highly extendable, and easy to document interface. Let me know if you think there are any changes that should be done. I am not set on things like names: they are very mutable. These things are on the master branches of OrdinaryDiffEq.jl and DiffEqBase.jl, so feel free to give suggestions. I think I'd like to tag DiffEqBase.jl by the end of the week / during the weekend, and then OrdinaryDiffEq.jl soon after, to get this all up and running (it works really well, like, not having issues at all!). If there's no complaints, then I'll close this issue when that's done. Then I'll get to implementing this onto Sundials.jl probably next month (or a good weekend).

@ChrisRackauckas (Member Author)

You have f in there, but some problems will need more than that. Maybe, just include the problem as an argument.

Well, the story here gets really tricky. In many cases you may not be solving the function given to you by the problem type. You can't mutate the problem type's function because it's strictly typed. Some examples of this are:

  1. Sundials is given a problem on Array, but can only naively work on Vector. A closure with views (reshape) is an easy fix to this problem.
  2. Promotions will make new functions.
  3. The differential delay equation solver I'm cooking up requires it. I'll blog post the design soon.

I think the way to handle this is through docs. I actually already put places for this stuff: http://docs.juliadiffeq.org/latest/types/dae_types.html#Special-Solution-Fields-1 . Basically, the intro on solutions will say "here's the stuff every solution/integrator has, but there are some specifics related to the problem domain on the following pages". This way we add du for DAEs, g for SDEs, the second function for IMEX problems, etc.

@ChrisRackauckas (Member Author)

I'll likely let my DASKR PR sit until you've taken a pass at Sundials, then I'll make my PR more conforming.

Note that I will probably be putting my focus on StochasticDiffEq.jl first (getting it to integrators and callbacks/events), finish up DelayDiffEq.jl, release Monte Carlo and differentiation tools libraries, and then head over to Sundials. So it might take a bit more than a week to get to that. Sorry about the wait. But let me know if that interface compromise above works. I think if we're solid on this then any future change is pretty simple (and mostly internal).

@tshort commented Jan 9, 2017

I'm good with that, @ChrisRackauckas.

@ChrisRackauckas (Member Author)

Alright. So it sounds like there's no other objections to the general idea and the first releases for this are going out soon. So I am closing this. Further discussions should be more focused on individual packages/PRs unless something comes up.

@ronisbr commented Jan 10, 2017

Hi @ChrisRackauckas,

I am having problems with this new interface. Can you please take a look at the following code:

https://gist.github.com/ronisbr/a424303a1818e931cdb1a4688b0de1da

I think I followed everything as described in the new documentation, but the event is not being applied correctly. Here is the figure I am getting:

[figure_1: plot of the resulting solution]

After t = 3 the two states must be different.

@ronisbr commented Jan 10, 2017

Btw, this is the same scenario I described in SciML/DifferentialEquations.jl#117

@ronisbr commented Jan 10, 2017

Hi @ChrisRackauckas,

I managed to make this code work only if I remove tstops and use t - 3.0 in the condition. Do you have any idea why the old solution using tstops does not work? It turns out the affect! function was not being called.

@ChrisRackauckas (Member Author) commented Jan 10, 2017

Oh yes, this is definitely a regression. It's due to implementation issues; maybe you'll have an idea for how to handle this, or maybe the spec needs to be slightly modified.

The issue is that, with the new callbacks, to make them more robust I added the possibility to "ignore values near zero". So there's a tolerance when checking the callback condition so that way it's not re-triggered right after a previous event. The relevant line is here:

https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/callbacks.jl#L15

Basically:

# Check for events through sign changes
if isapprox(previous_condition,0,rtol=callback.reltol,atol=callback.abstol)
  prev_sign = 0
else
  prev_sign = sign(previous_condition)
end

# Later on...
# Check for sign change for event
if prev_sign*sign(callback.condition(integrator.tprev+integrator.dt,integrator.u,integrator)) < 0
  # Oops, never triggered because `prev_sign` is zero!
end

If you're too close to zero, then it drops that from being a possibility. While this is good for rootfinding (it makes it a lot more robust), the moment you posted your result I immediately realized that it would be dropping every event you actually wanted!

The fix is to make a callback with zero tolerances, no rootfinding, and no interpolation points. However, that might be a little awkward for the user interface (basically exposing implementation issues), so I think it would be good to expose a DiscreteCallback. It could just create a callback like that (zero tolerances, no rootfinding, etc.) and it should work, but would it be better to have it as its own type? If it is its own type, a reasonable spec would be:

immutable DiscreteCallback{F1,F2} <: DECallback
  condition::F1
  affect!::F2
  save_positions::Tuple{Bool,Bool}
end

# The default constructor DiscreteCallback(condition,affect!,save_positions) then suffices.
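
For example, under this spec @ronisbr's case could hypothetically be written as follows (the condition, the time 3.0, and the field being set are purely illustrative):

condition(t,u,integrator) = t == 3.0            # boolean: event exactly at the stop
affect!(integrator) = (integrator.u[1] = 1.5)   # any mutation of the integrator
cb = DiscreteCallback(condition, affect!, (true,true))

# tstops forces the solver to step to t = 3.0 exactly, so the condition can hit it
sol = solve(prob, Tsit5(), callback = cb, tstops = [3.0])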

Now, before, I was thinking of keeping callbacks as a single type, but in the actual implementation it's impossible to avoid dynamic dispatch when checking/applying the type, because every single one has different functions and thus different callback parameters. In the Gitter channel we calculated the dynamic dispatch cost to be around 90 ns for this case (very low since they are very similar in memory layout, and the layout is known at compile time because the callbacks are stored as tuples). So I don't think that making a DiscreteCallback would introduce a cost at all, and it would make it easier to use dispatch to handle the two different cases (they seem to need quite different handling).

Callback would then always have a condition which is a continuous function with events at 0. DiscreteCallback would always have a condition which is a boolean function with events at false (originally it was false because that's the same as zero, but should it be true to make more sense?). This would create a little more separation between the two types of behavior too, something that @tshort was requesting.

Is this a good idea?

(Also, should the user specify them slightly differently? callback = ... and discrete_callback = ...? Maybe it's an implementation issue, but the first thing I'll have to do is split the callback tuple into these two in order to handle them)

@ronisbr commented Jan 10, 2017

Hi @ChrisRackauckas,

Thanks for the prompt answer. Well, maybe I did not understand everything, but since I pass to the callback whether or not I want rootfind, why can't we use this variable to change how callbacks are handled?

@ChrisRackauckas (Member Author)

Yes, it could just be an implementation issue, and just checking for zero if rootfind==false would fix this, but I was wondering if it would make sense to users to keep them a little separate (is it a bit hacky to have to understand that a discrete callback is one with no rootfinding and no interp_points?).

@ronisbr commented Jan 10, 2017

No, I don't think so. Actually, I think it is better (if it is not very difficult to implement). With the new discrete callback, the condition function should return a boolean (true if the event happened). It will be much more logical for my use case since the condition will be (t in tstops) rather than !(t in tstops).

@ChrisRackauckas (Member Author)

Alright. I think that @tshort would agree it's good to separate them because Sundials and DASKR (and probably all interop packages) will only want to see the Callback, and the DiscreteCallback would have to be handled in Julia. So it should make most implementations much easier.

The last thing is then what I mentioned above.

(Also, should the user specify them slightly differently? callback = ... and discrete_callback = ...? Maybe it's an implementation issue, but the first thing I'll have to do is split the callback tuple into these two in order to handle them)

What do you think here? If every solver is going to need to parse and split the tuple, it at least makes sense implementation-wise to split it, but I wouldn't want to unnecessarily put burden on the user because of implementation issues. Maybe what really needs to be done is that, instead of a tuple, one uses a CallbackSet,

# instead of 
callback = (cb1,cb2,cb3)
# Put it in a type
callback = CallbackSet(cb1,cb2,cb3)

and that constructor can store the two different tuples of callbacks to make it easier for the solvers to grab the two different types?

@ronisbr commented Jan 10, 2017

Well, here I am not sure. I think it is OK to make things easier for the user, but it may lead to problems that are very difficult to debug. For example, what if the user confuses a discrete callback with a continuous one?

@ChrisRackauckas (Member Author)

The CallbackSet constructor can handle "the separation" itself, so users wouldn't have to separate them. They would just have to know to create a Callback or a DiscreteCallback for what they want.

@ronisbr commented Jan 10, 2017

Ah ok! Then it is fine!

@ChrisRackauckas (Member Author)

Hey everyone,

So I made a few changes to the spec to accommodate the latest feedback. Instead of just a single Callback, we have two: ContinuousCallback and DiscreteCallback. ContinuousCallback is what we had before: an event when there's a root, with possibly different effects for upcrossings and downcrossings, and the option to send information to the solver for more/less robust checking (which the solvers may ignore, and which we should document for the individual cases).

The DiscreteCallback has a condition function which declares an event when the condition is true, and then applies the affect.

Now for a few details:

  1. CallbackSets are created recursively:

https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/master/src/callbacks.jl#L34

Because of this, their generation is actually type-stable/inferrable.

  2. CallbackSets add the callbacks to the tuples in the order they come in.

  3. You can mix any of the different types of callbacks in the CallbackSet constructor. For example, you can do CallbackSet(ccb,dcb,cbs) to put together a new CallbackSet from a ContinuousCallback, a DiscreteCallback, and a CallbackSet. The callbacks from cbs are appended to the end of the tuples which start with ccb and dcb respectively.

  4. Ordering of ContinuousCallbacks doesn't matter. This is because event handling goes by whichever one occurs first. Given that the conditions are continuous, the probability of simultaneous events is zero... but the codes will still probably calculate something. I have the tiebreaker as the first one in the list, but this really doesn't matter.

  5. Ordering for DiscreteCallbacks can matter. They are applied in the order they are in the tuple. This will really only matter if the application of one discrete callback makes the next one's condition false.

  6. There are some nice basic constructors for CallbackSet, so the way I am writing the solvers is to take whatever the user gives to callback= and call CallbackSet on it. It will make empty tuples for nothing, and it will properly make one empty tuple and one non-empty tuple if you just pass it a single callback, etc. This will make the callback argument work really nicely.

  7. In OrdinaryDiffEq.jl, I found a way to do all of the callback computations using similar recursion such that there's only one dynamic dispatch. There has to be at least one because you always have to choose which continuous callback to apply (if one is applied), and that cannot be computed at compile time in general (obviously). Here it is:

https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/integrators/integrator_utils.jl#L163

That said, it is self-contained (non-leaky, i.e. it won't affect type inference) and passed through a function barrier which the Gitter channel had documented as costing 90 ns, so we're good here. All of the discrete callback applications avoid dynamic dispatch through some more recursive handling:

https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/callbacks.jl#L112

This is something that can almost directly be mapped over to other packages.

I think that's all of it.

@ChrisRackauckas (Member Author) commented Jan 10, 2017

As for your code @ronisbr, here's a version for the updated callback interface:

https://gist.github.com/ChrisRackauckas/48f782cf6671ba5eb118baca4cc667c4

[figure: plot of the corrected solution]

I also showed chaining of two different callbacks using a CallbackSet in there: one to push it up and then one to push it down. Note that in order to properly apply the affect, I had to:

function affect!(integrator)
    integrator.u.f1 = 1.5
    integrator.cache.tmp.f1 = 1.5
end

Notice the cache variable. This is for the same reasons as before. That said, given the integrator specs, there will be a cache iterator on it, so the general answer which will work with any OrdinaryDiffEq algorithm will be:

function affect!(integrator)
  for c in cache(integrator)
    c.f1 = 1.5
  end
end

And of course, any native solver package could give you an iterator over its cached variables to do the same (but this is something only Julia native packages could hope to support).

That said... I haven't gotten around to implementing all of the cache iterators yet because it will be tedious. I'll get to it (it means the size changing example is also broken on master right now too).

@ronisbr commented Jan 11, 2017

Hi @ChrisRackauckas !

Wow! You are fast! Amazing work, congratulations!

I will test this tonight and, if I can make everything work, then tomorrow I will update my simulator and compare with the old version. I will let you know the results.

@ronisbr commented Jan 12, 2017

Hi @ChrisRackauckas,

I am having a problem, I tried to run your example but I got this error:

ERROR: LoadError: MethodError: no method matching start(::OrdinaryDiffEq.Tsit5Cache{SimType{Float64},SimType{Float64},SimType{Float64},OrdinaryDiffEq.Tsit5ConstantCache{Float64}})
Closest candidates are:
  start(::SimpleVector) at essentials.jl:170
  start(::Base.MethodList) at reflection.jl:258
  start(::IntSet) at intset.jl:184
  ...
 in (::##3#4)(::OrdinaryDiffEq.ODEIntegrator{OrdinaryDiffEq.Tsit5,SimType{Float64},Float64,Float64,Float64,Array{SimType{Float64},1},DiffEqBase.ODESolution{Array{SimType{Float64},1},Array{Float64,1},Array{Array{SimType{Float64},1},1},DiffEqBase.ODEProblem{SimType{Float64},Float64,true,#f},OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.InterpolationData{#f,Array{SimType{Float64},1},Array{Float64,1},Array{Array{SimType{Float64},1},1},OrdinaryDiffEq.Tsit5Cache{SimType{Float64},SimType{Float64},SimType{Float64},OrdinaryDiffEq.Tsit5ConstantCache{Float64}}}},SimType{Float64},#f,Void,OrdinaryDiffEq.Tsit5Cache{SimType{Float64},SimType{Float64},SimType{Float64},OrdinaryDiffEq.Tsit5ConstantCache{Float64}},OrdinaryDiffEq.DEOptions{Float64,Float64,Float64,Float64,OrdinaryDiffEq.#ODE_DEFAULT_NORM,DiffEqBase.CallbackSet{Tuple{},Tuple{DiffEqBase.DiscreteCallback{##1#2,##3#4}}},OrdinaryDiffEq.#ODE_DEFAULT_ISOUTOFDOMAIN,OrdinaryDiffEq.#ODE_DEFAULT_PROG_MESSAGE,DataStructures.BinaryHeap{Float64,DataStructures.LessThan},Void}}) at /home/ronan.arraes/tmp/teste_new_callbacks_01.jl:67
 in apply_discrete_callback!(::OrdinaryDiffEq.ODEIntegrator{OrdinaryDiffEq.Tsit5,SimType{Float64},Float64,Float64,Float64,Array{SimType{Float64},1},DiffEqBase.ODESolution{Array{SimType{Float64},1},Array{Float64,1},Array{Array{SimType{Float64},1},1},DiffEqBase.ODEProblem{SimType{Float64},Float64,true,#f},OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.InterpolationData{#f,Array{SimType{Float64},1},Array{Float64,1},Array{Array{SimType{Float64},1},1},OrdinaryDiffEq.Tsit5Cache{SimType{Float64},SimType{Float64},SimType{Float64},OrdinaryDiffEq.Tsit5ConstantCache{Float64}}}},SimType{Float64},#f,Void,OrdinaryDiffEq.Tsit5Cache{SimType{Float64},SimType{Float64},SimType{Float64},OrdinaryDiffEq.Tsit5ConstantCache{Float64}},OrdinaryDiffEq.DEOptions{Float64,Float64,Float64,Float64,OrdinaryDiffEq.#ODE_DEFAULT_NORM,DiffEqBase.CallbackSet{Tuple{},Tuple{DiffEqBase.DiscreteCallback{##1#2,##3#4}}},OrdinaryDiffEq.#ODE_DEFAULT_ISOUTOFDOMAIN,OrdinaryDiffEq.#ODE_DEFAULT_PROG_MESSAGE,DataStructures.BinaryHeap{Float64,DataStructures.LessThan},Void}}, ::DiffEqBase.DiscreteCallback{##1#2,##3#4}) at /home/ronan.arraes/.julia/v0.5/OrdinaryDiffEq/src/callbacks.jl:120
 in handle_callbacks! at /home/ronan.arraes/.julia/v0.5/OrdinaryDiffEq/src/integrators/integrator_utils.jl:168 [inlined]
 in loopfooter! at /home/ronan.arraes/.julia/v0.5/OrdinaryDiffEq/src/integrators/integrator_utils.jl:138 [inlined]
 in solve!(::OrdinaryDiffEq.ODEIntegrator{OrdinaryDiffEq.Tsit5,SimType{Float64},Float64,Float64,Float64,Array{SimType{Float64},1},DiffEqBase.ODESolution{Array{SimType{Float64},1},Array{Float64,1},Array{Array{SimType{Float64},1},1},DiffEqBase.ODEProblem{SimType{Float64},Float64,true,#f},OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.InterpolationData{#f,Array{SimType{Float64},1},Array{Float64,1},Array{Array{SimType{Float64},1},1},OrdinaryDiffEq.Tsit5Cache{SimType{Float64},SimType{Float64},SimType{Float64},OrdinaryDiffEq.Tsit5ConstantCache{Float64}}}},SimType{Float64},#f,Void,OrdinaryDiffEq.Tsit5Cache{SimType{Float64},SimType{Float64},SimType{Float64},OrdinaryDiffEq.Tsit5ConstantCache{Float64}},OrdinaryDiffEq.DEOptions{Float64,Float64,Float64,Float64,OrdinaryDiffEq.#ODE_DEFAULT_NORM,DiffEqBase.CallbackSet{Tuple{},Tuple{DiffEqBase.DiscreteCallback{##1#2,##3#4}}},OrdinaryDiffEq.#ODE_DEFAULT_ISOUTOFDOMAIN,OrdinaryDiffEq.#ODE_DEFAULT_PROG_MESSAGE,DataStructures.BinaryHeap{Float64,DataStructures.LessThan},Void}}) at /home/ronan.arraes/.julia/v0.5/OrdinaryDiffEq/src/solve.jl:221
 in #solve#47(::Array{Any,1}, ::Function, ::DiffEqBase.ODEProblem{SimType{Float64},Float64,true,#f}, ::OrdinaryDiffEq.Tsit5, ::Array{Any,1}, ::Array{Any,1}, ::Array{Any,1}) at /home/ronan.arraes/.julia/v0.5/OrdinaryDiffEq/src/solve.jl:7
 in (::DiffEqBase.#kw##solve)(::Array{Any,1}, ::DiffEqBase.#solve, ::DiffEqBase.ODEProblem{SimType{Float64},Float64,true,#f}, ::OrdinaryDiffEq.Tsit5, ::Array{Any,1}, ::Array{Any,1}, ::Array{Any,1}) at ./<missing>:0 (repeats 2 times)
 in include_from_node1(::String) at ./loading.jl:488
while loading /home/ronan.arraes/tmp/teste_new_callbacks_01.jl, in expression starting on line 80

Both DiffEqBase and OrdinaryDiffEq are at master. I have no idea what is happening.

@ronisbr commented Jan 12, 2017

Never mind, I deleted .julia and started again. Now it is working. I only have one more comment: I could not find the cache function you mentioned. I see a cache_iter in DiffEqBase, but it seems it is not implemented yet for OrdinaryDiffEq. Am I doing something wrong?

@ChrisRackauckas (Member Author)

That said... I haven't gotten around to implementing all of the cache iterators yet because it will be tedious. I'll get to it (it means the size changing example is also broken on master right now too).

I meant cache_iter. I just haven't gotten to it yet, so currently you'd have to look at the cache for the algorithm and use that knowledge to build it yourself.

@ronisbr commented Jan 12, 2017

Hi @ChrisRackauckas,

In this new version, is it possible to let the solver choose the algorithm? If I remove the algorithm from the solve command, then it gives an error.

@ChrisRackauckas (Member Author)

That will need the DifferentialEquations.jl tag to go through, which is last.

@ronisbr commented Jan 12, 2017

@ChrisRackauckas, just to let you know that I ported my simulator to the new version of OrdinaryDiffEq and the new callback interface. Everything is exactly the same: no performance drop, no change in the results. And now I don't need a global variable to pass the time to the affect!. Great work! :)

When we can select the absolute tolerance per state, I think this simulator will beat Simulink execution time, which is fantastic!

@ChrisRackauckas (Member Author)

Good to hear! Sounds like the callbacks are working well then. I'll close this and the rest is package-specific implementation issues.

@ChrisRackauckas (Member Author)

@ronisbr, I expanded the cache iterator parts. There's one for just the u-like values, one for just the rate-like values, and one for all cache variables. It'll take some documentation and examples to really explain, but the idea is that in many cases you only need a certain part of the cache. In your example, you only needed to update the u-like values, since those are the ones which hold and use the discrete variable in your function. Thus on OrdinaryDiffEq master you can now do:

function affect!(integrator)
  for c in u_cache(integrator)
    c.f1 = 1.5
  end
end

and that should work with all functions for which it's implemented (everything but Rosenbrock and the implicit methods), and will gain support as time goes on. Again, this is on master and may not be released for a bit.

@ronisbr commented Jan 21, 2017

Excellent @ChrisRackauckas! Good work :)
