CVODES #262

Closed
wds15 opened this issue Mar 17, 2016 · 37 comments

@wds15
Contributor

wds15 commented Mar 17, 2016

Summary:

Replace the CVODE integrator with CVODES, i.e., remove CVODE and replace its functionality with CVODES.

Description:

The current CVODE integrator uses a coupled ODE system to obtain sensitivities of an ODE system. CVODES handles this in a fully integrated way. Benefits of using CVODES:

  • always faster than cvode+coupled
  • greater robustness (in particular for large ODE systems)
  • less code to maintain, since more work is done by the library
  • CVODES supports adjoint sensitivity analysis, which can be added at a later stage; this allows calculating the sensitivities of a cost function with respect to the parameters directly, which is useful if we only need the sensitivities of a single state that enters the likelihood.
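
For reference, the quantity both approaches compute is the standard forward-sensitivity system (textbook material, not specific to this patch): for states $\dot{y} = f(t, y, p)$, each sensitivity $s_i = \partial y / \partial p_i$ satisfies

$$\dot{s}_i = \frac{\partial f}{\partial y}\, s_i + \frac{\partial f}{\partial p_i}, \qquad s_i(t_0) = \frac{\partial y_0}{\partial p_i}.$$

The "coupled" approach integrates the states and all $s_i$ together as one enlarged ODE system; CVODES instead exploits this block structure internally.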

Reproducible Steps:

Performance is bad with big ODE systems, for example the PERC model. Other big models are equally suitable for demonstrating the slow performance of the cvode/coupled system.

Current Output:

Some ODE systems make the cvode/coupled integrator very slow.

Expected Output:

Large ODE systems should not slow down the integrator.

Additional Information:

This is a large patch, and we may just use it as a basis to split into smaller pieces. The patch also introduces the ode_model object, which can be used to enable analytic Jacobians for ODE systems via template specialization. Moreover, the decoupling operation is split out of the coupled_ode_system object. This patch can form the basis for a nice refactoring of the coupled_ode_system.

Current Version:

None. Only the current develop version of math

@bob-carpenter
Contributor

I think Michael's going to have to chime in on the math and algorithms.

Is that PERC model available somewhere we can use to benchmark? Does anybody know if we can distribute the data? Otherwise, maybe Michael has some ideas for performance tests?

I'm more than happy to help out with the nitpicky code integration details.

@wds15
Contributor Author

wds15 commented Mar 18, 2016

Hi!

Thanks, Bob, for the support!

I am asking Frederic about the PERC model right now and will let you know once I have feedback from him. I can provide other large (published) models that show the same problems when solved with cvode+coupled instead of CVODES.

In case we want to merge in smaller steps, we could consider merging the current branch feature-262-CVODES. Right now all I have done is replace CVODE with CVODES. Since CVODES is 100% compatible with CVODE, there is no code change to the current CVODE+coupled code yet, and all tests pass!

In case we want to swap out CVODE for CVODES now, I would branch off this branch to include the really big change this switch brings.

Just let me know what is best here... otherwise I will proceed and populate this feature branch with my code.

@syclik
Member

syclik commented Mar 18, 2016

Sebastian, thanks for asking Frederic about it.

  1. For those that are interested in it, the data, Frederic's original
    model, and my replication efforts are on a private repo:
    https://github.com/gelman/perc. It's private so the data doesn't float out
    until we know what the restrictions are. Part of the data is available
    through GNU mcsim.
  2. Frederic's original model doesn't fit. That's why I've been trying to
    break it down. I don't think using the original model as a benchmark is
    appropriate, since it's clear that we haven't figured it out and we may be
    timing something that isn't relevant to typical usage.


@wds15
Contributor Author

wds15 commented Mar 18, 2016

For the purpose of an ODE benchmark, we don't need the data. It would be nice, yes; but if we are only after ODE performance, then the ODE and a parameter set are sufficient. And as I said, if we don't trust the PERC model, I can provide other (published) examples. Let me prepare one of those.

I take it that we will do this in a single go (or cherry-pick smaller parts of this branch later). Great.

@syclik
Member

syclik commented Mar 18, 2016

Thanks. Please provide another example until we can get the PERC model working.

I didn't understand your last comment.


@wds15
Contributor Author

wds15 commented Mar 19, 2016

I think I found a great example: the Hornberg model, described in the reference DOI: 10.1111/j.1432-1033.2004.04404.x. Unfortunately the paper itself is paywalled, but the model is freely available here (the link is right next to the abstract):

http://jjj.biochem.sun.ac.za/database/hornberg/index.html

This model has 8 states and 18 parameters, describing a system of reactions with a number of Michaelis-Menten kinetics. This is a large but, I suppose, very typical use case.

The downloadable model includes the equations, initial values, and parameters to simulate. I suggest putting this into an executable test on an independent branch, so that we can eventually merge it into develop and also into the CVODES branch. I hope that makes sense.

@bob-carpenter
Contributor

Sounds great.

@wds15
Contributor Author

wds15 commented Mar 19, 2016

Ok, here it goes!

The stiff stress test takes ages with the old CVODE implementation. The same test is now also included in the CVODES branch, which I just pushed to the repository. The test can be triggered with

./runTests.py test/unit/math/rev/arr/functor/integrate_ode_perf_test.cpp

The CVODES branch currently passes all integrate_ode_cvode* tests. However, the coupled_ode_system_cvode* tests do NOT pass at the moment, as I have replaced coupled_ode_system_cvode with the cvodes_integrator object. From what I can see, it would make sense to change the coupled_ode_system_cvode tests so that they test the cvodes_integrator instead. I am happy to do that.

However, I guess we should first settle on the design, as I did change a few things. Most importantly, I no longer use a shifted system when the initial values vary. Moreover, the code I supply can serve as a refactoring base for the odeint-based integrator.

Finally, I am also providing another Hornberg test that demonstrates how one can use partial template specialization to provide analytic Jacobians.
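
A minimal sketch of that specialization pattern (illustrative names only, not the actual branch code):

#include <vector>

// Generic case: ode_model obtains the Jacobian via Stan's autodiff (not shown).
template <typename F>
struct ode_model {};

struct hornberg_rhs { /* dy/dt of the Hornberg system */ };

// Specialization for a concrete RHS functor: supply the Jacobian analytically,
// e.g. generated symbolically by a script.
template <>
struct ode_model<hornberg_rhs> {
  static void jacobian(double t, const std::vector<double>& y,
                       const std::vector<double>& theta,
                       std::vector<double>& jac) {
    // fill jac (row-major, y.size() x y.size()) with the d(dy/dt)/dy entries
  }
};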

@wds15
Contributor Author

wds15 commented Mar 20, 2016

I have ported all CVODE-related tests to the new code. In the folder test/unit/math/rev/arr/functor/ I renamed

  • coupled_ode_system_cvode_prim_test.cpp to cvodes_integrator_prim_test.cpp
  • coupled_ode_system_cvode_rev_test.cpp to cvodes_integrator_rev_test.cpp

and ported each test to the slightly updated logic. I suppose these two files are a good reference for how the cvodes_integrator works. ALL CVODE-related tests have now been ported to the new system, and ALL tests pass.

The source code still contains some comments here and there to make it easier to grasp what changed and how. For the final merge we can delete a number of those comments.

Moreover, I would like to enable the non-stiff solver from CVODES, so I propose adding a further "solver" argument to the integrate_ode_cvode function. The code is prepared for that, but I am not sure whether you agree.

@bob-carpenter
Contributor

That all sounds great. @betanalpha, can you weigh in on this?
I'm a bit out of my depth with the ODEs and any potential
gotchas.

@wds15: That all sounds great! When you say you don't use a
shifted system, do you compute the same function? If so,
it shouldn't be a problem.

@betanalpha: can you take a look at this and weigh in on any
potential issues?

  • Bob

@bob-carpenter
Contributor

We can add a new function, but we don't want to break the
old one. If the new argument is just going to be a string,
then I'd suggest rolling it into the function name, like:

integrate_ode_cvodes(...);

and then our existing

integrate_ode(...)

can delegate to one of the versions. From the semantic side
of the parser, it's almost as easy to pull it out of the name
as out of an argument.

Others may have better ideas for naming.

We can deprecate the old function and get rid of it in
the future.
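
A toy sketch of that delegation (all names and signatures hypothetical, just to illustrate encoding the solver in the function name):

#include <vector>

// Solver-specific implementation (stand-in for the real CVODES-backed solve).
std::vector<double> integrate_ode_stiff(const std::vector<double>& y0) {
  return y0;
}

// Generic entry point delegates; it can be deprecated later without breaking
// existing models.
std::vector<double> integrate_ode(const std::vector<double>& y0) {
  return integrate_ode_stiff(y0);
}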

@wds15
Contributor Author

wds15 commented Mar 21, 2016

I am definitely calculating the very same function, of course. In fact, all the dy_dt's are exactly the same; the only difference is how the initial values are set. The test file cvodes_integrator_rev_test.cpp shows how this differs.

I am proposing to leave the integrate_ode function just as it is. All I am suggesting is to change the function signature of

integrate_ode_cvode

i.e., I propose to add a further "solver" argument, which I suggest should be an integer. To my knowledge, the function integrate_ode_cvode was never released but only "lived" on the develop branch, which is why I thought changing the signature at this stage would be OK. Specifically, I would like the solver definitions to be

  • 0 = non-stiff (Adams-Moulton)
  • 1 = stiff (BDF)
  • 2 = stiff (BDF) with STaLD

The STaLD algorithm is a stability limit detection algorithm for BDF. It costs you 5-7% performance and is in rare cases (according to the CVODES documentation) needed to guarantee the numerical stability of the BDF method. In case I manage to figure out how to compile CVODES with the MKL, we could add another stiff solver that uses the MKL, provided the speedup is noticeable and it can be deployed easily.
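
In CVODES terms, that flag would map onto solver setup roughly like this (SUNDIALS 2.x API as bundled at the time; function name hypothetical, error handling omitted):

#include <cvodes/cvodes.h>

void* make_cvodes_solver(int solver) {
  void* cvode_mem = solver == 0
      ? CVodeCreate(CV_ADAMS, CV_FUNCTIONAL)  // 0: non-stiff Adams-Moulton
      : CVodeCreate(CV_BDF, CV_NEWTON);       // 1 and 2: stiff BDF
  if (solver == 2)
    CVodeSetStabLimDet(cvode_mem, TRUE);      // 2: BDF + stability limit detection
  return cvode_mem;
}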

To proceed, should I kick off a pull request?

@betanalpha
Contributor

I think we need to step back and more carefully understand some underlying issues.

  1. Firstly, I don’t trust anything with abs/rel tolerances any higher than O(10^{-8}).
    When I was doing testing I kept getting really inaccurate, as in O(10^{-3}) absolute
    error, solutions with higher solver tolerances, even in simple models. These
    large errors can invalidate everything we do and so we have to be sure that we’re
    getting accurate solutions (i.e. the Metropolis correction does not fix this as it
    biases the Metropolis correction itself). This carries over to all of the switching
    between algorithms — there is no trade-off on accuracy vs performance, there
    is only performance conditioned on sufficient accuracy.

  2. Why is CVODES supposedly so much faster than CVODE with our AD computing
    the Jacobian? I’m not even sure how CVODES can compute sensitivities given
    the general complexity of our language — are we sure that it’s only been tested
    on simple models that just use simple algebraic calculations? And even if it can,
    how is it so much faster than our AD? I can see indirection being slow, or even
    CVODES being able to ensure better memory locality, but for that to lead to such
    a speed up is suspicious to me.

  3. I’m still hesitant about spending too much effort on having self-implementable
    gradients. Sebastian might find this useful but most of our users are using Stan
    exactly because they don’t want to deal with such complications. If there are
    such speed ups possible in typical PK/PD models then it’s much more useful
    to implement exact gradients for various systems internally as prepackaged
    ODE functors written in C++.

@wds15
Contributor Author

wds15 commented Mar 21, 2016

Hi!
Let me try to answer:

  1. To my understanding, the solution from CVODES will be better behaved in terms of accuracy, as I do not use a shifted system and hence the abs+rel tolerances are applied to the unshifted values we are interested in. In fact, I have programmed a test which compares ODE solutions against an analytic solution. The tests I have done show accuracy within 1e-5; I haven't tested higher accuracy.

  2. I can only speculate about what breaks the performance of the cvode+coupled system. My guess would be that it has problems finding a suitable step size. With respect to CVODES: the implementation uses AD from Stan to get all needed Jacobian information. Nothing changes in this regard; there is really no difference. The main difference comes from how the mathematical problem is approached within CVODES. That is, CVODES takes advantage of the specific structure of the problem. As you have proven, the Jacobian of the coupled system has a special structure. The difference between cvode+coupled and CVODES is mainly that CVODES first solves the main (base) system and then solves for the sensitivities (once the main solution has converged). Apparently this works out better than solving everything at once (see the setup sketch after this list).

  3. User-supplied gradients would be a great thing from my perspective, but we can discuss this later. I agree that it is tedious to code them and most users would not do it. In fact, the example I included uses a Jacobian which was generated automatically in symbolic form for me by a script. So if we provide such an option, then we certainly want to point out automatic approaches to generating the symbolic Jacobian. Of course, pre-coded ODE systems as part of the Stan library would be another good option.
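
The staggered strategy from point 2 corresponds to the following CVODES calls (SUNDIALS 2.x API; error checking omitted; function name hypothetical):

#include <cvodes/cvodes.h>
#include <nvector/nvector_serial.h>

// CV_STAGGERED makes CVODES converge the base states first at each step and
// only then correct the sensitivities.
void enable_sensitivities(void* cvode_mem, int num_params,
                          N_Vector* yS0, realtype* p, realtype* pbar) {
  // A NULL sensitivity RHS would select internal difference quotients; the
  // branch instead registers a callback backed by Stan's autodiff here.
  CVodeSensInit(cvode_mem, num_params, CV_STAGGERED, NULL, yS0);
  // pbar carries the "order-of-magnitude" scaling factors the CVODES manual
  // recommends for accurate sensitivities.
  CVodeSetSensParams(cvode_mem, p, pbar, NULL);
  // Derive the sensitivity tolerances from the state tolerances.
  CVodeSensEEtolerances(cvode_mem);
}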

@betanalpha
Contributor

(Re 1, accuracy within 1e-5:)

Yeah, that’s what I saw in simple problems. And that’s just not good enough for our
purposes — that’s a large enough bias to completely mess up the accuracy of the
MCMC estimators and we have no way of correcting it.

(Re 2, how CVODES approaches the problem:)

Does it take in a Jacobian callback or calculate it internally using finite diffs?

(Re 3, user-supplied gradients:)

I think right now we don’t want to burden ourselves with additional maintenance of
trying to expose such a feature. In other words, this should be saved for last.

@wds15
Contributor Author

wds15 commented Mar 21, 2016

  1. The CVODES library talks about introducing scaling factors for the sensitivities in order to improve accuracy. In short, it is useful to provide "order-of-magnitude" information; this is what the manual says. Since the sensitivity with respect to a parameter p is related to the value of the parameter itself, we could try to look into this. I haven't pushed CVODES to more than 1E-5; let me try.

  2. To answer your question: of course, CVODES uses callbacks to get Jacobian information from Stan using AD. There is no finite differencing being done here.

... and to add to my previous answer: CVODES is CVODE with sensitivity support. So this is an extension to CVODE which specifically addresses sensitivity analysis. There has to be a reason why SUNDIALS provides a specific software suite for ODE+sensitivity problems, and the stiff ODE benchmark is an example of this. So, yes, using CVODES gives such a large speedup not only due to better memory locality, but mostly due to better algorithms for solving the ODE+sensitivity problem.

@betanalpha
Contributor

(Re the sensitivity scaling factors:)

Let’s be careful to differentiate the solver tolerances (which control the accuracy of numerical solutions)
and the ODE sensitivities (which are the derivatives that we want).

I am here referring to the former and how accurate we need the numerical solutions to be before
the numerical errors significantly bias our MCMC estimators. My intuition is that 1e-5 is too large,
which is the typical error that the nominal settings in the original Boost integrator and CVODE
were giving.

(Re Jacobian callbacks and CVODES vs CVODE:)

Yeah I get the difference between CVODE and CVODES now — CVODES is using a different
numerical solver and not just trying to compute everything internally. So we have to pass in
a Jacobian for CVODES to define the sensitivity solutions, but what about the Jacobian of
the sensitivity system? The one that requires second-order derivatives? This is the
“banded Jacobian” in the current implementation of the CVODE solver in Stan.

@wds15
Contributor Author

wds15 commented Mar 21, 2016

  1. Well, maybe we should look into the CVODES user manual; they talk about accuracy considerations. As I said, to improve the accuracy of the ODE sensitivities they recommend defining scaling factors, the basic observation being that the accuracy of df/dp is related to p.

  2. From what I understand from the CVODES user manual, they use a "modified" Newton iteration to solve the sensitivity system. This modified Newton iteration reuses the Jacobian of the base system, so they take advantage of the special structure and reuse Jacobian information as much as possible. From reading the user guide, I understand that they internally set up such a banded system and solve that. This is all detailed in the CVODES user manual; please have a look at their mathematical considerations section.
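
To make the reuse argument concrete (standard BDF/Newton bookkeeping, paraphrasing the CVODES manual): the sensitivity right-hand side

$$\dot{s}_i = J\, s_i + \frac{\partial f}{\partial p_i}, \qquad J = \frac{\partial f}{\partial y},$$

is linear in $s_i$ with the same Jacobian $J$ as the base system, so the Newton matrix

$$M = I - \gamma J$$

factored for the state correction can be reused for the sensitivity corrections. The solver itself needs only $J$ and $\partial f / \partial p$, not second-order derivatives.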

@betanalpha
Contributor

No, no, no. I’m saying that the absolute and relative tolerances that specify
the accuracy of the numerical solutions to f (and then to the numerical solutions
of the df/dp, as they’re solved in a similar fashion) which are currently the default
are not sufficient for statistical use. We need to tune the solvers to achieve
higher accuracy by default and test for this higher accuracy in the unit tests.

@wds15
Contributor Author

wds15 commented Mar 21, 2016

Well, CVODES offers more control over the absolute and relative tolerances for the sensitivities. In fact, if you want to, you can set these for each parameter sensitivity separately. So CVODES should let us tune things to higher accuracy.

Although, I have to say that I don't yet fully understand why such high-accuracy sensitivities are needed. Do you have examples somewhere which show what goes wrong? I mean, I am getting very good inferences from Stan when using ODEs. Maybe things are not ultra-precise, but in my applications the accuracy is sufficient, at least from what I can say.

But to come back to CVODES: the implementation passes all tests which were written for CVODE; what else is needed to merge this? The increased-accuracy story is separate from this and should be dealt with in another issue+pull request, no?
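
For reference, the per-parameter control mentioned above looks roughly like this (SUNDIALS 2.x API; error checks omitted; values and function name illustrative):

#include <vector>
#include <cvodes/cvodes.h>

void set_sensitivity_tolerances(void* cvode_mem, int num_params) {
  // one absolute tolerance per parameter sensitivity
  std::vector<realtype> abs_tol_s(num_params, 1e-10);
  abs_tol_s[0] = 1e-12;  // e.g., tighten the parameter that matters most
  CVodeSensSStolerances(cvode_mem, 1e-10, &abs_tol_s[0]);
}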

@betanalpha
Contributor

I AM NOT TALKING ABOUT JUST THE ACCURACY OF THE SENSITIVITIES.
I am suspicious of many of our ODE tests because they rely on defaults that
lead to numerical state solutions (and sensitivities, but ignore them as apparently
they’re just confusing things) with quite large errors. We have no way to correct
these errors; they consequently propagate through our entire system and bias
the MCMC estimators, and we have NO IDEA HOW LARGE THIS EFFECT MIGHT
BE. Our standard should not be “looks good enough by eye in this problem where
I don’t know what the correct means and variances are” but rather “we can constrain
the absolute error well enough that our implementation is robust to almost all use
cases”.

Would everyone be happy if our exp or log implementations had absolute accuracy
of only 10^{-3} or 10^{-5}?

This point is relevant to this and the other PR because they are based on
performance, which requires us to define what our threshold for accuracy should be.
I tried to ameliorate this a bit in the CVODE implementation by dropping the abs/rel
defaults to 1e-10 which gave about 1e-8 errors. But if we’re going to start comparing
different solvers to the point of advocating one over another then we need to define
what accuracy we want to require.

@wds15
Contributor Author

wds15 commented Mar 21, 2016

No one expects ODEs to be solved so super-precisely. If that matters for NUTS, OK, then this is a different story. Of course, we have to define at which tolerances we want comparisons to be run.

I followed your suggestion of putting 1E-10 abs+rel tolerances in the test case I provided. The old CVODE+coupled system again fails here, while the CVODES implementation passes this test.

@wds15
Contributor Author

wds15 commented Mar 22, 2016

Hi!

I added a further performance test case, i.e., this time one that is possibly a minimal example. The ODE model consists of only 2 states and 4 parameters. Essentially, one state becomes activated and then deactivated again. Both reactions are governed by a Michaelis-Menten-type rate. The ODE system is

ydot[0] = -1*(act*y[0]/(KmA+y[0]))+1*(deact*y[1]/(KmAp+y[1]));
ydot[1] =  1*(act*y[0]/(KmA+y[0]))-1*(deact*y[1]/(KmAp+y[1]));

Moreover, I tightened the absolute and relative tolerances to 1E-10. The new, smaller test case again fails with the old cvode+coupled system, which manages only 24 integrations of the system within a minute. The new CVODES system instead runs 100 integrations in 0.1 s.

Michaelis-Menten ODEs are a primary application area for the ODE solver in Stan. At least for my application purposes.
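
For concreteness, here is a sketch of this system as an ODE functor in the style Stan math's integrators expect (signature assumed from the code base of the time; names hypothetical):

#include <ostream>
#include <vector>
#include <stan/math.hpp>

struct mm_activation {
  template <typename T0, typename T1, typename T2>
  std::vector<typename stan::return_type<T1, T2>::type>
  operator()(const T0& t, const std::vector<T1>& y,
             const std::vector<T2>& theta, const std::vector<double>& x_r,
             const std::vector<int>& x_i, std::ostream* msgs) const {
    const T2& act = theta[0];    // activation rate
    const T2& deact = theta[1];  // deactivation rate
    const T2& KmA = theta[2];    // Michaelis constant of the inactive state
    const T2& KmAp = theta[3];   // Michaelis constant of the active state
    std::vector<typename stan::return_type<T1, T2>::type> ydot(2);
    ydot[0] = -act * y[0] / (KmA + y[0]) + deact * y[1] / (KmAp + y[1]);
    ydot[1] = act * y[0] / (KmA + y[0]) - deact * y[1] / (KmAp + y[1]);
    return ydot;
  }
};

Templating the scalar types is what lets the same functor serve both the plain solve (doubles) and the autodiff-based sensitivity computation.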

@bob-carpenter
Contributor

On Mar 21, 2016, Michael Betancourt wrote: "My intuition is that 1e-5 is too large."
My guess is that while 1e-5 may be a problem, I doubt 1e-7 is going
to be. My thinking is that until the error's higher than MCMC error,
it's not going to matter much. If our outcomes are order 1e0 and our
posterior std devs are order 1e-2, then error of 1e-7 is barely worth
worrying about --- it's on the order of MCMC error (and yes, I understand
that the error here is biased, not just noisy like MCMC error).

I may be way off base, of course!

  • Bob
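
Making that back-of-the-envelope comparison explicit (the orders of magnitude are Bob's; the effective sample size is an assumed illustrative value of $10^4$):

$$\text{MCSE} \approx \frac{\sigma_{\text{post}}}{\sqrt{N_{\text{eff}}}} \approx \frac{10^{-2}}{\sqrt{10^{4}}} = 10^{-4},$$

so a solver error of order $10^{-7}$ sits well below the Monte Carlo error, with the caveat Bob notes that solver error is a bias and does not average away the way MCMC noise does.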

@betanalpha
Contributor

The problem is that the ODE errors can potentially aggregate coherently over multiple time
points and multiple systems (for example, multiple patients).

@wds15, here is what I think needs to be done before a pull request.

  1. Study the effect of ODE error on final results.

1a) Take a few ODE models that span typical applications, including one exact model
(such as the one in test/unit/math/rev/arr/functor/integrate_ode_cvode_grad_rev_test.cpp).
I suggest an SIR model and a hierarchical PK/PD type model; the ones I just sent to
the dev list for R tests would be reasonable.

1b) Run with CVODE (or CVODES if you prefer) with very small rel and abs tolerances
(see if 1e-14 or even 1e-16 will run in a reasonable amount of time) and many samples
to get "exact" values of the parameter means, variances, and MCMC standard errors. In
addition, take a few posterior samples and get the ODE states vs time, treating these as
"exact" values as well.

1c) Now scan through abs/rel tolerances, say jumping 2 orders of magnitude at each
step until you get to 1e-2. For each tolerance record the parameter means, variances,
standard errors, and the states vs time for the above posterior samples.

Use the states vs time to quantify the actual error in the ODE solver for each setting.

Plot actual error vs abs/rel tolerance and the mean/variances/standard errors vs
abs/rel tolerance.

1d) We should hopefully see a cut off where the abs/rel tolerance is low enough that
there are no effects on the MCMC estimation. We then set that at the benchmark
in our tests and tune the existing ODE solver defaults to achieve that benchmark.

Then we'll be in a position to properly judge new solvers.

  2. Add CVODES as a new solver OR replace CVODE with CVODES. I'm fine with
    either, but either copy the existing CVODE implementation or just drop in CVODES
    for CVODE. Don't change any of the architecture yet.

  3. Then we can talk about further changes.


@bob-carpenter
Contributor

I understand what (and why) you want to test. So I think
we should start with our existing solvers. It sounds from
Sebastian's experiments that the two existing solvers will fail
at 1e-10 tolerances whereas CVODES will succeed. That'd be
a very convincing case for including it above and beyond the
speed (the problem is that we only care about speed conditioned
on getting a good enough result for our applications).

  • Bob


@betanalpha
Contributor

Right. I have no objections to putting CVODES in
once Sebastian explained that it used our autodiff
with callbacks, I just want to make sure we formalize
our testing thresholds now that we’re transitioning
from a solver to multiple solvers.


@bob-carpenter
Contributor

Do these tests already exist for the first two solvers
(Runge-Kutta from Boost and CVODE from Sundials)?

  • Bob


@betanalpha
Contributor

Yes, although with somewhat arbitrary accuracy thresholds.
I want to motivate better thresholds before moving forwards.


@wds15
Contributor Author

wds15 commented Mar 23, 2016

I fully agree that a good understanding of ODE estimation error and MCMC error is very valuable. However, I have to say that I doubt we can ever cover the space of models/parameters reasonably well, as there are too many unknowns (and unknown unknowns). That said, I can use a program I wrote a while ago to look into this a bit. The problem is a hierarchical PK/PD problem where I simulate from some true values and then use a Stan program which I can switch between an analytic solution OR the ODE integrator. I guess this would be a good base for such a study - and I am definitely very interested in such results myself, as I am interested in the question of how low I can set the precision and still be safe.

For the stiff ODE performance tests I am not so sure we need to tune this to ultra-high precision. I mean, these are tough problems, and I am happy to live with a somewhat biased solution rather than none. I know that I am coming at this a bit from the other side, and I do agree that understanding bias in these settings is also valuable; I just doubt it will be practical to insist on ultra-high precision (although I have set CVODES to 1E-10, which should be a "true" 1E-8).

I guess you speak of the *_grad_test.cpp tests, which test at precisions of 1E-8 and 1E-7. To me that sounds like enough, but for the moment we could just bump that to a higher precision (1E-10?) and possibly revise this setting later after all those tests? I mean, the benefit of CVODES can be very large for our users.

@betanalpha
Contributor

You are completely speculating here. If we’re going to be comparing
solvers then we need to be more careful, and the studies I laid out
are straightforward and at least partially quantify the effect of the
solver tolerances.


@bob-carpenter
Contributor

The fact that we can't cover every contingency with a suite
of benchmarks shouldn't stop us from building a couple just to see
how things work. As Sebastian says, we want to know this in
order to provide advice to users anyway.

I believe all that Michael's asking for here is to show that
CVODES works at least as well as CVODE in terms of precision
and that it's faster, which would motivate replacing CVODE.

Even if we weren't comparing CVODES, we'd want these tests
anyway.

Given that we will have user-settable precision, the decision
on whether to sacrifice some bias for some speed will be up
to users anyway.

  • Bob

@wds15
Contributor Author

wds15 commented Mar 23, 2016

Yes, as I said, I can run those tests on a model I have prepared. All simulated data and a direct comparison with an analytic solution; I guess this is the best comparison we can do. Agree?

Given that we consider these tolerances very important, I could enable the possibility of setting an absolute tolerance PER state within CVODES (which will propagate appropriately to the sensitivities). Does that make sense? So we would have two functions, one taking a vector of abs_tols and one taking just a real which gets applied to all states. Then the user can decide for which states they are willing to pay a large performance price. This is recommended practice from the CVODES user manual (a sketch follows).

With respect to precisions: there are tests in Stan that are set to 1E-7 and 1E-8 right now, and CVODES passes these just as CVODE does. To expedite this pull request I was suggesting upping that to 1E-10 for now... higher precisions will be a problem for just about any solver anyway, as double machine precision is around 1E-14.
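
The per-state tolerance control mentioned above maps onto CVODES like this (SUNDIALS 2.x serial N_Vector API; error checks omitted; values and function name illustrative):

#include <cvodes/cvodes.h>
#include <nvector/nvector_serial.h>

void set_state_tolerances(void* cvode_mem, long int num_states) {
  N_Vector abs_tol = N_VNew_Serial(num_states);
  for (long int i = 0; i < num_states; ++i)
    NV_Ith_S(abs_tol, i) = 1e-10;  // default for all states
  NV_Ith_S(abs_tol, 0) = 1e-12;    // tighter for a state entering the likelihood
  CVodeSVtolerances(cvode_mem, 1e-10, abs_tol);  // rel tol plus per-state abs tols
  N_VDestroy_Serial(abs_tol);      // CVODES keeps an internal copy
}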

@betanalpha
Contributor

I was very clear in my original email about what we first need to see:
an ensemble of models (I can provide explicit models if this is
a problem) and a scan of tolerances to see the ultimate effect
on inferences across those models.


@wds15
Contributor Author

wds15 commented Apr 4, 2016

Hi!

Not sure where to put this, but I don't want to spam the dev list (maybe I should?). So here are the results for a hierarchical PK/PD model. To be specific, I simulated 50 patients from a 2-cmt model in an oral dosing situation with typical parameter values I usually see. Two doses (t=0 and t=24) are given, and I compare against an analytic solution, which I placed on the very left of the plots with abs_tol=1E-16 for machine precision. Now, I had to use the CVODES implementation, as CVODE+coupled froze Stan for 3 days, while CVODES returned the results in a few hours, depending on the abs+rel tolerance set. I used 1E-9, 1E-6, and 1E-3 abs+rel tolerances. Have a look yourself, but I can't find any threshold. I used 4 chains, and per chain I did 500 warmups and 500 iterations. The true values are

   theta_true <-  c(0.326634259978281, 0, 2.30258509299405, 1.6094379124341, 3.68887945411394)
  omega_true <-  c(0.3, 0.4)
  sigma_y_true <- 0.2

metrics.pdf

@wds15
Contributor Author

wds15 commented Apr 5, 2016

So all tests are now brought in line with at least a 1E-6 abs+rel tolerance, and I added an

integrate_ode_cvodes

function which takes a solver argument. Of course, we can quickly adapt to whatever naming convention we end up with (probably the _stiff naming).

@bob-carpenter, @betanalpha: a code review from one of you would be great. At the moment, high-level feedback would be very useful for me to shape this up for a pull request.

There are still cpplint errors... will attack them soon.

@bob-carpenter
Contributor

I can do a basic code review tomorrow. Michael will be
better at picking up math issues, but I can do the basics.


@wds15 wds15 closed this as completed May 10, 2016
@syclik syclik modified the milestone: v2.11.0 Jul 27, 2016