
Updated version #82

Merged
merged 24 commits into from
Aug 22, 2020

Conversation

arnaudmgh
Contributor

In collaboration with @Vaibhavdixit02.

  • Corrected an inversion of the parameters gamma and delta that produced wrong results
  • Added plots of retrodicted time courses from the posterior
  • Added 3 chains per problem, sampled once using a single thread with mapreduce and once using Threads
  • As the SDE example had a few problems, both numerical and conceptual, we replaced it with a DDE problem that runs fast and works.
  • Moved the DiffEqBayes section lower, since all functions of DifferentialEquations.jl and Turing.jl are now composable.
  • The problem using sensitivity analysis runs the slowest, probably because it is designed for larger problems and has a bigger overhead.
  • Added a small comment specifying that ~ is the tilde character and not \sim.
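The gamma/delta inversion in the first bullet refers to the standard Lotka-Volterra convention, in which γ is the predator death rate and δ scales predator growth from predation. A minimal sketch of the corrected convention (the function and values here are illustrative, not the notebook's exact code):

```julia
# Corrected Lotka-Volterra parameter convention (illustrative sketch).
# γ is the predator death rate; δ scales predator growth from predation.
# Swapping γ and δ produces wrong dynamics, which is the bug fixed here.
function lotka_volterra(du, u, p, t)
    x, y = u              # x = prey, y = predator
    α, β, γ, δ = p
    du[1] = α*x - β*x*y   # prey: growth minus predation
    du[2] = δ*x*y - γ*y   # predator: growth from predation minus death
    return nothing
end

# With p = (α, β, γ, δ) = (1.5, 1.0, 3.0, 1.0) and u = [1.0, 1.0],
# this gives du = [0.5, -2.0].
```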

@review-notebook-app

Check out this pull request on ReviewNB

Review Jupyter notebook visual diffs & provide feedback on notebooks.


Powered by ReviewNB

@review-notebook-app

review-notebook-app bot commented Aug 9, 2020

View / edit / reply to this conversation on ReviewNB

Vaibhavdixit02 commented on 2020-08-09T05:03:07Z
----------------------------------------------------------------

It would be better to delete this cell


arnaudmgh commented on 2020-08-09T11:43:02Z
----------------------------------------------------------------

I'll do that last, because I am not sure how well the notebook will work if I don't have it at all

Vaibhavdixit02 commented on 2020-08-09T11:53:23Z
----------------------------------------------------------------

That's okay. Just delete the cell after running it, and then save and push, I think that should be fine?

Though not very sure, notebooks get broken very fast. Do you think you could run the conversion to markdown as well? https://github.com/TuringLang/TuringTutorials/blob/master/render-example.bash

arnaudmgh commented on 2020-08-09T15:11:35Z
----------------------------------------------------------------

OK, I've done that on my fork - hopefully it shows up on the pull request?

@review-notebook-app

review-notebook-app bot commented Aug 9, 2020

View / edit / reply to this conversation on ReviewNB

Vaibhavdixit02 commented on 2020-08-09T05:03:08Z
----------------------------------------------------------------

Curious, why turn off the progress bars?


arnaudmgh commented on 2020-08-09T11:42:17Z
----------------------------------------------------------------

Oh, this was there before I edited. I think it's because it creates one line of output every time the progress bar updates, making the output very large.

Vaibhavdixit02 commented on 2020-08-09T15:01:25Z
----------------------------------------------------------------

Ah yes, okay

@Vaibhavdixit02
Contributor

@ChrisRackauckas any comments?


Contributor

Vaibhavdixit02 commented Aug 9, 2020

That's okay. Just delete the cell after running it, and then save and push, I think that should be fine?

Though not very sure, notebooks get broken very fast. Do you think you could run the conversion to markdown as well? https://github.com/TuringLang/TuringTutorials/blob/master/render-example.bash

cc: @cpfiffer

View entire conversation on ReviewNB

Contributor Author

Note that the first time I ran the latest version with Random.seed!(12), the first chain (out of 3) was bad, i.e. its inferences were way off the true parameters. I assume it's just bad luck of the state starting in a bad place in the parameter space; just recording here that it can happen.

@review-notebook-app

review-notebook-app bot commented Aug 9, 2020

View / edit / reply to this conversation on ReviewNB

devmotion commented on 2020-08-09T13:36:37Z
----------------------------------------------------------------

Maybe just add dx and dy as a comment instead of defining a variable that is never used?


arnaudmgh commented on 2020-08-09T16:19:30Z
----------------------------------------------------------------

Will do

@review-notebook-app

review-notebook-app bot commented Aug 9, 2020

View / edit / reply to this conversation on ReviewNB

devmotion commented on 2020-08-09T13:36:38Z
----------------------------------------------------------------

Maybe add a prob argument to the model with a template ODE problem and just use remake inside of the model definition? I would assume that this is (slightly?) more performant and improves type stability (IIRC there was an issue in the previous version of the notebook).
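This suggestion might look roughly as follows; the model name, priors, and variable names (fitlv, predicted) are an illustrative sketch based on the discussion, not the notebook's actual code:

```julia
# Illustrative sketch: take a template ODE problem as a model argument
# and rebuild it with the sampled parameters via remake.
using Turing, DifferentialEquations

@model function fitlv(data, prob)
    σ ~ InverseGamma(2, 3)
    α ~ truncated(Normal(1.5, 0.5), 0.5, 2.5)
    β ~ truncated(Normal(1.2, 0.5), 0, 2)
    γ ~ truncated(Normal(3.0, 0.5), 1, 4)
    δ ~ truncated(Normal(1.0, 0.5), 0, 2)

    # remake reuses the template problem and only swaps in the new
    # parameters, which avoids reconstructing the problem from scratch
    # and helps type stability.
    predicted = solve(remake(prob; p=[α, β, γ, δ]), Tsit5(); saveat=0.1)

    for i in 1:length(predicted)
        data[:, i] ~ MvNormal(predicted[i], σ)
    end
end
```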


arnaudmgh commented on 2020-08-09T16:43:22Z
----------------------------------------------------------------

OK, done

@review-notebook-app

review-notebook-app bot commented Aug 9, 2020

View / edit / reply to this conversation on ReviewNB

devmotion commented on 2020-08-09T13:36:39Z
----------------------------------------------------------------

IMO it would be better and more instructive to retrieve the parameters by names. Users shouldn't work with chain.value directly.


arnaudmgh commented on 2020-08-09T16:37:37Z
----------------------------------------------------------------

What syntax do you propose?

devmotion commented on 2020-08-09T20:13:07Z
----------------------------------------------------------------

Similar to ODESolutions, chains can be converted to Arrays by calling Array(chain). I guess here you might want to use Array(chain[[:alpha, :beta, :gamma, :delta]], append_chains=false)

(using the correct unicode characters; strings should work as well). Then you get a threedimensional array of dimensions samples x parameters x chains that you can loop over.

arnaudmgh commented on 2020-08-10T03:22:43Z
----------------------------------------------------------------

A problem I had with this notation is that

Array(chain[[:δ, :γ, :α, :β]], append_chains=false) == Array(chain[[:α, :β, :γ, :δ]], append_chains=false)

is true!

It seems to return the variables in alphabetical order no matter which order we specify.

So if the parameters are not assigned in alphabetical order in the model function, the results will be wrong (which is the case in the current tutorial, by the way).

This is not the case when using integers (i.e. integer indexing works as expected), which is why I used them.


devmotion commented on 2020-08-10T06:14:07Z
----------------------------------------------------------------

I see, there's apparently a bug in AxisArrays (https://github.com/JuliaArrays/AxisArrays.jl/issues/182) which causes this behaviour. I still think that it would be better to use strings or symbols, but then we should also add a comment (just in the code) about this behaviour. IMO it's impossible to understand the code if using indices, and it will also break (silently) if you use a different inference method or Turing changes the internal variables or statistics for the NUTS sampler.

BTW I guess we should also just write size(chain, 1) instead of size(chain.value, 1) (or alternatively just sample(chain, 300) to obtain subsets of the chain - note that, same as the current implementation, this will by default use sampling with replacement).

arnaudmgh commented on 2020-08-13T05:03:02Z
----------------------------------------------------------------

Thanks for the comments David, I incorporated these modifications in commit 663ca56. I used Array(...) and size(chain, 1), but I did not use sample(chain, 300) in the "for" statement because it's not iterable.
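For reference, the approach converged on here can be sketched as below (the loop body and names are illustrative); the comment records the AxisArrays ordering behaviour discussed above:

```julia
# Illustrative sketch of extracting posterior samples by name.
# NOTE: because of JuliaArrays/AxisArrays.jl#182, the columns are
# returned in the chain's internal (alphabetical) order regardless of
# the order of the symbols in the indexing expression below.
posterior = Array(chain[[:α, :β, :γ, :δ]], append_chains=false)
# posterior is a vector with one (samples × parameters) matrix per chain

for samples in posterior
    for k in 1:300                       # plot a subset of the samples
        α, β, γ, δ = samples[k, :]
        # ... solve with these parameters and plot the retrodiction ...
    end
end
```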

@review-notebook-app

View / edit / reply to this conversation on ReviewNB

devmotion commented on 2020-08-09T13:36:39Z
----------------------------------------------------------------

IMO the code is a bit difficult to read, in particular the definition of the priors. Maybe split it over multiple lines?


@review-notebook-app

review-notebook-app bot commented Aug 9, 2020

View / edit / reply to this conversation on ReviewNB

devmotion commented on 2020-08-09T13:36:40Z
----------------------------------------------------------------

Again maybe use remake.


@review-notebook-app

review-notebook-app bot commented Aug 9, 2020

View / edit / reply to this conversation on ReviewNB

devmotion commented on 2020-08-09T13:36:41Z
----------------------------------------------------------------

Here as well.


@review-notebook-app

review-notebook-app bot commented Aug 9, 2020

View / edit / reply to this conversation on ReviewNB

devmotion commented on 2020-08-09T13:36:41Z
----------------------------------------------------------------

It's a bit strange to use _h as a function name. One could also define the history function for single indices to avoid creating arrays (since we only need the value for x).


Vaibhavdixit02 commented on 2020-08-13T10:27:08Z
----------------------------------------------------------------

Looks like the below definition doesn't work? I suspect the history function is expected to return a vector matching the dependent variable's length?

function delay_lotka_volterra(du,u,h,p,t)
    x, y = u
    α,β,γ,δ = p
    du[1] = α*h(p,t-1) - β*x*y
    du[2] = -γ*y + δ*x*y
end

p = (1.5,1.0,3.0,1.0); u0 = [1.0;1.0]
tspan = (0.0,10.0)
_h(p,t) = 1.0

Vaibhavdixit02 commented on 2020-08-13T10:27:31Z
----------------------------------------------------------------

This works

function delay_lotka_volterra(du,u,h,p,t)
    x, y = u
    α,β,γ,δ = p
    du[1] = α*h(p,t-1)[1] - β*x*y
    du[2] = -γ*y + δ*x*y
end

p = (1.5,1.0,3.0,1.0); u0 = [1.0;1.0]
tspan = (0.0,10.0)
_h(p,t) = 1.0

devmotion commented on 2020-08-13T17:05:20Z
----------------------------------------------------------------

Looks like the below definition doesn't work? I suspect the history function is expected to return a vector matching the dependent variable's length?

Yes, and hence the version in the last comment shouldn't be used either (the initial history function should return a vector as well). During the numerical integration process, the solver will call delay_lotka_volterra with a history function h that is not the initial history function but a function that interpolates the current solution at all previous time points.

Hence one should use

function delay_lotka_volterra(du, u, h, p, t)
   x, y = u
   α, β, γ, δ = p
   du[1] = α * h(p, t-1)[1] - β * x * y
   du[2] = -γ * y + δ * x * y
   return
end

p = (1.5,1.0,3.0,1.0)
u0 = [1.0; 1.0]
tspan = (0.0,10.0)
h(p, t) = ones(2)
prob1 = DDEProblem(delay_lotka_volterra, u0, h, tspan, p)

or to save allocations (as suggested above)

function delay_lotka_volterra(du, u, h, p, t)
   x, y = u
   α, β, γ, δ = p
   du[1] = α * h(p, t-1; idxs=1) - β * x * y
   du[2] = -γ * y + δ * x * y
   return
end

p = (1.5,1.0,3.0,1.0)
u0 = [1.0; 1.0]
tspan = (0.0,10.0)
h(p, t; idxs::Int) = 1.0
prob1 = DDEProblem(delay_lotka_volterra, u0, h, tspan, p)

@review-notebook-app

review-notebook-app bot commented Aug 9, 2020

View / edit / reply to this conversation on ReviewNB

devmotion commented on 2020-08-09T13:36:42Z
----------------------------------------------------------------

Some comments

  • Again one could use remake and an additional prob argument
  • The type parameter T is neither used nor needed, and hence the argument ::Type{T} and the where clause can be removed.
  • The code that is currently commented out, data[:,i] ~ ..., should be used instead of the current implementation. It will dispatch to a more efficient implementation in DistributionsAD and avoids the creation of an array (BTW sigma[1] seems wrong anyway since it's a scalar parameter).
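The suggested likelihood statement can be sketched as follows (σ and predicted are placeholder names taken from context, not necessarily the notebook's):

```julia
# Illustrative sketch: observe whole columns with MvNormal instead of
# looping over scalar observations; this dispatches to an efficient
# implementation in DistributionsAD. σ is a scalar, so no sigma[1].
for i in 1:size(data, 2)
    data[:, i] ~ MvNormal(predicted[i], σ)
end
```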

@review-notebook-app

View / edit / reply to this conversation on ReviewNB

devmotion commented on 2020-08-09T13:36:43Z
----------------------------------------------------------------

This is not correct, it seems you don't use multithreading in the code below.


@review-notebook-app

review-notebook-app bot commented Aug 9, 2020

View / edit / reply to this conversation on ReviewNB

devmotion commented on 2020-08-09T13:36:43Z
----------------------------------------------------------------

Again it would be better to not use chain.value directly.


@review-notebook-app

review-notebook-app bot commented Aug 9, 2020

View / edit / reply to this conversation on ReviewNB

ChrisRackauckas commented on 2020-08-09T14:48:03Z
----------------------------------------------------------------

Should we just remove this section?


Vaibhavdixit02 commented on 2020-08-09T15:05:11Z
----------------------------------------------------------------

Not a 100% sure, it might be good to get the benchmarks updated separately and remove it from here.

arnaudmgh commented on 2020-08-09T16:04:30Z
----------------------------------------------------------------

I kind of agree that it will flow better without this section here; for an intro, it's pretty awesome to see that you can just use DiffEq problems and solvers seamlessly in the Turing model. DiffEqBayes is a bit of a distraction from the aim, which is learning Turing with differential equation models. (Am I correct in the text that what DiffEqBayes adds is that it can interface with Stan.jl and other packages?) Perhaps a quick reference to the package would be enough?

In the same vein, should we maybe reorder as

  • ODE
  • DDE
  • Performance improvement with sensitivity-based algorithms?

@review-notebook-app

review-notebook-app bot commented Aug 14, 2020

View / edit / reply to this conversation on ReviewNB

devmotion commented on 2020-08-14T10:39:52Z
----------------------------------------------------------------

Should probably be removed?


@review-notebook-app

review-notebook-app bot commented Aug 14, 2020

View / edit / reply to this conversation on ReviewNB

devmotion commented on 2020-08-14T10:39:52Z
----------------------------------------------------------------

Same here, prob1 should be an argument of the model or made constant.


@review-notebook-app

review-notebook-app bot commented Aug 14, 2020

View / edit / reply to this conversation on ReviewNB

devmotion commented on 2020-08-14T10:39:53Z
----------------------------------------------------------------

Maybe be consistent and use the same way for defining the problem as above?


@review-notebook-app

review-notebook-app bot commented Aug 14, 2020

View / edit / reply to this conversation on ReviewNB

devmotion commented on 2020-08-14T10:39:54Z
----------------------------------------------------------------

Same here, maybe be consistent?


@review-notebook-app

review-notebook-app bot commented Aug 14, 2020

View / edit / reply to this conversation on ReviewNB

devmotion commented on 2020-08-14T10:39:55Z
----------------------------------------------------------------

Same here?


@devmotion
Member

Can we maybe change the logging level for this notebook to avoid all the warnings from AdvancedHMC while tuning the step size?

Contributor

I couldn't get it to work from the render-example.bash script's nbconvert call. Can you tell me what additional arguments I should pass to

env/bin/jupyter-nbconvert "$filename" --to notebook --inplace --execute --ExecutePreprocessor.timeout=-1 --ExecutePreprocessor.startup_timeout=120

View entire conversation on ReviewNB

Member

Did you run the whole script? I guess there could be issues with jupyter-nbconvert (but then it would be strange that no other tutorial had to include these specific calls of Pkg.activate), but I'm pretty sure that the default behaviour of IJulia is to activate projects in the current directory if you open a notebook.


View entire conversation on ReviewNB

Contributor

I ran the required parts; I had to modify the script to suit my system, so it isn't identical. The above line had a hardcoded kernel name, which I removed so I could run it. I am not sure about the interaction between the Julia environment and the Jupyter notebook kernel, or whether that can be ensured while running this. If you have any suggestions I can include them. Maybe we should ask Cameron?


View entire conversation on ReviewNB

Member

Maybe the problem is that you use a custom kernel that doesn't include the flag --project=@.? The default kernel that is installed by IJulia includes this flag (https://github.com/JuliaLang/IJulia.jl/pull/820). And yes, I guess Cameron (who probably usually renders the notebooks) might have a slightly different setup on his computer. I guess ideally one should use CI for rendering and rerunning the tutorials to ensure reproducibility.
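If a custom kernel missing the flag is indeed the culprit, one way to install a kernel that includes it (a sketch; the kernel name is arbitrary) is via IJulia's installkernel, which forwards extra command-line flags to julia:

```julia
# Sketch: install a Jupyter kernel that passes --project=@. so the
# project in the notebook's directory is activated automatically,
# matching IJulia's default kernel behaviour.
using IJulia
installkernel("Julia project", "--project=@.")
```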


View entire conversation on ReviewNB

@cpfiffer
Member

Yeah, these should ultimately be run by CI, but I haven't gotten around to setting it up yet. For the moment, I will rerun the markdown file on my machine.

I think the only remaining things to address here are @devmotion's comments above.

@Vaibhavdixit02
Contributor

Thanks for addressing the review @arnaudmgh. cc: @cpfiffer

@review-notebook-app

review-notebook-app bot commented Aug 16, 2020

View / edit / reply to this conversation on ReviewNB

cpfiffer commented on 2020-08-16T16:48:52Z
----------------------------------------------------------------

This should be progress = false -- we don't want to render the progress meter for tutorials.

Additionally, it would be nice if we changed the logging level here to hide all the AdvancedHMC warnings. Same for the code under the "Direct Handling of Bayesian Estimation with Turing" header.

You can do this by defining a new logger:

using Logging

logger = SimpleLogger(stdout, Logging.Error)

with_logger(logger) do
    chain2 = sample(model2, NUTS(.45), MCMCThreads(), 5000, 3)
end



arnaudmgh commented on 2020-08-20T04:38:03Z
----------------------------------------------------------------

Agreed - just to be sure, do we have to put that on every call to sample()? (it's a bit verbose, but OK)

devmotion commented on 2020-08-20T06:54:04Z
----------------------------------------------------------------

You could also use global_logger(logger) (https://docs.julialang.org/en/v1/stdlib/Logging/#Logging.global_logger), I guess.

cpfiffer commented on 2020-08-20T13:54:47Z
----------------------------------------------------------------

I like that better. Just stick this in the setup block at the top of the notebook:

using Logging

# Set a logger to catch AdvancedHMC warnings.
logger = SimpleLogger(stdout, Logging.Error)
global_logger(logger)

@Vaibhavdixit02
Contributor

Bump @arnaudmgh ?




@review-notebook-app

View / edit / reply to this conversation on ReviewNB

devmotion commented on 2020-08-21T15:19:51Z
----------------------------------------------------------------

I still think this cell should be removed. The standard IJulia kernel activates the project in the current directory automatically when opening a notebook, it's not part of any other notebooks, and export seems to work fine for Cameron.


@review-notebook-app

View / edit / reply to this conversation on ReviewNB

devmotion commented on 2020-08-21T15:19:52Z
----------------------------------------------------------------

Actually, probably it is a lot simpler (see https://github.com/JuliaLang/julia/pull/33807) and you can just write

# Do not show AdvancedHMC warnings.
using Logging
Logging.disable_logging(Logging.Warn)

@cpfiffer
Member

Okay, cool. Thanks! Anyone have anything else they want to add to this one?

Member

@devmotion devmotion left a comment


LGTM

@cpfiffer cpfiffer merged commit adf06d9 into TuringLang:master Aug 22, 2020
@Vaibhavdixit02
Contributor

@cpfiffer just FYI, the markdown conversion will have to be done again; the markdown in this PR is now outdated and wasn't updated.

@cpfiffer
Member

On it, thanks!

cameronraysmith pushed a commit to cameronraysmith/TuringTutorials that referenced this pull request Jan 6, 2021