
Conversation

devmotion
Member

This removes Turing-specific parts from the SMC and PG algorithms. The (quite limited) tests copied from Turing pass with a callable model, but there may well be some problems with how these rudimentary models and algorithms handle samples and how they are reset.

Some things I noticed:

  • One of the main problems/questions is how to return samples/trajectories and whether/how to provide a special model structure (so far one has to implement a custom model).
  • As mentioned before, currently models (also in Turing) only use the global RNG. This is problematic for parallel sampling, or whenever someone passes a custom RNG to `sample`, so it might be good to instead implement models as `function model(rng::AbstractRNG) ... end` and to make the RNG explicit in `Trace`. I assume that in general it would be easier to replay trajectories if particles/traces saved the unmutated RNG with which the model was called. (A sketch of such a model follows this list.)
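
A minimal sketch of what such an RNG-explicit callable model could look like. This is an illustration, not part of the PR: `ExampleModel`, its field, and the final two usage lines are hypothetical, and in the actual tests the log-density would be reported via Libtask's `produce` rather than returned.

```julia
using Distributions
using Random

# Hypothetical model type, for illustration only.
struct ExampleModel
    y::Float64  # a single observation
end

# The model receives the RNG explicitly instead of using the global one,
# so a particle/trace could store this RNG and replay the trajectory.
function (m::ExampleModel)(rng::Random.AbstractRNG)
    # Latent variable drawn from the caller-provided RNG.
    a = rand(rng, Normal(4, 5))
    # Score the observation; in the PR's tests this value would be handed to
    # Libtask's `produce` instead of being returned.
    return logpdf(Normal(a, 2), m.y)
end

model = ExampleModel(3.0)
model(MersenneTwister(42))  # same RNG state in, same trajectory out
```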

@coveralls

coveralls commented Dec 9, 2020

Pull Request Test Coverage Report for Build 719106798

Warning: This coverage report may be inaccurate.

This pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.

Details

  • 36 of 37 (97.3%) changed or added relevant lines in 3 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage increased (+30.3%) to 56.198%

| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| src/smc.jl | 30 | 31 | 96.77% |

Totals Coverage Status:

  • Change from base Build 402690246: 30.3%
  • Covered Lines: 204
  • Relevant Lines: 363

💛 - Coveralls

@codecov

codecov bot commented Dec 9, 2020

Codecov Report

Merging #20 (87b1d2f) into master (487a943) will increase coverage by 30.25%.
The diff coverage is 97.36%.

Impacted file tree graph

@@             Coverage Diff             @@
##           master      #20       +/-   ##
===========================================
+ Coverage   25.94%   56.19%   +30.25%     
===========================================
  Files           5        5               
  Lines         424      363       -61     
===========================================
+ Hits          110      204       +94     
+ Misses        314      159      -155     
| Impacted Files | Coverage Δ |
|---|---|
| src/smc.jl | 97.22% <96.87%> (+97.22%) ⬆️ |
| src/container.jl | 91.80% <100.00%> (+29.07%) ⬆️ |
| src/model.jl | 100.00% <100.00%> (ø) |
| src/resampling.jl | 96.55% <0.00%> (+21.08%) ⬆️ |

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 487a943...87b1d2f. Read the comment docs.

@cpfiffer
Member

cpfiffer commented Dec 9, 2020

> * One of the main problems/questions is how to return samples/trajectories and whether/how to provide a special model structure (so far one has to implement a custom model).

I don't have a good enough sense at the moment of what a model type should look like. It doesn't seem to be needed right now -- adding typing wouldn't help too much. Maybe down the road, if AbstractPPL becomes a thing, we can link into that.

> * As mentioned before, currently models (also in Turing) only use the global RNG. This is problematic for parallel sampling, or whenever someone passes a custom RNG to `sample`, so it might be good to instead implement models as `function model(rng::AbstractRNG) ... end` and to make the RNG explicit in `Trace`. I assume that in general it would be easier to replay trajectories if particles/traces saved the unmutated RNG with which the model was called.

Yeah, that sounds like a bug to me. Including the RNG in the Trace sounds good to me, though I wonder how big they are? Is it too onerous to include them in every trace?
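
For a rough sense of scale, one can measure the in-memory size of an RNG directly; a small sketch (exact numbers depend on the Julia version and RNG type):

```julia
using Random

rng = MersenneTwister(42)
# Bytes occupied by the full Mersenne Twister state.
@show Base.summarysize(rng)
# Storing a copy per trace costs roughly this much additional memory.
@show Base.summarysize(copy(rng))
```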

@cpfiffer (Member) left a comment

Looks perfectly fine to me. I had some discussion items above, but I don't think they're core to this PR and can probably be ignored for the moment.

@yebai (Member) left a comment

Thanks @devmotion, this looks good overall. I left a few minor comments below. The tests look reasonable to me, but I agree we can add a few more tests in future PRs.

@test getspace(s) === (:x, :y)
function (m::NormalModel)()
# First latent variable.
m.a = a = rand(Normal(4, 5))
@yebai (Member) commented Dec 11, 2020

Maybe consider passing `rng` into the model? E.g., `function (m::NormalModel)(; rng = Random.GLOBAL_RNG) ... end`

@devmotion (Member Author)

I tried this in my local branch when I realized that currently RNGs are not handled correctly by Trace. However, I reverted the change when I realized it would require significant changes to Trace.

@devmotion (Member Author)

See also #21 (comment).

Member

Happy to address this in a separate PR (#23)

test/smc.jl Outdated
@test s.resampler == ResampleWithESSThreshold()
@test getspace(s) === (:x, :y)
# First observation.
produce(logpdf(Normal(a, 2), 3))
Member

Maybe consider defining a helper function `observe(dist, x) = produce(dist, x)` so that we don't need to explicitly export the `produce` API from Libtask?

@devmotion (Member Author)

Good idea!

@devmotion (Member Author)

Done.
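
For reference, one way such a helper could look, assuming Libtask's `produce` and forwarding the log-density to match the `produce(logpdf(...))` call in the test context above; the definition actually adopted in the PR may differ:

```julia
using Distributions
import Libtask

# Hypothetical helper: hide Libtask's `produce` behind an `observe` call.
observe(dist::Distribution, x) = Libtask.produce(logpdf(dist, x))

# The test line `produce(logpdf(Normal(a, 2), 3))` would then become:
# observe(Normal(a, 2), 3)
```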

@devmotion
Member Author

Tests fail since Libtask_jll is not compatible with Julia 1.6 yet (more information can be found in the discussion in TuringLang/Turing.jl#1554).

Maybe we should just run tests with Julia 1.5 until it is fixed?

devmotion and others added 2 commits April 5, 2021 13:46
Co-authored-by: Hong Ge <hg344@cam.ac.uk>

@yebai yebai merged commit 391c236 into master Apr 26, 2021
@yebai yebai deleted the smc_pg branch April 26, 2021 20:28
@yebai
Member

yebai commented Apr 26, 2021

Many thanks @devmotion!
