
Update script.jl #79

Merged: 3 commits merged into TuringLang:master from mattiasvillani:patch-2 on Sep 17, 2023

Conversation

@mattiasvillani (Contributor) commented on Sep 4, 2023

Fixed some typos in the model description (the maths) and one in the code (the variance was used instead of the standard deviation in the measurement model). Added PGAS sampling at the end to show that it solves the degeneracy problem, which should close issue #77.

Please check my text 'To use this sampler we need to define the transition and observation densities as well as the initial distribution in the following way:', as I am not sure whether this is needed only for PGAS (it was not needed for PG).
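
For context, a minimal sketch of the variance-versus-standard-deviation issue (the observation function and parameter names below are illustrative, not the ones in script.jl): Distributions.jl's Normal takes the standard deviation as its second argument, so a measurement model parameterised by a variance has to pass its square root.

using Distributions

g(x) = x^2 / 20            # made-up observation function
σ² = 0.5                   # observation variance
xₜ = 1.0                   # current state

obs_wrong = Normal(g(xₜ), σ²)        # bug: Normal expects a standard deviation
obs_right = Normal(g(xₜ), sqrt(σ²))  # fix: pass the standard deviation √σ²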

Fixed some typos in the model and one in the code (variance was used instead of standard deviation in measurement model).
Added PGAS sampling at the end to show that it solves the degeneracy problem, which should close the issue TuringLang#77
@codecov bot commented on Sep 4, 2023

Codecov Report

Patch and project coverage have no change.

Comparison is base (d785ce5) 96.31% compared to head (2f9b66d) 96.32%.

❗ Current head 2f9b66d differs from pull request most recent head fb2203b. Consider uploading reports for the commit fb2203b to get more accurate results

Additional details and impacted files
@@           Coverage Diff           @@
##           master      #79   +/-   ##
=======================================
  Coverage   96.31%   96.32%           
=======================================
  Files           6        7    +1     
  Lines         380      381    +1     
=======================================
+ Hits          366      367    +1     
  Misses         14       14           

see 1 file with indirect coverage changes

☔ View full report in Codecov by Sentry.

@@ -88,8 +86,8 @@ end

# Here we use the particle gibbs kernel without adaptive resampling.
model = NonLinearTimeSeries(θ₀)
pgas = AdvancedPS.PG(Nₚ, 1.0)
chains = sample(rng, model, pgas, Nₛ; progress=false);
Member

Minor comment, but why remove the rng argument? That makes the documentation reproducible (to some extent; rng implementations can change between Julia versions).

@mattiasvillani (Contributor, Author)

Sorry, I removed that while trying to get PGAS to work. I have edited the line so that it is back to using rng as the first argument, with progress=false as in the original example.
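
For reference, a minimal sketch of the reproducible call (it reuses NonLinearTimeSeries, θ₀, Nₚ and Nₛ from the example script, and the seed value is made up): passing an explicit rng as the first argument to sample pins the pseudo-random stream, up to changes in rng implementations across Julia versions.

using AbstractMCMC
using AdvancedPS
using Random

# Assumes NonLinearTimeSeries, θ₀, Nₚ and Nₛ are defined as in script.jl.
rng = Random.MersenneTwister(1234)      # illustrative seed
model = NonLinearTimeSeries(θ₀)
pg = AdvancedPS.PG(Nₚ, 1.0)             # particle Gibbs kernel, as in the diff above
chains = sample(rng, model, pg, Nₛ; progress=false);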

@FredericWantiez (Member)

The PGAS part looks correct to me; we only need to define the densities for PGAS since the sampler targets models with a special structure (SSMs). PG uses Libtask in the background to allow for somewhat arbitrary models (through program tracing).

added back the rng argument when sampling with PG.
@YSanchezAraujo

> The PGAS part looks correct to me; we only need to define the densities for PGAS since the sampler targets models with a special structure (SSMs). PG uses Libtask in the background to allow for somewhat arbitrary models (through program tracing).

Does that mean that more complicated/arbitrary models aren't compatible with PGAS? Is a workaround possible? Happy to look into it if pointed in the right direction.

@mattiasvillani (Contributor, Author)

My understanding is that essentially arbitrary models are possible (I am currently working with a non-standard one, and it works), but you need to define the transition and observation densities as well as the initial distribution, as done in my PR.

@YSanchezAraujo

> My understanding is that essentially arbitrary models are possible (I am currently working with a non-standard one, and it works), but you need to define the transition and observation densities as well as the initial distribution, as done in my PR.

I think this is where some of the confusion comes in. The way it is shown here, those functions define distributions that the sampler uses, but it is not specified how they are used. In the old example they are used explicitly, together with rng, to generate samples. That made it easy to reason about what to do when the transition (for example) involves sampling from multiple distributions, or consists of dependent steps, but it is less clear what to do in this case.

@mattiasvillani (Contributor, Author) commented on Sep 5, 2023

Note that the old example (using PG) is still there; I just added PGAS as a follow-up (the old example ended rather disappointingly with degeneracy, and an issue asked how to use PGAS here).
I think it would be nice to explain how these distributions are used (it is still cloudy to me, and I am trying to get deeper into the code), but I note that essentially the same (lack of) explanation is in the other PGAS example: https://turinglang.org/AdvancedPS.jl/dev/examples/gaussian-ssm/.

@FredericWantiez (Member)

The documentation is rather sparse, I agree; thanks for the feedback, I'll spend some time improving it.

For the model part: we've only implemented PGAS for state-space models, since the ancestor sampling scheme is tractable for these (from the original article, eq. 17):

$$\widetilde{w}^{i}_{t-1 \mid T} \propto w^{i}_{t-1}\, f_{\theta}\bigl(x'_{t} \mid x^{i}_{t-1}\bigr),$$

where $f_{\theta}$ corresponds to the transition function in the code.
The initialization and observation functions are used to update the particles and compute the importance weights.
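
To make the interface concrete, here is a rough sketch of a state-space model written for PGAS, modelled on the gaussian-ssm example linked above; the method names (AdvancedPS.initialization, AdvancedPS.transition, AdvancedPS.observation, AdvancedPS.isdone) and the PGAS constructor are taken from that example and are assumptions here, not a description of this PR's code.

using AbstractMCMC
using AdvancedPS
using Distributions
using Random

# Rough sketch of a linear-Gaussian SSM in the style of the gaussian-ssm docs example.
mutable struct ToySSM <: AdvancedPS.AbstractStateSpaceModel
    X::Vector{Float64}   # particle trajectory, filled in during sampling
    a::Float64           # transition coefficient
    q::Float64           # transition noise standard deviation
    r::Float64           # observation noise standard deviation
    y::Vector{Float64}   # observations
end
ToySSM(a, q, r, y) = ToySSM(Float64[], a, q, r, y)

# Initial distribution, transition density and log observation density:
AdvancedPS.initialization(m::ToySSM) = Normal(0.0, m.q)
AdvancedPS.transition(m::ToySSM, state, step) = Normal(m.a * state, m.q)
AdvancedPS.observation(m::ToySSM, state, step) = logpdf(Normal(state, m.r), m.y[step])
AdvancedPS.isdone(m::ToySSM, step) = step > length(m.y)

# With these defined, PGAS is called much like PG:
rng = Random.MersenneTwister(1)
y = randn(rng, 50)                      # fake observations, just for the sketch
model = ToySSM(0.9, 0.5, 1.0, y)
pgas = AdvancedPS.PGAS(20)              # 20 particles (illustrative)
chains = sample(rng, model, pgas, 100; progress=false);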

For PG, we don't need to impose any particular structure as long as we can store/regenerate the trajectories of the reference particle. That's why you can use PG for arbitrary models; you just need to emit the logpdf from the model, something like this in the code:

function (model::NonStandardModel)(rng::Random.AbstractRNG)
    # run the model forward, accumulating the log-density along the way
    logpdf = simulation_steps(rng, ...)
    # hand the log-density back to the particle Gibbs sampler via Libtask
    Libtask.produce(logpdf)
end

Hopefully that helps. If you have an example that's causing confusion, happy to look into it.

@mattiasvillani (Contributor, Author)

Oh, I see. The question was about models outside the state-space class, e.g. non-Markovian models. Thanks for the clarification. That restriction of PGAS should be mentioned in the docs where PGAS is introduced. Let me know if I should change anything here.

@FredericWantiez (Member) commented on Sep 13, 2023

A small note: GitHub does not deploy previews from forks. You can also work directly on the main repo, and GitHub will build a preview version of the PR.

@FredericWantiez (Member)

A very small thing, but the rest looks fine to me. Thanks again for looking into this!

Co-authored-by: FredericWantiez <frederic.wantiez@gmail.com>
@yebai merged commit 72a1e55 into TuringLang:master on Sep 17, 2023
12 of 17 checks passed
@mattiasvillani deleted the patch-2 branch on September 17, 2023 at 20:00