
Parallel mcmc #648

Merged — 3 commits merged into main, Feb 16, 2022

Conversation

@janfb (Contributor) commented Feb 16, 2022

Not sure why we did not do this before, but it seems rather straightforward to parallelize slice_np:

  • for the serial version, we use one worker per chain
  • for the vectorized version, we give a batch of chains to each worker
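The two modes above could be sketched roughly as follows (illustrative only: `run_slice_np` and `sample_parallel` are dummy stand-ins, not sbi's actual code, and a thread pool is used here for brevity where a process-based backend such as joblib's would make sense for CPU-bound sampling):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor


def run_slice_np(initial_params: np.ndarray, num_samples: int) -> np.ndarray:
    """Dummy stand-in for slice sampling a batch of chains.

    Returns draws of shape (num_chains, num_samples, dim); the real sampler
    would run slice sampling from each row of `initial_params`.
    """
    num_chains, dim = initial_params.shape
    rng = np.random.default_rng(0)
    return rng.normal(size=(num_chains, num_samples, dim))


def sample_parallel(
    initial_params: np.ndarray, num_samples: int, num_workers: int
) -> np.ndarray:
    # Serial slice_np: batches of size 1, i.e. one chain per worker task.
    # Vectorized slice_np: all chains split into num_workers batches.
    batches = np.array_split(initial_params, num_workers)
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        results = list(
            pool.map(lambda batch: run_slice_np(batch, num_samples), batches)
        )
    # Reassemble the per-worker results into one array of all chains.
    return np.concatenate(results, axis=0)
```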

Similarly, we can parallelize the initialization with SIR, which can take quite long with a large number of chains and a slow potential function.
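A hedged sketch of what parallelized SIR (sampling importance resampling) initialization could look like; the toy Gaussian potential and all function names here are illustrative assumptions, not sbi's actual implementation:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor


def sir_init_one_chain(seed: int, num_candidates: int = 100) -> np.ndarray:
    """Draw candidates from a proposal, weight by the potential, resample one."""
    rng = np.random.default_rng(seed)
    candidates = rng.normal(size=(num_candidates, 2))      # proposal draws
    log_potential = -0.5 * (candidates**2).sum(axis=1)     # toy potential
    weights = np.exp(log_potential - log_potential.max())  # importance weights
    weights /= weights.sum()
    return candidates[rng.choice(num_candidates, p=weights)]


def sir_init(num_chains: int, num_workers: int) -> np.ndarray:
    # Each chain's init is independent, so the (potentially slow) potential
    # evaluations can run concurrently across workers.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        inits = list(pool.map(sir_init_one_chain, range(num_chains)))
    return np.stack(inits)
```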

This PR adds the num_workers argument to MCMCPosterior and adds a function that performs slice_np parallelized over workers.

We have another refactor of the slice samplers waiting in #607, but I think we can integrate that later.

Still todo:

  • add documentation for new functions
  • fix progress bars

@janfb added the enhancement (New feature or request) label Feb 16, 2022
@janfb self-assigned this Feb 16, 2022
@jan-matthis (Contributor) left a comment


Great! I've left a few comments after a quick first pass over it.

As for "why we did not do this before": the last time I was concerned with the efficiency of the slice sampler implementation was for the benchmark, in which all runs were limited to a single core. Vectorizing it improved speed up to 36-fold, which was more than enough. Having said that, the ability to additionally parallelize init and sampling across cores is a great addition!

Review threads (all resolved):
  • sbi/inference/posteriors/mcmc_posterior.py
  • sbi/samplers/mcmc/slice_numpy.py
@codecov-commenter commented

Codecov Report

Merging #648 (f9ecde9) into main (78cc11e) will increase coverage by 0.22%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##             main     #648      +/-   ##
==========================================
+ Coverage   74.81%   75.03%   +0.22%     
==========================================
  Files          75       74       -1     
  Lines        5667     5673       +6     
==========================================
+ Hits         4240     4257      +17     
+ Misses       1427     1416      -11     
Flag        Coverage Δ
unittests   75.03% <100.00%> (+0.22%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                                 Coverage Δ
sbi/inference/posteriors/mcmc_posterior.py      70.99% <100.00%> (-2.25%) ⬇️
sbi/samplers/__init__.py                       100.00% <100.00%> (ø)
sbi/samplers/mcmc/__init__.py                  100.00% <100.00%> (ø)
sbi/samplers/mcmc/slice_numpy.py                95.73% <100.00%> (+17.01%) ⬆️

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 78cc11e...f9ecde9.

@michaeldeistler (Contributor) left a comment


Thanks for creating this! I'm fine with merging this once JML's comments are addressed.

@jan-matthis (Contributor) commented

Cheers!

@janfb deleted the parallel-mcmc branch February 16, 2022, 14:45
Labels: enhancement (New feature or request)
Projects: none yet
Linked issues: none yet
4 participants