Summarizing network / learned summary statistics #298

Closed
cranmer opened this issue Aug 12, 2020 · 5 comments · Fixed by #299

cranmer commented Aug 12, 2020

From the API documentation, it seems the package is restricted to simulators that produce output x with a constant shape. I wondered about simulators whose output varies in size and shape. Currently, it seems this would be possible if one introduces a function that wraps the simulator to produce fixed-length summary statistics, but this was not immediately clear to me from the API documentation or the home page of the documentation.
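
For concreteness, here is a minimal sketch of the kind of wrapper I mean (all names here are illustrative, not part of sbi's API):

```python
import torch

def simulator(theta: torch.Tensor) -> torch.Tensor:
    # Toy simulator whose output length depends on the parameter,
    # i.e. x does not have a constant shape.
    n = int(10 + 10 * theta[0].abs())
    return theta[0] + torch.randn(n)

def summarize(raw: torch.Tensor) -> torch.Tensor:
    # Domain-motivated, fixed-length summary statistics.
    return torch.stack([raw.mean(), raw.std(), raw.min(), raw.max()])

def wrapped_simulator(theta: torch.Tensor) -> torch.Tensor:
    # Always returns a tensor of shape (4,), regardless of raw output size.
    return summarize(simulator(theta))
```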

I do see that the Hodgkin-Huxley example does exactly this with domain-motivated summary statistics (as opposed to learned summary statistics).

However, the software paper mentions:

> Moreover, if dimensionality reduction of the simulator output is desired, sbi can use a trainable summarizing network to extract relevant features from raw simulator output and spare the user manual feature engineering.

I can see how this would work with a pre-trained network that produces summary statistics, but my reading of that sentence, with the word "trainable", is that sbi can also train the summarizing network. I haven't found this in the API documentation yet. If it is possible, an example would be great. Either way, it would be nice to clarify this in the documentation and/or reword the sentence in the software paper.

dfm commented Aug 12, 2020

Linking to openjournals/joss-reviews#2505 for posterity

michaeldeistler commented Aug 13, 2020

Good point, thanks for raising this!

It is indeed possible to use a trainable network by passing an `embedding_net` to e.g. `posterior_nn`. The `embedding_net` can be any trainable `torch.nn.Module`. It is also possible to pass such a network for SNRE.
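
A minimal sketch of how this might look (keyword names and the inference interface have shifted between sbi versions, so treat this as illustrative; the layer sizes are arbitrary):

```python
import torch
import torch.nn as nn
from sbi.inference import SNPE
from sbi.utils import BoxUniform, posterior_nn

prior = BoxUniform(low=-2 * torch.ones(3), high=2 * torch.ones(3))

# Any trainable torch.nn.Module works as the embedding network;
# here: 100-dimensional raw simulator output -> 10 learned summaries.
embedding_net = nn.Sequential(
    nn.Linear(100, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Build a density estimator whose inputs are passed through the
# embedding net before reaching the normalizing flow.
density_estimator = posterior_nn(model="maf", embedding_net=embedding_net)
inference = SNPE(prior=prior, density_estimator=density_estimator)
```

During training, gradients flow through the density estimator into the embedding net, so the summary statistics are learned jointly with the flow.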

This argument got lost from the documentation page when we started using `**kwargs` at some point. We will fix this in #299. The PR will also add a bit more explanation of the `embedding_net`.

However, there are no pre-configured nets and no examples yet. Both of these things are on our hot desk and are also documented in #279.

cranmer commented Aug 17, 2020

👋 I see that #299 is merged, but I think the API documentation online is still the same. Once that API documentation is updated and exposes `embedding_net`, I will approve the JOSS submission. The example can come later and is not needed for the JOSS paper.

cranmer commented Aug 18, 2020

@michaeldeistler Just want to make sure you see the note above.

michaeldeistler commented Aug 18, 2020

Thanks, saw it. We'll make a new PyPI release soon (hopefully tomorrow) and then update the docs :)
