
Multiple observations from simulator, difference between sbi package and sbibm #6

Closed
gsujan opened this issue Feb 8, 2021 · 3 comments


@gsujan

gsujan commented Feb 8, 2021

Hi

As far as I can tell, this package is built using the sbi package (link).
The sbi library currently does not seem to support multiple observations, i.e. the simulator output should have a batch size of 1, so generating time-series data shouldn't be possible.

This is enforced in the function check_for_possibly_batched_x_shape in user_input_checks.

In the sbibm package, you have example code with the number of observations as an argument:
observation = task.get_observation(num_observation=1) # 10 per task

According to the sbi package, this shouldn't be possible. Did you use some workaround, or am I misinterpreting something?

@jan-matthis
Contributor

Hi @gsujan,

the benchmark is performed for one observation at a time -- to get an estimate of the variance in performance across different observations and across repeats.
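To make that concrete, here is a minimal sketch of the per-observation loop, assuming sbibm's task.get_observation(num_observation=...) interface and its convention of 10 observations per task; run_algorithm is a hypothetical stand-in for a full inference run, not part of either library:

```python
def run_benchmark(task, run_algorithm, num_observations=10):
    """Run one independent benchmark per observation.

    Sketch only: `task` is assumed to expose sbibm's
    `get_observation(num_observation=...)`; `run_algorithm` is a
    hypothetical callable that performs inference for one observation.
    """
    results = {}
    # sbibm numbers observations starting at 1 (num_observation=1..10).
    for num_observation in range(1, num_observations + 1):
        observation = task.get_observation(num_observation=num_observation)
        # Each run conditions on exactly one observation, so the spread
        # of `results` reflects variance across observations.
        results[num_observation] = run_algorithm(task, observation)
    return results
```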

Best, Jan-Matthis

@gsujan
Author

gsujan commented Feb 8, 2021

Thanks @jan-matthis for your quick reply.

Just out of curiosity, do you have an explanation for why the single-observation limitation exists in sbi? I am working on a problem where we want to use multiple real-world observations to get a refined posterior. The multi-round inference tutorial allows the posterior to be refined, but only for a single observation. Is there a variation of multi-round inference where different observations can be used?

@jan-matthis
Contributor

Sure!

It is certainly possible to extend sbi to a multiple-observation setting. Is your data i.i.d.? If so, you could, for example, easily adapt the code for NRE or NLE -- you would just need to change the potential function used during MCMC sampling. This is discussed, for example, in Hermans et al. (2020), section 5.3.2. Alternatively, one could use the implementation of (S)NPE with exchangeable neural networks (see, e.g., Chan et al. 2018). I will close this issue, but definitely feel free to open a new one over in the sbi repo to continue the discussion. All of the above would be great extensions, and I'd be glad to help out.
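To illustrate the potential-function change for i.i.d. data: since the joint likelihood factorizes, the MCMC potential just sums the learned log-likelihood over observations instead of evaluating it once. Below is a toy sketch; log_likelihood stands in for the trained NLE density estimator q(x | theta) (here a unit-variance Gaussian, for illustration only), and all function names are assumptions, not sbi API:

```python
import math

def log_likelihood(x, theta):
    # Placeholder for the trained density estimator's log q(x | theta);
    # here: unit-variance Gaussian with mean theta.
    return -0.5 * math.log(2 * math.pi) - 0.5 * (x - theta) ** 2

def log_prior(theta):
    # Standard-normal prior, for illustration only.
    return -0.5 * math.log(2 * math.pi) - 0.5 * theta ** 2

def potential(theta, observations):
    # Single observation:  log p(theta) + log q(x | theta)
    # i.i.d. observations: log p(theta) + sum_i log q(x_i | theta)
    return log_prior(theta) + sum(log_likelihood(x, theta) for x in observations)
```

Sampling with this potential (e.g. via any MCMC routine) then targets the posterior conditioned on all observations at once.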
