
Using Improper Priors #812

Closed
bhgomes opened this issue Jun 14, 2019 · 3 comments

bhgomes commented Jun 14, 2019

I am wondering how Turing deals, or plans to deal, with improper priors (especially non-normalizable priors like a uniform over an unbounded domain) in its estimation of the posterior distribution. I haven't seen anything in the documentation describing usage or best practices. In my attempts, I have not been able to get good estimates naively using improper priors, e.g. bad convergence, vanishing derivatives, etc.


yebai commented Jun 20, 2019

@bhgomes Thanks for the question. Please play with the flat distribution (#315 (comment)) and see whether it fits the need here. Feel free to post your findings here.


bhgomes commented Jun 24, 2019

I've looked at Flat (and FlatPos), and my issue with them is that they sample from a Uniform(0, 1) distribution while claiming to have [-Inf, Inf] coverage, which is not correct. Since they are not normalizable, no quantile function exists. These kinds of "priors" should be interpreted as "don't sample from the prior, just sample from the likelihood". I have yet to study the internals of the Turing samplers, so I don't know how this would be done in practice, but at least in the case of Flat/FlatPos the situation is clear: don't sample from the prior, but use the support of these "distributions" as constraints on your integration.


xukai92 commented Jul 17, 2019

You can override the rand for Flat to any coverage you want. This would help for particle-based samplers, but for HMC it doesn't matter, as the rand function is hardly used (I say hardly because the only place you may use it is for initialisation; but even for that, Turing.jl uses the Stan way of initialisation).

So, stepping back to the point of improper priors: Flat is purely an example to show how to do this yourself. For HMC or MH, this kind of implementation is enough, as they only work on the unnormalised posterior (controlled by logpdf) and never on sampling (rand).
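To illustrate the point: a minimal sketch of a Flat-style improper prior in the spirit of the #315 comment (the type name `MyFlat` and the specific method bodies here are assumptions for illustration, not the actual Turing.jl definition). Only `logpdf` matters for HMC/MH, since they work on the unnormalised posterior; `rand` is just a placeholder for initialisation.

```julia
using Distributions, Random

# A "flat" improper prior: logpdf is constant (log 1 = 0 everywhere),
# so samplers that only need the unnormalised posterior (HMC, MH)
# work unchanged even though the density is non-normalisable.
struct MyFlat <: ContinuousUnivariateDistribution end

Distributions.logpdf(d::MyFlat, x::Real) = zero(x)
Distributions.minimum(d::MyFlat) = -Inf
Distributions.maximum(d::MyFlat) = +Inf

# No true sampler exists for a non-normalisable density; this placeholder
# draw is only ever used for initialisation, as noted above.
Distributions.rand(rng::Random.AbstractRNG, d::MyFlat) = rand(rng)
```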

So what other improper priors do you have in mind? Maybe we can look into the implementation concretely.
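For instance, the same pattern covers other improper priors; here is a hedged sketch (the type name `ScalePrior` is hypothetical) of the scale/Jeffreys prior p(σ) ∝ 1/σ on (0, ∞), again driven entirely by `logpdf`:

```julia
using Distributions, Random

# Improper scale prior p(σ) ∝ 1/σ on (0, ∞): logpdf(σ) = -log(σ),
# with -Inf outside the support so constrained values are rejected.
struct ScalePrior <: ContinuousUnivariateDistribution end

Distributions.logpdf(d::ScalePrior, x::Real) = x > 0 ? -log(x) : -Inf
Distributions.minimum(d::ScalePrior) = 0.0
Distributions.maximum(d::ScalePrior) = Inf

# Placeholder positive draw for initialisation only; not a sample from
# the (non-normalisable) improper density.
Distributions.rand(rng::Random.AbstractRNG, d::ScalePrior) = rand(rng) + 0.5
```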

@yebai yebai modified the milestones: 0.7, 0.8 Oct 3, 2019
@yebai yebai closed this as completed Dec 16, 2021