
Why are we sampling z from the standard normal distribution instead of the learned p(z)? #23

Closed
icbcbicc opened this issue Jan 22, 2021 · 2 comments


@icbcbicc

Hi rosinality,

Your code is clean and easy to read. Thank you for your effort.

I have one question: during the sampling process, why do we sample z from the standard normal distribution (with temperature)? Shouldn't we sample from the learned p(z)? Is it because p(z) depends on the data, so we cannot sample from it directly? (If I'm understanding the implementation correctly, p(z) has four components: three depend on both the data and the model, while the last depends only on the model.)

Thanks.

@rosinality (Owner)

The learned priors are used in the sampling step. Please refer to this:

`if self.split:`
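In other words, the standard-normal draw is only the `eps` of a reparameterization: the learned prior network outputs a mean and log standard deviation, and the sample is shifted and scaled by them, so z is effectively drawn from the learned Gaussian p(z). Below is a minimal NumPy sketch of that idea; the name `gaussian_sample` follows the pattern in the repository's model.py, but the prior parameters here are hypothetical constants rather than network outputs.

```python
import numpy as np

def gaussian_sample(eps, mean, log_sd):
    # Reparameterization: eps ~ N(0, I) is shifted by the learned mean and
    # scaled by exp(log_sd), so z is effectively drawn from the learned
    # prior N(mean, exp(log_sd)^2), not from the standard normal itself.
    return mean + np.exp(log_sd) * eps

rng = np.random.default_rng(0)
temperature = 0.7

# eps is drawn from the standard normal and scaled by the temperature;
# this is the only place the standard normal appears.
eps = rng.standard_normal(10000) * temperature

# Hypothetical prior outputs (in the real model these come from a conv
# applied to the data / earlier activations).
mean, log_sd = 1.5, np.log(2.0)

z = gaussian_sample(eps, mean, log_sd)
print(z.mean(), z.std())  # roughly mean=1.5, std=2.0*0.7=1.4
```

With 10,000 samples the empirical mean and standard deviation land close to 1.5 and 1.4, confirming that z follows the learned Gaussian (with the temperature shrinking its spread), even though the raw noise came from N(0, 1).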

@icbcbicc (Author)

Thanks for your reply!
