
Get logp from SGMCMC #123

Closed
rzu512 opened this issue Jul 6, 2019 · 2 comments
rzu512 commented Jul 6, 2019

The HMC class allows accessing the log-joint probability before and after the update step through the log_prob and orig_log_prob attributes of the HMCInfo class. However, the stochastic gradient MCMC classes in zhusuan don't expose these two log-joint probabilities.

Is it possible to get a TensorFlow operation for the log-joint probability at the end of the update step in SGMCMC? Alternatively, is it possible to get a TensorFlow operation for the values of the parameters before the update step?

@csy530216
Collaborator

Hello, I think it is not difficult to get the values of the parameters: just run something like sess.run(latent). As for the log-joint probability, SGMCMC algorithms do not need to compute it (unlike HMC, which needs it for the Metropolis-Hastings step), so log_prob is not returned by default, to keep the computation graph small.

If you want to get log_prob given the values of the parameters stored in a dictionary latent, you could add some code to your script as follows:

# merge_dicts combines the latent and observed {name: value} dicts
bn = model.observe(**merge_dicts(latent, observed))
log_prob = bn.log_joint()
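To make the idea concrete without a ZhuSuan/TensorFlow dependency, here is a minimal self-contained sketch: merge_dicts is written out explicitly, and log_joint is a toy stand-in (a standard-normal prior on w plus a Gaussian likelihood for the observed x) rather than the model from the thread:

```python
import math

def merge_dicts(*dicts):
    """Combine several {name: value} dicts into one (later keys win)."""
    merged = {}
    for d in dicts:
        merged.update(d)
    return merged

def log_joint(values):
    """Toy log-joint: standard-normal prior on 'w' plus a
    unit-variance Gaussian likelihood for 'x' centered at w."""
    w, x = values["w"], values["x"]
    log_prior = -0.5 * (w ** 2 + math.log(2 * math.pi))
    log_lik = -0.5 * ((x - w) ** 2 + math.log(2 * math.pi))
    return log_prior + log_lik

# e.g. latent values obtained via sess.run(latent) after an SGMCMC step
latent = {"w": 0.3}
observed = {"x": 1.0}
lp = log_joint(merge_dicts(latent, observed))
```

In ZhuSuan the same pattern applies: feed the sampled latent values together with the observations into model.observe, then evaluate bn.log_joint() on the resulting BayesianNet.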


rzu512 commented Jul 15, 2019

In the end, I modified my joint_prob function and the SGHMC class to store the logp and the parameters at the half step. In my case, I only need one additional sum to get the logp after calculating the gradient, so it makes sense to compute the gradient and the logp at the same time.

I use SGHMC to fit 1000 chains using the approximate logp and gradient. Every n steps, I calculate the accurate logp for the approximately best chain.
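The multi-chain bookkeeping described above might be sketched as follows (plain NumPy; approx_logp, exact_logp, and the toy update rule are stand-ins for the minibatch log joint, the full-data log joint, and SGHMC, none of which appear in the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

def approx_logp(theta):
    """Stand-in for a cheap, noisy minibatch estimate of the log joint."""
    return -0.5 * theta ** 2 + 0.01 * rng.standard_normal(theta.shape)

def exact_logp(theta):
    """Stand-in for the exact full-data log joint."""
    return -0.5 * theta ** 2

n_chains, n_steps, check_every = 1000, 50, 10
theta = rng.standard_normal(n_chains)   # one scalar parameter per chain

for step in range(1, n_steps + 1):
    # Toy SGHMC-like update: noisy gradient step toward higher logp.
    grad = -theta + 0.1 * rng.standard_normal(n_chains)
    theta = theta + 0.01 * grad
    if step % check_every == 0:
        # Pick the approximately best chain, then score it exactly.
        best = int(np.argmax(approx_logp(theta)))
        exact = float(exact_logp(theta[best]))
```

The point is the two-tier evaluation: the cheap estimate drives every chain at every step, while the expensive exact logp is only evaluated every check_every steps, and only for the single chain that currently looks best.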

rzu512 closed this as completed Jul 15, 2019