recon_loss #15
It's the cross entropy (i.e., the negative log-likelihood of a multinomial whose categories are words). What puzzles me a bit is that at the end of Algorithm 1 the variational parameters and the model parameters are updated separately, but in the implementation they are updated jointly via regular backprop.
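The cross-entropy view can be checked numerically: up to the multinomial coefficient (which is constant in the parameters), the multinomial negative log-likelihood of a bag-of-words document equals `-(log_preds * bows).sum(1)`. A minimal sketch, assuming `bows` holds word counts and `log_preds` holds per-word log-probabilities (the names and shapes here are hypothetical, not taken from etm.py):

```python
import torch

torch.manual_seed(0)
V = 5                                            # hypothetical vocabulary size
bows = torch.tensor([[2., 0., 1., 3., 0.]])      # word counts for one document
logits = torch.randn(1, V)
log_preds = torch.log_softmax(logits, dim=-1)    # per-word log-probabilities

# Multinomial NLL, dropping the count-dependent normalizing constant:
recon_loss = -(log_preds * bows).sum(1)

# Equivalent view: sum of per-token negative log-likelihoods,
# with the counts expanded into individual word occurrences.
tokens = torch.tensor([0, 0, 2, 3, 3, 3])        # same counts, token by token
nll_tokens = -log_preds[0, tokens].sum()

assert torch.allclose(recon_loss, nll_tokens.unsqueeze(0))
```

So the weighted sum over the vocabulary and the per-token cross entropy are the same quantity; the bag-of-words form just aggregates repeated words via their counts.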
Yeah, thanks for your help! I didn't notice it before.
recon_loss is the expectation of P(d|\parameters), the first term in eq. 7. But I'm curious about how the second term is computed. Why does the KL equal `-0.5 * torch.sum(1 + logsigma_theta - mu_theta.pow(2) - logsigma_theta.exp(), dim=-1).mean()`?
I've run into the same problem. Have you solved it yet?
Hi, I cannot understand the expression `recon_loss = -(preds * bows).sum(1)` in the `forward()` function of etm.py. Could you help explain it? The loss function seems to differ from the equation defined in the paper. Thanks!