
recon_loss #15

Open

yilunzhao opened this issue Apr 21, 2020 · 4 comments

@yilunzhao commented Apr 21, 2020

Hi, I cannot understand the expression `recon_loss = -(preds * bows).sum(1)` in the `forward()` function of etm.py. Could you help me understand it? The loss function seems to be different from the equation defined in the paper. Thanks!

@gokceneraslan commented

It's cross entropy (i.e., the negative log-likelihood of a multinomial whose categories are the words), where `preds` are already log-transformed in the `decode` function. It's given implicitly in the first part of Equation 7, though not explicitly. It's also hidden in the "Estimate the ELBO and its gradient (backprop.)" step of Algorithm 1.

What puzzles me a bit is that at the end of Algorithm 1 the variational parameters and the model parameters are updated separately, but in the implementation they are updated jointly via regular backprop.
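
To make this concrete, here is a minimal runnable sketch of that reconstruction term. The shapes, the softmax parameterizations, and the `1e-6` stabilizer are illustrative assumptions, not code copied from etm.py; only the final `recon_loss` line comes from this thread:

```python
import torch

batch_size, num_topics, vocab_size = 32, 50, 5000

# theta: per-document topic proportions; beta: per-topic word distributions
theta = torch.softmax(torch.randn(batch_size, num_topics), dim=-1)
beta = torch.softmax(torch.randn(num_topics, vocab_size), dim=-1)
bows = torch.randint(0, 5, (batch_size, vocab_size)).float()  # word counts

# "decode" step: mix topics into per-word probabilities, then log-transform,
# so preds holds log p(w | theta, beta)
preds = torch.log(torch.mm(theta, beta) + 1e-6)

# Multinomial negative log-likelihood (cross entropy against the counts):
# for each document, -(sum over words of count(w) * log p(w))
recon_loss = -(preds * bows).sum(1)  # shape: (batch_size,)
```

Since `bows` holds raw counts, this is the multinomial negative log-likelihood up to the multinomial coefficient, which is constant with respect to the model parameters and can be dropped from the loss.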

@yilunzhao (Author) commented

Yeah, thanks for your help! I hadn't noticed that before.
What's the difference between updating them jointly and updating them separately via regular backprop?

@NonBee98 commented

> Hi, I cannot understand the expression `recon_loss = -(preds * bows).sum(1)` in the `forward()` function of etm.py. Could you help me understand it? The loss function seems to be different from the equation defined in the paper. Thanks!

`recon_loss` corresponds to the first term in Eq. 7, the expected log-likelihood $\mathbb{E}_q[\log p(d \mid \theta, \beta)]$ (negated, since it's a loss). But I'm curious about how the second term is computed. Why does the KL equal `-0.5 * torch.sum(1 + logsigma_theta - mu_theta.pow(2) - logsigma_theta.exp(), dim=-1).mean()`?

@tsWen0309 commented

> `recon_loss` corresponds to the first term in Eq. 7, the expected log-likelihood (negated, since it's a loss). But I'm curious about how the second term is computed. Why does the KL equal `-0.5 * torch.sum(1 + logsigma_theta - mu_theta.pow(2) - logsigma_theta.exp(), dim=-1).mean()`?

I've met the same problem. Have you solved it yet?
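
For reference, the quoted line is the standard closed-form KL divergence between a diagonal Gaussian variational posterior and a standard normal prior, the same identity used in VAEs (Kingma & Welling, "Auto-Encoding Variational Bayes", Appendix B). A short sketch, assuming `logsigma_theta` stores $\log\sigma^2$ (which the `.exp()` call suggests):

```latex
% KL between q = N(\mu, \mathrm{diag}(\sigma^2)) and the prior p = N(0, I),
% summed over the K latent dimensions (topics):
\mathrm{KL}(q \,\|\, p)
  = \frac{1}{2} \sum_{k=1}^{K} \left( \mu_k^2 + \sigma_k^2 - \log\sigma_k^2 - 1 \right)
  = -\frac{1}{2} \sum_{k=1}^{K} \left( 1 + \log\sigma_k^2 - \mu_k^2 - \sigma_k^2 \right)
```

Reading `mu_theta` as $\mu$ and `logsigma_theta.exp()` as $\sigma^2$, the second form matches the PyTorch expression term by term; the trailing `.mean()` just averages the per-document KL over the batch.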
