
The kld loss in UPDATE GENERATOR process #6

Open
MapleSpirit opened this issue Dec 11, 2018 · 2 comments
MapleSpirit commented Dec 11, 2018

I noticed that you use kld = torch.mean(-z_log_stddev + 0.5 * (torch.exp(2 * z_log_stddev) + torch.pow(z_mean, 2) - 1)) in UPDATE GENERATOR, but I don't understand why you chose this as part of your loss function, and it doesn't seem to be mentioned in the original paper. Could you please explain its purpose here?
Apart from that, z_log_stddev and z_mean are simply produced by two separate Linear+LeakyReLU layers. Why did you use Linear+LeakyReLU layers rather than computing the mean and standard deviation directly?

Thanks for your help~
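
For reference, the quoted expression is the standard closed-form KL divergence between the learned Gaussian N(z_mean, exp(z_log_stddev)²) and the standard normal N(0, 1):

$$\mathrm{KL}\big(\mathcal{N}(\mu, \sigma^2)\,\|\,\mathcal{N}(0, 1)\big) = -\log\sigma + \tfrac{1}{2}\big(\sigma^2 + \mu^2 - 1\big)$$

With $\mu$ = z_mean and $\log\sigma$ = z_log_stddev (so $\sigma^2 = e^{2 \cdot \texttt{z\_log\_stddev}}$), this matches the code term by term, averaged over the batch by torch.mean.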

woozzu (Owner) commented Dec 11, 2018

@MapleSpirit It is actually mentioned in the original paper. The authors use the term 'text embedding augmentation', which is the same as the conditioning augmentation in StackGAN. Also, I personally received part of the original implementation from the authors, so I have no opinion about this specific implementation choice.
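
For readers finding this later, here is a minimal sketch of what conditioning augmentation looks like in PyTorch. The class name CondAugment and the parameters emb_dim and cond_dim are illustrative assumptions, not names from this repository:

```python
import torch
import torch.nn as nn

class CondAugment(nn.Module):
    """Conditioning augmentation as in StackGAN: map a text embedding to the
    mean and log-stddev of a Gaussian and sample the condition vector from it.
    Minimal sketch; names and layer sizes are illustrative, not from this repo."""

    def __init__(self, emb_dim: int, cond_dim: int):
        super().__init__()
        # Two separate Linear+LeakyReLU heads, as in the question above:
        # one predicts the mean, the other the log of the standard deviation.
        self.mean = nn.Sequential(nn.Linear(emb_dim, cond_dim), nn.LeakyReLU(0.2))
        self.log_stddev = nn.Sequential(nn.Linear(emb_dim, cond_dim), nn.LeakyReLU(0.2))

    def forward(self, txt_emb: torch.Tensor):
        z_mean = self.mean(txt_emb)
        z_log_stddev = self.log_stddev(txt_emb)
        # Reparameterization trick: sample eps ~ N(0, I), then shift and scale,
        # so gradients flow back through z_mean and z_log_stddev.
        eps = torch.randn_like(z_mean)
        cond = z_mean + torch.exp(z_log_stddev) * eps
        # KL(N(z_mean, stddev^2) || N(0, I)), the exact term quoted above.
        kld = torch.mean(-z_log_stddev
                         + 0.5 * (torch.exp(2 * z_log_stddev)
                                  + torch.pow(z_mean, 2) - 1))
        return cond, kld
```

The returned kld would typically be added to the generator loss with a small weight, pushing the learned condition distribution toward N(0, I) and smoothing the text-embedding manifold.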

MapleSpirit (Author)

OK~ I'll go take a look at StackGAN. Thanks again~
