
loss_D about wgan #17

Closed
jeejeelee opened this issue Jun 13, 2018 · 3 comments

Comments

@jeejeelee

Hi,
Thanks for sharing your impressive repo. I found that loss_D in wgan.py is:
loss_D = -torch.mean(discriminator(real_imgs)) + torch.mean(discriminator(fake_imgs))
but the WGAN paper describes loss_D as follows:
[image: critic objective from the WGAN paper]

So is it a bug, or is there another reason?

@jeejeelee
Author

I have understood loss_D, but in my opinion it should be written as follows:
loss_D = -torch.mean(discriminator(real_imgs)) + torch.mean(discriminator(fake_imgs)) - 1
Is that right?
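
[Editor's note] A quick sanity check, as a minimal sketch: subtracting a constant from a loss does not change its gradients, so a trailing `- 1` would have no effect on the critic's update.

```python
import torch

# Two identical leaf tensors; one loss carries a constant offset.
x = torch.tensor(2.0, requires_grad=True)
y = torch.tensor(2.0, requires_grad=True)

(x * 3).backward()       # d(3x)/dx     = 3
(y * 3 - 1).backward()   # d(3y - 1)/dy = 3

# The constant offset leaves the gradient unchanged.
assert x.grad == y.grad
```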

@eriklindernoren
Owner

eriklindernoren commented Jun 14, 2018

I believe that since they add the gradient to the weights during the update, they perform gradient ascent rather than descent. To do gradient descent instead, you have to multiply the objective by minus one, so the loss becomes:

-1 * (torch.mean(discriminator(real_imgs)) - torch.mean(discriminator(fake_imgs))) = 
-torch.mean(discriminator(real_imgs)) + torch.mean(discriminator(fake_imgs))
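
[Editor's note] Put together, the critic update can be sketched as follows. This is a minimal stand-in (a linear critic on random tensors, not the repo's actual models), showing the negated objective being minimized plus the weight clipping the WGAN paper uses to enforce the Lipschitz constraint:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

discriminator = nn.Linear(4, 1)  # stand-in critic f_w
opt_D = torch.optim.RMSprop(discriminator.parameters(), lr=5e-5)

real_imgs = torch.randn(8, 4)    # stand-in batches
fake_imgs = torch.randn(8, 4)

# Paper (ascent):  maximize  E[f(real)] - E[f(fake)]
# Descent form:    minimize -E[f(real)] + E[f(fake)]
loss_D = -torch.mean(discriminator(real_imgs)) + torch.mean(discriminator(fake_imgs))

opt_D.zero_grad()
loss_D.backward()
opt_D.step()

# Clip critic weights so f_w stays (approximately) 1-Lipschitz.
for p in discriminator.parameters():
    p.data.clamp_(-0.01, 0.01)
```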

@jeejeelee
Author

@eriklindernoren thank you
