Hi,

Thanks for sharing your impressive repo. I found that the loss_D in wan.py is:

loss_D = -torch.mean(discriminator(real_imgs)) + torch.mean(discriminator(fake_imgs))

but the WGAN paper (Algorithm 1) trains the critic by gradient ascent on E[f_w(x)] - E[f_w(g_θ(z))], updating the weights as w ← w + α · RMSProp(w, g_w).

So is it a bug, or is there another reason?
I have understood the loss_D, but in my personal opinion the loss_D should be written as follows:

loss_D = -torch.mean(discriminator(real_imgs)) + torch.mean(discriminator(fake_imgs)) - 1

Is that right?
I believe that, since the paper adds the gradient to the weights during the update, it performs gradient ascent rather than descent. To do gradient descent instead, you multiply the objective by minus one, so the loss becomes:

loss_D = -torch.mean(discriminator(real_imgs)) + torch.mean(discriminator(fake_imgs))
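A minimal sketch of this sign flip (not the repo's actual code; `critic_loss` is a hypothetical helper name): the paper's ascent objective E[D(real)] - E[D(fake)] is negated so that a standard PyTorch optimizer, which descends the loss, maximizes it.

```python
import torch

def critic_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Descent-form WGAN critic loss: minimizing this is equivalent to
    maximizing the paper's ascent objective E[D(real)] - E[D(fake)]."""
    return -torch.mean(d_real) + torch.mean(d_fake)

# Sanity check: the descent loss is exactly the negated ascent objective.
d_real = torch.tensor([1.0, 2.0, 3.0])  # critic scores on real samples
d_fake = torch.tensor([0.5, 0.5])       # critic scores on generated samples
ascent_objective = torch.mean(d_real) - torch.mean(d_fake)
assert torch.isclose(critic_loss(d_real, d_fake), -ascent_objective)
```

Note that adding a constant such as -1 would not change the gradients, since the derivative of a constant is zero; the optimizer behaves identically either way.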