Why d_loss = 0.5 * np.add(d_loss_real, d_loss_fake) ? #250
The losses from the real and fake images are averaged. CMIIW, I guess this is how the loss is calculated in the paper, but if we just summed the losses we might get the same result.
The discriminator uses BCE loss:
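For reference, a minimal NumPy sketch of that averaged BCE loss (my own illustration of the formula, not code from this repo):

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-7):
    # Binary cross-entropy, averaged over the batch.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Hypothetical discriminator outputs on a batch of real and a batch of fake images.
d_out_real = np.array([0.9, 0.8, 0.7])  # targets are 1
d_out_fake = np.array([0.2, 0.1, 0.3])  # targets are 0

d_loss_real = bce(np.ones_like(d_out_real), d_out_real)
d_loss_fake = bce(np.zeros_like(d_out_fake), d_out_fake)

# The 0.5 turns the sum of two per-batch means into the mean over the
# combined real + fake batch (both batches have the same size N).
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
```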
Can you point out where it's specified in the paper?
I found this comment in the pix2pix paper. Actually, I have tested it without the 0.5 on a simple dataset, like the parabola from here, and it still works.
Actually I was not able to break it even with
Multiplying the loss by a constant has the same effect as scaling the learning rate. While training a GAN, we try to make sure that the generator and the discriminator learn at the same pace.
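For plain SGD this is easy to verify: scaling the loss by a constant c scales the gradient by c, which gives exactly the same update as scaling the learning rate by c. A toy check (a single weight and one squared-error data point, my own example):

```python
import numpy as np

w = 2.0          # a single weight
x, y = 3.0, 1.0  # one data point, loss = (w*x - y)**2
lr, c = 0.1, 0.5

grad = 2 * (w * x - y) * x           # d(loss)/dw
step_scaled_loss = lr * (c * grad)   # gradient of c * loss
step_scaled_lr = (lr * c) * grad     # original loss, learning rate c * lr
assert np.isclose(step_scaled_loss, step_scaled_lr)
```

Note that for adaptive optimizers like Adam the equivalence is only approximate, since the gradient magnitude is largely normalized away.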
I mean, as I understand it, in Keras if you want to apply a weight to the loss you should use
but if you already got the metrics back, you just multiply them by a constant, as here: https://github.com/eriklindernoren/Keras-GAN/blob/master/gan/gan.py#L123, and it does not affect the training process. I mean, this multiplication is not 'part of the graph' like it would be in, for example, TensorFlow.
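To make that distinction concrete, here is a minimal sketch (the toy model and data are my own assumptions, not the repo's code): weighting the loss at compile time scales the gradients, whereas multiplying the value returned by train_on_batch, as gan.py does, only changes the logged number.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy discriminator (a stand-in, not the repo's model).
disc = Sequential([Dense(1, activation='sigmoid', input_dim=4)])
disc.compile(optimizer='adam', loss='binary_crossentropy')
# Graph-level alternative: compiling with loss_weights=[0.5] would scale
# the loss inside the graph, and therefore the gradients as well.

real, fake = np.random.rand(8, 4), np.random.rand(8, 4)
d_loss_real = disc.train_on_batch(real, np.ones((8, 1)))
d_loss_fake = disc.train_on_batch(fake, np.zeros((8, 1)))

# Post-hoc weighting, as in gan.py#L123: by the time train_on_batch
# returns, the weight update has already happened, so this 0.5 only
# rescales the number that gets logged, not the training signal.
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
print(d_loss)
```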
I came here with the same question.
I wonder why it's
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
and not
d_loss = np.add(d_loss_real, d_loss_fake)
? https://github.com/eriklindernoren/Keras-GAN/blob/master/gan/gan.py#L123