shouldn't it be D_real.backward(one)? #9
Comments
No, D_real is what we want to maximize, so we minimize the loss (-D_real).
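To make the sign concrete, here is a minimal sketch (the `netD`, `real`, `one`, and `mone` names are illustrative stand-ins, not the repo's actual code): passing `mone` to `backward()` accumulates the gradient of `-D_real`, so a plain descent optimizer ends up ascending on `D_real`.

```python
import torch
import torch.nn as nn

netD = nn.Linear(2, 1)          # stand-in critic (hypothetical)
real = torch.randn(8, 2)

one = torch.tensor(1.0)
mone = -one                     # the -1 gradient seed

D_real = netD(real).mean()      # scalar critic score on real data
D_real.backward(mone)           # accumulates d(-D_real)/dtheta into .grad

# A subsequent optimizer.step() (gradient descent) then moves the
# parameters in the direction that increases D_real, i.e. it maximizes
# the critic's score on real data.
```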
Thanks for your reply. That makes sense, but why does the author of WGAN do the opposite in the official implementation?
Maybe the output of net_d is the loss or error in that implementation of WGAN; it depends on how net_d is defined.
Hello, thanks for the explanation. I have a question: since you backpropagate through the network twice, why is retain_variable=True not used in the code? And why not directly use …
I use …
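On the retain_variable question, one plausible reason (sketched below under the assumption that the critic runs forward once on real data and once on fake data, as is common in WGAN training loops) is that the two backward calls traverse two separate autograd graphs, so neither needs the other's buffers:

```python
import torch
import torch.nn as nn

netD = nn.Linear(2, 1)                    # stand-in critic (hypothetical)
real, fake = torch.randn(8, 2), torch.randn(8, 2)

D_real = netD(real).mean()                # builds graph #1
D_fake = netD(fake).mean()                # builds graph #2, independent of #1

D_real.backward(torch.tensor(-1.0))       # frees graph #1 only
D_fake.backward(torch.tensor(1.0))        # graph #2 is still intact here

# By contrast, calling D_real.backward(...) a second time would raise a
# RuntimeError unless the first call passed retain_graph=True
# (retain_variables=True in old PyTorch releases).
```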
@ypxie @caogang See the WGAN author's comment in this issue: martinarjovsky/WassersteinGAN#9. The two approaches are equivalent as long as you are consistent.
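A quick way to check the equivalence (toy code, not from either repository): the two per-term `backward(one/mone)` calls produce exactly the same critic gradients as a single combined scalar loss, which is why either sign convention works as long as the generator update uses the matching sign.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
netD = nn.Linear(2, 1)                          # stand-in critic
real, fake = torch.randn(8, 2), torch.randn(8, 2)

# Convention A: two backward calls with explicit gradient signs.
netD.zero_grad()
netD(real).mean().backward(torch.tensor(-1.0))  # -dD_real/dtheta
netD(fake).mean().backward(torch.tensor(1.0))   # +dD_fake/dtheta
grads_a = [p.grad.clone() for p in netD.parameters()]

# Convention B: one combined loss, plain backward().
netD.zero_grad()
loss = netD(fake).mean() - netD(real).mean()    # = -(D_real - D_fake)
loss.backward()
grads_b = [p.grad.clone() for p in netD.parameters()]

print(all(torch.allclose(a, b) for a, b in zip(grads_a, grads_b)))  # True
```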