
Why is the loss in wgan.py different from the original paper? #10

Open
guojting opened this issue Oct 9, 2018 · 2 comments

Comments


guojting commented Oct 9, 2018

I'm confused about which one is correct.
As implemented in wgan.py, we have

```python
self.g_loss = tf.reduce_mean(self.d_)
self.d_loss = tf.reduce_mean(self.d) - tf.reduce_mean(self.d_)
```

However, according to the original WGAN paper, it seems we should minimize (-1)*self.g_loss rather than self.g_loss. Could you explain why the losses are implemented in the form above? Oddly, with either the implementation in wgan.py or the one in wgan_v2.py, I still get reasonable results, which confuses me even more.

How about the following losses instead?

```python
self.g_loss = tf.reduce_mean(tf.scalar_mul(-1, self.d_))
self.d_loss = tf.reduce_mean(self.d_) - tf.reduce_mean(self.d)
```
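For comparison, here is a minimal plain-Python sketch (with made-up critic scores, standing in for `self.d` and `self.d_`) showing that the two conventions differ only by an overall factor of -1 in each loss. One plausible reading is that, since the critic is an arbitrary (approximately) 1-Lipschitz function, f and -f are equally admissible, which may be why both sign conventions still train:

```python
# Compare the two WGAN loss sign conventions on hypothetical critic scores.
# "d" stands for critic outputs on real samples, "d_" on generated samples.

def mean(xs):
    return sum(xs) / len(xs)

d = [0.9, 1.1, 0.8]    # hypothetical critic scores for real data
d_ = [0.2, 0.1, 0.3]   # hypothetical critic scores for generated data

# Losses as implemented in wgan.py (both minimized):
g_loss_repo = mean(d_)
d_loss_repo = mean(d) - mean(d_)

# Losses with the paper's signs (both minimized):
g_loss_paper = -mean(d_)
d_loss_paper = mean(d_) - mean(d)

# Each pair differs only by an overall sign, i.e. the critic effectively
# learns -f instead of f:
assert g_loss_repo == -g_loss_paper
assert d_loss_repo == -d_loss_paper
```

Under this reading the repo's version corresponds to the critic converging to the negated optimal critic, so gradients for the generator point the same way in the end; this is only a sketch of the sign algebra, not a claim about what the repo author intended.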

Thank you!

@guangyuanyu

I have the same question.


Joey-Liu commented Nov 8, 2018

I also have this question. Is this a mistake?
