
The dimension of the output of the discriminator #2

Closed
ghost opened this issue Sep 15, 2017 · 1 comment

Comments

@ghost

ghost commented Sep 15, 2017

Hello!

I see that the output of the discriminator (h4) has shape (32 x 32 x 1), and then the code computes the loss:

a2b_dis = models.discriminator(a2b, 'b', reuse=True)
# losses
g_loss_a2b = tf.identity(ops.l2_loss(a2b_dis, tf.ones_like(a2b_dis)), name='g_loss_a2b')

I am confused, because I thought the output of the discriminator should be a single scalar.

Could you please give some hints?

THX

@LynnHo
Owner

LynnHo commented Sep 16, 2017

@Simon4john The author's PyTorch code uses the same strategy: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix. I don't know exactly why, but I think it makes training more stable compared to a one-dimensional output.
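The idea can be illustrated without TensorFlow. Below is a minimal NumPy sketch, assuming `ops.l2_loss` in this repo behaves like a mean squared error over all elements (an assumption; the real implementation may differ): each entry of the 32 x 32 output grid scores one patch of the input, and the generator loss pushes every patch score toward 1.

```python
import numpy as np

def l2_loss(predictions, targets):
    # Assumed stand-in for ops.l2_loss: mean squared error
    # averaged over every element of the score grid.
    return np.mean((predictions - targets) ** 2)

# A patch-based discriminator maps an image to a grid of per-patch
# scores rather than a single scalar: each of the 32 x 32 outputs
# judges one receptive-field patch of the input as real or fake.
batch = 4
patch_scores = np.full((batch, 32, 32, 1), 0.8)  # hypothetical D output

# Generator loss pushes every patch score toward 1 ("real"),
# mirroring l2_loss(a2b_dis, tf.ones_like(a2b_dis)) in the issue.
g_loss = l2_loss(patch_scores, np.ones_like(patch_scores))
print(g_loss)  # (0.8 - 1)^2 = 0.04 for every patch, so the mean is 0.04
```

Averaging over the grid means the loss is still a single number, so training is unchanged mechanically; the difference is that the gradient carries a separate real/fake judgement for each local patch instead of one judgement for the whole image.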

@ghost ghost closed this as completed Oct 9, 2017