
Discriminator accuracy: 0.5 on average, but collapsed into 0 for negative samples and 1 for positive samples #44

Closed
danielegrattarola opened this issue Jul 9, 2018 · 2 comments

Comments

@danielegrattarola

I am monitoring the discriminator accuracy on separate batches of positive and negative samples, as suggested in trick 4, but I often run into the following situation:

  • Average accuracy = 0.5 (average loss 0.69)
  • Acc on negative samples = 0.0 (loss 0.69)
  • Acc on positive samples = 1.0 (loss 0.7)
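
For context, this is roughly the kind of per-batch check I mean (a minimal sketch; `discriminator` is assumed to be a Keras model compiled with `binary_crossentropy` loss and an `accuracy` metric, and `real_batch` / `fake_batch` are the current positive and negative batches):

```python
import numpy as np

# Illustrative per-batch monitoring: evaluate the discriminator separately on
# real (label 1) and generated (label 0) samples to get loss and accuracy.
loss_real, acc_real = discriminator.evaluate(real_batch, np.ones(len(real_batch)), verbose=0)
loss_fake, acc_fake = discriminator.evaluate(fake_batch, np.zeros(len(fake_batch)), verbose=0)

print("positive: acc=%.2f loss=%.2f" % (acc_real, loss_real))
print("negative: acc=%.2f loss=%.2f" % (acc_fake, loss_fake))
```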

What could be the cause of this problem?

Cheers,
Daniele

@jkim-

jkim- commented Jul 9, 2018

It might mean that the discriminator is not powerful enough, since it always outputs 1 (i.e. it classifies every sample as real).

Maybe try increasing its capacity?
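
For example, something along these lines (just a sketch with tf.keras and placeholder input/layer sizes, not a recommendation for a specific architecture):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Purely illustrative: a wider and deeper MLP discriminator than a single
# hidden layer. Input size and layer widths are placeholders.
discriminator = keras.Sequential([
    layers.Dense(512, activation="relu", input_shape=(128,)),
    layers.Dense(256, activation="relu"),   # extra hidden layer = more capacity
    layers.Dense(1, activation="sigmoid"),  # probability that the sample is real
])
discriminator.compile(optimizer="adam",
                      loss="binary_crossentropy",
                      metrics=["accuracy"])
```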

@danielegrattarola
Author

Thanks for the reply, @jkim-.
Turns out the discriminator was fine: it was simply producing values slightly above 0.5 for positive samples and slightly below 0.5 for negative ones. Since Keras rounds predictions to 0 or 1 when computing binary accuracy, those near-0.5 outputs showed up as the extreme per-batch accuracies above, which explains the weird behavior.
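
In case anyone else runs into this, here is a minimal NumPy sketch of the effect (the 0.51 outputs are made up for illustration): after thresholding at 0.5, the per-batch accuracy snaps to 0 or 1, while the cross-entropy stays near log(2) ≈ 0.69.

```python
import numpy as np

def binary_accuracy(y_true, y_pred, threshold=0.5):
    # Keras' binary accuracy thresholds the prediction before comparing it to the label.
    return np.mean((y_pred > threshold).astype(float) == y_true)

def binary_crossentropy(y_true, y_pred):
    return np.mean(-(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)))

# Made-up discriminator outputs that barely lean towards "real" on every input.
y_pred = np.full(8, 0.51)

pos_labels = np.ones(8)   # batch of real samples
neg_labels = np.zeros(8)  # batch of generated samples

print(binary_accuracy(pos_labels, y_pred), binary_crossentropy(pos_labels, y_pred))  # 1.0, ~0.67
print(binary_accuracy(neg_labels, y_pred), binary_crossentropy(neg_labels, y_pred))  # 0.0, ~0.71
```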
