When training the generator, the standard way is to pass `[batch_generated_imgs, batch_real_imgs]` with labels `np.ones(shape=batch_size * 2)`, i.e. all images are labelled 1 (real) to trick the discriminator.

If I understand this trick correctly, it is saying to instead pass `[batch_generated_imgs, batch_real_imgs]` with labels `np.concatenate([np.ones(shape=batch_size), np.zeros(shape=batch_size)])`, where the labels are now flipped for fake and real?
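For concreteness, here is a minimal, self-contained sketch of the standard generator update described above. The thread does not name a framework; this sketch assumes Keras, and all model shapes and names (`generator`, `discriminator`, `combined`, `latent_dim`) are hypothetical placeholders, not from the original question:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim, batch_size = 32, 16

# Tiny placeholder models, just to make the update step runnable.
generator = keras.Sequential(
    [layers.Dense(28 * 28, activation="tanh", input_shape=(latent_dim,))]
)
discriminator = keras.Sequential(
    [layers.Dense(1, activation="sigmoid", input_shape=(28 * 28,))]
)
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Stack G and D; freeze D so this step updates only the generator.
discriminator.trainable = False
combined = keras.Sequential([generator, discriminator])
combined.compile(optimizer="adam", loss="binary_crossentropy")

noise = np.random.normal(size=(batch_size, latent_dim))

# The standard trick: generated samples are labelled 1 ("real"),
# so the generator is trained to push D(G(z)) toward 1.
misleading_labels = np.ones((batch_size, 1))
g_loss = combined.train_on_batch(noise, misleading_labels)
```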
Trick 2 states that when training the generator, instead of minimising log(1 - D(G(z))) you maximise log(D(G(z))), which gives you better gradients. This is because the discriminator usually performs better than the generator. In torch this is most easily done, for example with nn.CrossEntropyLoss, by assigning the synthetic samples the label 1 and training on that.
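A minimal PyTorch sketch of trick 2 as described above. The comment mentions nn.CrossEntropyLoss; this sketch uses nn.BCEWithLogitsLoss with a single-logit discriminator instead, and all model shapes are hypothetical placeholders:

```python
import torch
import torch.nn as nn

latent_dim, batch_size = 32, 16

# Tiny placeholder generator and discriminator.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 28 * 28))
D = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 1))

criterion = nn.BCEWithLogitsLoss()
g_optimizer = torch.optim.Adam(G.parameters())

z = torch.randn(batch_size, latent_dim)
fake = G(z)

# Trick 2: label the synthetic samples 1 ("real"), so the generator loss
# is -log(D(G(z))) rather than log(1 - D(G(z))) -- same fixed point, but
# stronger gradients when the discriminator is winning.
real_labels = torch.ones(batch_size, 1)
g_loss = criterion(D(fake), real_labels)

g_optimizer.zero_grad()
g_loss.backward()
g_optimizer.step()  # updates G only; D's weights are not in this optimizer
```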
Hi @mjdietzx @soumith, could you explain this in more detail? Should we always flip the label when training the Generator (if I understand correctly, the Discriminator is fixed at the same time)? Doesn't trick 2, i.e. passing `[batch_real_imgs, np.zeros(shape=batch_size)]`, actually destroy the Discriminator?
https://github.com/soumith/ganhacks#2-a-modified-loss-function