According to the following code (lines 248-252 in solver.py), the classification loss for fake images is not computed and hence not backpropagated. Is there a specific reason behind this, or is it a possible bug?
# Compute loss with fake images.
x_fake = self.G(x_real, c_trg)
out_src, out_cls = self.D(x_fake.detach())  # out_cls is returned but never used here
d_loss_fake = torch.mean(out_src)
Any help is highly appreciated.
To my understanding, it is computed at line 281 in solver.py. The classification loss for fake images is part of the generator objective, not of the discriminator's.
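For reference, a generator update in a StarGAN-style setup applies the classification loss to fake images roughly as follows. This is a minimal sketch, not the repository's exact code: the tensor shapes (batch of 4, 5 domains) and the use of `cross_entropy` are assumptions, and dummy random tensors stand in for the outputs of `G` and `D`.

```python
import torch
import torch.nn.functional as F

# Dummy stand-ins for D's outputs on a batch of FAKE images
# (hypothetical shapes: batch of 4, 5 target domains).
out_src = torch.randn(4, 1)            # real/fake score per image
out_cls = torch.randn(4, 5)            # domain classification logits
label_trg = torch.randint(0, 5, (4,))  # target labels used to generate x_fake

# Adversarial term of the generator loss (WGAN-style, matching the
# torch.mean(out_src) convention in the snippet above, with flipped sign).
g_loss_fake = -torch.mean(out_src)

# Classification loss on fake images: this is where it enters the
# objective, pushing G to produce images that D classifies as the
# target domain.
g_loss_cls = F.cross_entropy(out_cls, label_trg)

g_loss = g_loss_fake + g_loss_cls
```

Note that the actual repository uses a binary cross-entropy variant for multi-label datasets; `cross_entropy` here assumes single-label domains.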
@segalon
Thanks for replying, I think I understand this now. In fact, I even tried adding the classification loss for fake images to the discriminator, and found that the results degraded. I reckon this happens because if we apply the classification loss to fake images, the ground-truth label is often wrong, especially early in training when the fake images have very poor quality.
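The standard discriminator update, by contrast, takes its classification loss only from real images, whose labels are trustworthy. A minimal sketch under the same assumed shapes as above (dummy tensors in place of real model outputs):

```python
import torch
import torch.nn.functional as F

# Hypothetical D outputs on REAL images (batch of 4, 5 domains).
out_src_real = torch.randn(4, 1)
out_cls_real = torch.randn(4, 5)
label_org = torch.randint(0, 5, (4,))  # true labels of the real images

# Hypothetical D real/fake score on fake images (classification
# output deliberately ignored, as in the quoted snippet).
out_src_fake = torch.randn(4, 1)

d_loss_real = -torch.mean(out_src_real)
d_loss_fake = torch.mean(out_src_fake)
# Classification loss only on real images, where labels are reliable.
d_loss_cls = F.cross_entropy(out_cls_real, label_org)

d_loss = d_loss_real + d_loss_fake + d_loss_cls
```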