Nice work! I have several questions about your paper:
What are the detailed settings for GAN and cGAN in Table 3 and Figure 4? For cGAN, is the number of classes 1000? What is the backbone of these two methods? Do they use all the ImageNet images? Do the pretrained models you released under the names "baseline" and "cgan" correspond to "gan" and "cgan" in this table?
How many images did you use to calculate FID (5k or 50k)? Why are the cGAN results much worse than BigGAN's — FID 35.14 compared with the 7.4 reported in the BigGAN paper? The numbers are not even comparable, yet the visualizations in your paper look good. How do you explain this? Is it because your diversity is much worse than BigGAN's, or is there some other explanation?
How did you get the Logo-GAN results in Table 3? Did you re-implement it? I could not find these results in their paper. Why do you think your results are slightly worse than theirs?
What do you mean by "random labels" in Table 3?
Thank you so much! I really appreciate your work.
All GANs we train on ImageNet use all the images of all 1000 classes. Both methods use a ResNet backbone. And indeed, the released "baseline" model is the vanilla GAN and corresponds to "gan" in the table, while "cgan" corresponds to "cgan".
We use 50k images to calculate FID in all experiments. There could be several reasons our FID numbers are worse (higher): BigGAN uses a much larger architecture and different discriminator regularization. This might result in lower sample diversity on our side, which, as you suggest, could explain the FID gap.
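For reference, FID is the Fréchet distance between Gaussians fitted to Inception features of real and generated images. A minimal sketch of the final distance computation (feature extraction omitted; `mu`/`cov` are the per-set feature mean and covariance):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, cov1, mu2, cov2):
    """Frechet distance between two Gaussians:
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 * (cov1 @ cov2)^(1/2))."""
    covmean = sqrtm(cov1 @ cov2)
    # sqrtm can return tiny imaginary components due to numerical noise
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(cov1 + cov2 - 2.0 * covmean))
```

With only 5k samples the covariance estimate is noisier and FID is biased upward, which is why 50k is the common choice.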
We re-implemented Logo-GAN; the details can be found in our appendix. Logo-GAN-RC has indirect access to real labels, since it clusters the features of a pretrained classifier, which could explain its better performance. Logo-GAN-AE, on the other hand, which we outperform, is fully unsupervised.
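The clustering step behind Logo-GAN-RC-style conditioning can be sketched as follows; this is an illustrative reconstruction with k-means, not the authors' code, and `feats` is assumed to be an `(N, D)` array of pretrained-classifier features:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_pseudo_labels(feats, num_clusters, seed=0):
    """Derive pseudo-labels by k-means clustering of classifier features.
    The GAN is then conditioned on these cluster ids instead of real labels."""
    km = KMeans(n_clusters=num_clusters, random_state=seed, n_init=10)
    return km.fit_predict(feats)
```

Because the classifier was trained with real labels, the clusters correlate with the true classes, which is the "indirect access" mentioned above.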
"Random labels" means we fix a random labelling of the images once, then train a conditional GAN on that fixed labelling.
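Concretely, fixing a random labelling could look like the sketch below (the function name is illustrative, not from the repo); the key point is that the labels are sampled once with a fixed seed and then reused for the whole training run:

```python
import numpy as np

def fix_random_labels(num_images, num_classes, seed=0):
    """Assign each image index a random class label once; the conditional
    GAN then trains on this fixed (image, label) assignment."""
    rng = np.random.RandomState(seed)
    return rng.randint(0, num_classes, size=num_images)

# The same seed reproduces the same labelling across the run:
labels = fix_random_labels(num_images=1000, num_classes=1000, seed=42)
```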