Conditioning Generator with label information #55
Comments
The conditional batch norm does the trick :) I never knew that a small set of shift-and-scale parameters could encode class information. Kudos!
So how do you force the generator to produce an image of a certain class at inference time?
The batch-norm layers in the Generator are conditioned on the label information. Based on the label input at inference time, each layer selects the corresponding parameters, producing a class-specific image from the given latent vector.
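The mechanism can be sketched in a few lines of PyTorch. This is a hypothetical illustration, not the repository's actual code: the class name and the use of `nn.Embedding` tables for the per-class scale (gamma) and shift (beta) are assumptions. Note that this version normalizes with the batch's own empirical statistics, matching the behavior discussed later in this thread.

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """Illustrative conditional batch norm: normalization is shared across
    classes, but each class has its own scale (gamma) and shift (beta)."""

    def __init__(self, num_features, num_classes):
        super().__init__()
        # One (gamma, beta) pair per class, stored as embedding tables.
        self.gamma = nn.Embedding(num_classes, num_features)
        self.beta = nn.Embedding(num_classes, num_features)
        nn.init.ones_(self.gamma.weight)
        nn.init.zeros_(self.beta.weight)

    def forward(self, x, y):
        # Normalize with the batch's own statistics (no running averages).
        mean = x.mean(dim=(0, 2, 3), keepdim=True)
        var = x.var(dim=(0, 2, 3), keepdim=True, unbiased=False)
        x_hat = (x - mean) / torch.sqrt(var + 1e-5)
        # Look up the class-specific affine parameters from the label y.
        g = self.gamma(y).view(-1, x.size(1), 1, 1)
        b = self.beta(y).view(-1, x.size(1), 1, 1)
        return g * x_hat + b
```

At inference, feeding a different label `y` with the same latent vector selects a different (gamma, beta) pair, which is what steers the output toward that class.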
I understand that it makes the implementation lighter, especially when you want to reuse the same code for the classic and conditional WGAN.
The current code conditions in a discrete way, selecting a single class only. That said, I came across some work on WGAN-GP at ICLR 2018 (the Projection Discriminator GAN and the Spectral Normalization GAN) where the conditional batch-norm parameters of two classes are interpolated to produce a morphed image. You could use this technique to enforce continuous conditioning at inference time. I also haven't come across a comparative study of one-hot-concat conditioning vs. conditional batch norm.
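The interpolation idea above can be sketched as a simple linear blend of two classes' batch-norm parameters. This is a minimal, hypothetical sketch: it assumes the per-class gamma/beta are stored as `(num_classes, num_features)` tables, and the function name is illustrative.

```python
import torch

def interpolate_affine(gamma_table, beta_table, y0, y1, alpha):
    """Blend the conditional-BN parameters of classes y0 and y1.

    gamma_table, beta_table: (num_classes, num_features) per-class parameters.
    alpha = 0.0 reproduces class y0; alpha = 1.0 reproduces class y1;
    intermediate values yield a morphed, in-between conditioning.
    """
    g = (1.0 - alpha) * gamma_table[y0] + alpha * gamma_table[y1]
    b = (1.0 - alpha) * beta_table[y0] + alpha * beta_table[y1]
    return g, b
```

The blended `(g, b)` would then be applied in place of a single class's parameters inside the generator's conditional batch-norm layers.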
I had not come across those yet! Thanks a lot for the pointers.
What looks weird is that this conditional batch norm doesn't track statistics such as the moving mean and moving variance during training. At inference, it normalizes a batch of test samples using only their empirical mean and variance. So why not build a conditional batch norm on top of a full batch norm?
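The suggestion above can be sketched by wrapping a standard `nn.BatchNorm2d` (which does track running statistics) and keeping only the affine parameters class-conditional. Again a hypothetical sketch, not the repository's code; the class name and the single `nn.Embedding` holding both gamma and beta are assumptions.

```python
import torch
import torch.nn as nn

class ConditionalBatchNormWithRunningStats(nn.Module):
    """Conditional BN built on top of a full batch norm: the shared
    nn.BatchNorm2d tracks running mean/variance during training, so in
    eval() mode samples are normalized with those running statistics
    rather than the test batch's empirical ones. Class conditioning
    enters only through the per-class gamma/beta."""

    def __init__(self, num_features, num_classes):
        super().__init__()
        self.num_features = num_features
        # affine=False: the shared BN only normalizes and tracks stats.
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        # First half of each embedding row is gamma, second half is beta.
        self.embed = nn.Embedding(num_classes, 2 * num_features)
        with torch.no_grad():
            self.embed.weight[:, :num_features].fill_(1.0)  # gamma -> 1
            self.embed.weight[:, num_features:].zero_()     # beta  -> 0

    def forward(self, x, y):
        out = self.bn(x)  # running stats are used automatically in eval()
        gamma, beta = self.embed(y).chunk(2, dim=1)
        gamma = gamma.view(-1, self.num_features, 1, 1)
        beta = beta.view(-1, self.num_features, 1, 1)
        return gamma * out + beta
```

One practical benefit: in eval() mode this variant can normalize a batch of size one, since it no longer depends on the test batch's own statistics.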
Thank you for sharing the code. Can you please provide some insight into the supervised WGAN with label input:
How is the generator conditioned with the label information? There is no one-hot label vector concatenated to the latent-variable input; the label information is only used in the conditional batch norm of the generator.
At inference time, how do you force the generator to produce an image of a certain class? Where is the class input used in the generator network?