Conditioning Generator with label information #55

Closed

ghost opened this issue Dec 1, 2017 · 7 comments

@ghost

ghost commented Dec 1, 2017

Thank you for sharing the code. Could you please provide some insight into the supervised WGAN with label input:

  1. How is the generator conditioned on the label information? There is no one-hot label vector concatenated to the latent-variable input; the label information is only used in the conditional batch norm of the generator.

  2. At inference time, how do you force the generator to produce an image of a particular class? Where is the class input used in the generator network?

@ghost
Author

ghost commented Dec 1, 2017

The conditional batch norm does the trick :)

I never knew that a small set of shift-and-scale parameters could encode class information. Kudos!

@ghost ghost closed this as completed Dec 2, 2017
@ysharma1126

So how do you force the generator to produce an image of a certain class at inference time?

@ghost
Author

ghost commented Mar 30, 2018

The batch-norm layers in the generator are conditioned on the label information: each label has its own scale and shift (gamma and beta) normalization parameters.

Based on the label supplied at inference time, the generator selects the corresponding parameters and produces a class-specific image from the given latent vector. See the sketch below.
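For concreteness, here is a minimal conditional batch-norm sketch in PyTorch. This is not the repo's actual code (the class name `ConditionalBatchNorm2d` and the embedding-table layout are illustrative assumptions); it just shows the idea of one gamma/beta pair per class, selected by the label.

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """Batch norm whose scale (gamma) and shift (beta) are selected per class label."""
    def __init__(self, num_features, num_classes):
        super().__init__()
        # Plain batch norm without its own affine parameters.
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        # One (gamma, beta) pair per class, stored in an embedding table.
        self.embed = nn.Embedding(num_classes, num_features * 2)
        self.embed.weight.data[:, :num_features].fill_(1.0)  # gammas start at 1
        self.embed.weight.data[:, num_features:].zero_()      # betas start at 0

    def forward(self, x, labels):
        out = self.bn(x)
        gamma, beta = self.embed(labels).chunk(2, dim=1)
        # Broadcast each sample's gamma/beta over the spatial dimensions.
        return gamma.view(-1, gamma.size(1), 1, 1) * out + beta.view(-1, beta.size(1), 1, 1)

# At inference, passing labels = torch.full((n,), class_id, dtype=torch.long)
# makes every sample in the batch use that class's scale/shift, so the
# generator emits images of the requested class.
```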

@pierremac

I understand that it makes the implementation lighter, especially when you want to reuse the same code for the classic and conditional WGAN.
But do you (or anyone) know how this compares to actually feeding the label as an input to the generator (e.g. concatenating the conditions to the noise), as in some other implementations?
Also, I guess this works for categorical conditions with one-hot encoding, but doesn't really apply to continuous conditions or non-exclusive categories, right?

@ghost
Author

ghost commented Apr 6, 2018

The current code conditions in a discrete way, selecting a single class only.

Having said that, I came across some ICLR 2018 works building on WGAN-GP (the Projection Discriminator GAN and Spectral Normalization GAN), where the conditional batch-norm parameters of two classes are interpolated to produce a morphed image.
Please see this demo: https://www.youtube.com/watch?v=hgJpyW4WIko

You could use this technique to get continuous conditioning at inference time; a sketch follows below.

I haven't come across a comparative study of one-hot concatenation conditioning vs. conditional batch norm either.
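A minimal sketch of that interpolation idea, building on the hypothetical `ConditionalBatchNorm2d` above (again an assumption, not the repo's code): linearly blend the gamma/beta rows of two classes with a mixing weight alpha.

```python
import torch

def interpolated_gamma_beta(cbn, class_a, class_b, alpha):
    """Linearly blend the per-class scale/shift of two classes (alpha in [0, 1])."""
    w = cbn.embed.weight            # shape: (num_classes, 2 * num_features)
    mixed = (1.0 - alpha) * w[class_a] + alpha * w[class_b]
    gamma, beta = mixed.chunk(2, dim=0)
    return gamma, beta

# Sweeping alpha from 0 to 1 while keeping the latent vector fixed morphs
# the generated image from class_a toward class_b.
```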

@pierremac

I had not come across those yet! Thanks a lot for the pointers.

@HarveyYan

What looks weird is that this conditional batch norm doesn't track statistics such as a moving mean and moving variance during training. At inference, it normalizes a batch of test samples using only the batch's empirical mean and variance. So why not build the conditional batch norm on top of a full batch norm?
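For what it's worth, in a PyTorch-style implementation like the sketch above (not necessarily how the repo behaves), wrapping a standard batch-norm layer gives you running statistics for free: `nn.BatchNorm2d(num_features, affine=False)` tracks running mean/variance by default, and switching to eval mode uses them at inference.

```python
# Hypothetical usage of the ConditionalBatchNorm2d sketch above.
cbn = ConditionalBatchNorm2d(num_features=128, num_classes=10)

cbn.train()  # training: normalize with batch statistics, update running stats
# ... training steps ...
cbn.eval()   # inference: normalize with the accumulated running statistics
```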

This issue was closed.