
labels as input to cgan combined model #105

Open
zhchyang2004 opened this issue Dec 11, 2018 · 6 comments


zhchyang2004 commented Dec 11, 2018

Regarding the parameter on Line 152 of Keras-GAN/cgan/cgan.py: would it make more sense to replace the input 'sampled_labels' with the 'labels' defined on Line 131?

Line 131:
imgs, labels = X_train[idx], y_train[idx]
Line 152:
g_loss = self.combined.train_on_batch([noise, sampled_labels], valid)
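For context, the two quoted lines can be paraphrased in a minimal NumPy sketch of the surrounding training loop. The shapes, the dummy dataset, and the stub `train_on_batch` are assumptions for illustration, not the repo's exact code:

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size, latent_dim, num_classes = 32, 100, 10

# Dummy dataset standing in for MNIST's X_train / y_train.
X_train = rng.normal(size=(256, 28, 28, 1))
y_train = rng.integers(0, num_classes, size=(256, 1))

# Line 131 equivalent: a random batch of real images and their TRUE labels.
idx = rng.integers(0, X_train.shape[0], size=batch_size)
imgs, labels = X_train[idx], y_train[idx]

noise = rng.normal(0, 1, size=(batch_size, latent_dim))

# Line 152 equivalent: labels sampled INDEPENDENTLY of the real batch.
sampled_labels = rng.integers(0, num_classes, size=(batch_size, 1))

def train_on_batch(inputs, targets):
    """Stub standing in for self.combined.train_on_batch."""
    return 0.0  # placeholder loss

valid = np.ones((batch_size, 1))
# The question above asks whether `sampled_labels` here could be `labels`.
g_loss = train_on_batch([noise, sampled_labels], valid)
```

The point of the sketch is that `labels` is tied to the sampled real batch, while `sampled_labels` is a fresh, independent draw over the classes.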

Your thoughts on this would be appreciated.


ghost commented Dec 12, 2018

The G model needs two inputs (noise and a condition), and the D model needs what G generates plus the label (i.e. which number the G model generates).
sampled_labels means the number that the G model should generate.
Sorry, English is not my native language.

zhchyang2004 (Author)

Thanks, CreeperGo. I understand your meaning, but I don't think the labels fed to the Discriminator should come from what the Generator is asked to produce; the labels fed to the Discriminator should be the same as those fed to the Generator.

This is stated in Section 3.2, 'Conditional Adversarial Nets', of the paper (https://arxiv.org/abs/1411.1784):
'Generative adversarial nets can be extended to a conditional model if both the generator and discriminator are conditioned on some extra information y. y could be any kind of auxiliary information, such as class labels or data from other modalities. We can perform the conditioning by feeding y into the both the discriminator and generator as additional input layer.'

So, y should be the same one feeding into both the discriminator and generator. Right?


ghost commented Dec 13, 2018

So, y should be the same one feeding into both the discriminator and generator. Right?

Nope.
Compared with a GAN, a CGAN gives extra conditions to both the G and D models. The G model learns from the gradients coming from the D model; the G model turns noise into generated samples.


ghost commented Dec 13, 2018

GAN: [noise] -> G; [generated samples, true samples] -> D
CGAN: [noise, conditions] -> G; [generated samples with conditions, true samples with conditions] -> D
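The two flows above can be sketched in plain NumPy. Concatenating a one-hot encoding of the class label onto the noise vector is one common way to implement the extra condition input; the shapes here are illustrative assumptions, not the repo's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size, latent_dim, num_classes = 4, 100, 10

# GAN: the generator sees only noise.
noise = rng.normal(size=(batch_size, latent_dim))

# CGAN: the generator sees noise PLUS a condition. One common choice is
# to one-hot encode the class label and concatenate it onto the noise.
labels = rng.integers(0, num_classes, size=batch_size)
one_hot = np.eye(num_classes)[labels]          # (batch_size, num_classes)
g_input = np.concatenate([noise, one_hot], axis=1)

# The discriminator likewise receives its sample (generated or true)
# together with the same kind of condition vector.
print(noise.shape, g_input.shape)  # (4, 100) (4, 110)
```

The same concatenation trick applies on the discriminator side: each image (real or generated) is paired with its condition vector before being scored.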

eriklindernoren (Owner)

Hi @CreeperGO. The discriminator evaluates whether the image samples are valid examples of the digit labels, which it also receives as input. The randomly sampled labels that the generator tries to generate are fed to the discriminator together with the generator's samples, and the generator's objective is to have those samples labeled as valid given the digit labels. Hope this clarifies.


daa233 commented Jan 3, 2020

Hi @eriklindernoren! Thanks for your interpretation.

However, I have the same question that @zhchyang2004 raised.

I have seen several CGAN implementations. There are two ways to use the condition labels:

  1. feed G with a random label, feed D with the real label
  2. feed both G and D with the same real label

Here are my questions:

  • Which way is used by the CGAN paper in 2014?
  • When training the CGAN, should we use the same condition label for both G and D?
