
L1 loss problem with GAN if some data has ground-truth and some not #39

HymEric opened this issue Dec 8, 2020 · 2 comments

HymEric commented Dec 8, 2020

This is great work.

I want to use an L1 loss between the image produced by the generator and the ground-truth image, but only some images have a ground truth. That is, within a batch some samples have ground truth and others don't, and I only want to apply the L1 loss to the samples that do.

Is there a way to address this?
Thank you!

@surya1701

Hi @HymEric,
Maybe you could try using an external function such that g(ground-truth) --> generated before applying the L1 loss, as mentioned in junyanz/pytorch-CycleGAN-and-pix2pix#293 (comment).

@aaronsarna
Contributor

Generally the way you would do something like this in TensorFlow is to use a dummy value for the ground truth when it's not available (an all-zeros tensor, for example) and keep a binary mask of shape [batch_size] that has value 1 if there is ground truth and 0 otherwise. You can then apply this mask as a weight on the per-sample loss. Your final loss would be something like:
loss = tf.reduce_mean(weight_mask * tf.losses.mean_absolute_error(ground_truth, generated))
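Note that tf.losses.mean_absolute_error with its default reduction already returns a scalar, so in practice the [batch_size] mask needs to multiply a per-sample loss. A minimal sketch of that masking, with assumed tensor names (generated, ground_truth, has_gt_mask) and assumed [batch, height, width, channels] shapes, could look like this:

```python
import tensorflow as tf

def masked_l1_loss(generated, ground_truth, has_gt_mask):
    """L1 loss applied only to samples that have ground truth.

    generated, ground_truth: [batch_size, height, width, channels];
    samples without ground truth carry an all-zeros dummy tensor.
    has_gt_mask: float32 [batch_size], 1.0 where ground truth exists, else 0.0.
    """
    # Per-sample L1 loss: average absolute error over the image dimensions.
    per_sample_l1 = tf.reduce_mean(tf.abs(generated - ground_truth), axis=[1, 2, 3])
    # Zero out samples without ground truth, then average over only the
    # samples that have it (guarding against a batch with no ground truth).
    num_with_gt = tf.maximum(tf.reduce_sum(has_gt_mask), 1.0)
    return tf.reduce_sum(has_gt_mask * per_sample_l1) / num_with_gt
```

Dividing by the number of samples that actually have ground truth, rather than by the full batch size, keeps the loss magnitude comparable across batches with different amounts of supervision.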
