I want to use an L1 loss between the image produced by the generator and the ground-truth image, but only some images have ground truth. That is, within a batch, some samples have ground truth and some do not, and I want the L1 loss applied only to the samples that do.
Is there a way to handle this?
Thank you!
Generally, the way you would handle this in TensorFlow is to use a dummy value for the ground truth when it isn't available (an all-zeros tensor, for example) and keep a binary mask of shape `[batch_size]` with value 1 where ground truth exists and 0 otherwise. You then apply this mask as a per-sample weight on the loss. Note that `tf.losses.mean_absolute_error` reduces to a scalar by default, so either pass the mask through its `weights` argument (reshaped so it broadcasts against the per-pixel error) or compute the per-sample error yourself and mask it. The final loss would be something like: `loss = tf.reduce_mean(weight_mask * per_sample_l1)`
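A minimal sketch of the masking arithmetic in plain NumPy (the TensorFlow version mirrors it, swapping `np.*` for `tf.*` ops; `masked_l1_loss` and the shapes are illustrative assumptions, not part of any library API):

```python
import numpy as np

def masked_l1_loss(generated, ground_truth, mask):
    """L1 loss averaged only over samples that have ground truth.

    generated, ground_truth: [batch, H, W, C] arrays; rows without
    ground truth can hold any dummy values (e.g. all zeros).
    mask: [batch] array, 1.0 where ground truth exists, 0.0 otherwise.
    """
    # Per-sample mean absolute error, shape [batch].
    per_sample = np.mean(np.abs(generated - ground_truth), axis=(1, 2, 3))
    # Zero out unlabeled samples, then average over the number of
    # *labeled* samples so dummy rows don't dilute the loss.
    denom = np.maximum(np.sum(mask), 1.0)
    return np.sum(mask * per_sample) / denom

# Toy batch of 4: samples 0 and 2 have ground truth.
gen = np.ones((4, 2, 2, 1), dtype=np.float32)
gt = np.zeros((4, 2, 2, 1), dtype=np.float32)
mask = np.array([1.0, 0.0, 1.0, 0.0], dtype=np.float32)
print(masked_l1_loss(gen, gt, mask))  # 1.0: only the two labeled samples count
```

Dividing by the mask sum rather than the batch size keeps the loss scale independent of how many unlabeled samples happen to be in the batch; dividing by `batch_size` instead would silently down-weight the L1 term whenever labels are sparse.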
It's great work.