Hello,
In equation (5) of your paper there is an element-wise square operator.
I believe this loss term is implemented by this piece of code:
l1_loss += tf.reduce_mean(loss_map * normalized_mask) * current_weight / \
    (int(img.get_shape()[1]) * int(img.get_shape()[2])) * regularizer_weight
I wonder whether it should be like this instead:
l1_loss += tf.reduce_mean((loss_map**2) * normalized_mask) * current_weight / \
    (int(img.get_shape()[1]) * int(img.get_shape()[2])) * regularizer_weight
Could you verify that the code is correct? If it is, I am sorry for my question; I was just confused when I saw the loss written that way. Thanks!
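For reference, here is a minimal, self-contained sketch of the two variants being compared. The names loss_map, normalized_mask, current_weight, and regularizer_weight follow the snippet above, but the shapes and values are placeholders chosen only so the code runs on its own:
import tensorflow as tf
# Dummy stand-ins for the real tensors; shapes and values are illustrative only.
img = tf.zeros([1, 4, 4, 1])                        # batch of 1, 4x4, 1 channel
loss_map = tf.abs(tf.random.normal([1, 4, 4, 1]))   # per-pixel absolute error
normalized_mask = tf.ones([1, 4, 4, 1])             # per-pixel weighting mask
current_weight = 1.0
regularizer_weight = 1.0
hw = int(img.get_shape()[1]) * int(img.get_shape()[2])
# Variant currently in the code: L1-style, since loss_map already holds absolute values.
term_l1 = tf.reduce_mean(loss_map * normalized_mask) * current_weight / hw * regularizer_weight
# Variant asked about in this issue: square each element before averaging.
term_sq = tf.reduce_mean((loss_map ** 2) * normalized_mask) * current_weight / hw * regularizer_weight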
The code uses the L1 norm (as the loss map takes absolute values), which performs similarly to the element-wise square operator in the paper. To use the one mentioned in the paper, you can change the code to:
tf.reduce_mean(loss_map * normalized_mask) ** 2
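For concreteness, a minimal sketch of how that change would read dropped into the loss term from the snippet above (variable names are assumed to match the repository's code; note that, as written, this squares the reduced mean, a scalar, rather than each element of loss_map):
import tensorflow as tf
# Dummy stand-ins; names follow the repository snippet, values are illustrative only.
img = tf.zeros([1, 4, 4, 1])
loss_map = tf.abs(tf.random.normal([1, 4, 4, 1]))
normalized_mask = tf.ones([1, 4, 4, 1])
current_weight = 1.0
regularizer_weight = 1.0
l1_loss = 0.0
# Suggested change: square the mask-weighted mean before applying the weights.
l1_loss += tf.reduce_mean(loss_map * normalized_mask) ** 2 * current_weight / \
    (int(img.get_shape()[1]) * int(img.get_shape()[2])) * regularizer_weight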