
Per-pixel confidence loss question #4

Closed
phongnhhn92 opened this issue Dec 13, 2018 · 1 comment

@phongnhhn92

Hello,
In equation (5) of your paper, there is an element-wise square operator:

[equation (5) image from the paper]

I believe this loss function is defined in this piece of code:

l1_loss += tf.reduce_mean(loss_map * normalized_mask) * current_weight / \
           (int(img.get_shape()[1]) * int(img.get_shape()[2])) * regularizer_weight

I wonder whether it should instead be:

l1_loss += tf.reduce_mean((loss_map**2) * normalized_mask) * current_weight / \
           (int(img.get_shape()[1]) * int(img.get_shape()[2])) * regularizer_weight

Can you verify which one is correct? If your code is correct, then I apologize for the question; I was just confused when I saw the loss written like that. Thanks!
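For context, here is a minimal sketch of how these tensors might be wired together (TF1-style; the function name, tensor names, and the confidence normalization are assumptions based on the snippet above, not necessarily how the repository builds them):

import tensorflow as tf

def confidence_weighted_l1(img, pred, confidence,
                           current_weight, regularizer_weight):
    # Per-pixel absolute reconstruction error ("loss map"), shape [B, H, W].
    loss_map = tf.reduce_mean(tf.abs(img - pred), axis=-1)
    # Normalize the predicted per-pixel confidence so its per-image mean is ~1.
    normalized_mask = confidence / (
        tf.reduce_mean(confidence, axis=[1, 2], keepdims=True) + 1e-8)
    h = int(img.get_shape()[1])
    w = int(img.get_shape()[2])
    # Confidence-weighted L1 term, matching the snippet in the question.
    return tf.reduce_mean(loss_map * normalized_mask) * current_weight / \
        (h * w) * regularizer_weight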

@shaohua0116
Owner

The code uses the L1 norm (the loss map takes absolute values), which performs similarly to the element-wise square operator in the paper. To use the formulation mentioned in the paper, you can change the code to tf.reduce_mean(loss_map * normalized_mask) ** 2.
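To make the difference discussed in this thread concrete, here is a small sketch (placeholder constant tensors, not the repository's actual code) contrasting the L1 weighting currently in the code with the element-wise square from equation (5):

import tensorflow as tf

# Placeholder tensors for illustration only; shapes and values are assumptions.
gt = tf.ones([1, 64, 64, 3]) * 0.8
pred = tf.ones([1, 64, 64, 3]) * 0.5
normalized_mask = tf.ones([1, 64, 64, 1])   # stand-in for the normalized confidence

loss_map = tf.abs(gt - pred)                 # per-pixel |error|, 0.3 everywhere here
# L1 variant, as in the repository's code: confidence-weighted absolute error.
l1_term = tf.reduce_mean(loss_map * normalized_mask)
# Element-wise square variant, as written in equation (5) of the paper.
sq_term = tf.reduce_mean(tf.square(loss_map) * normalized_mask)

With these constants, loss_map is 0.3 everywhere, so l1_term evaluates to 0.3 while sq_term evaluates to 0.09.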
