
Difference between EMD loss and smoothL1? #14

Closed
ming71 opened this issue Jun 6, 2020 · 5 comments

Comments

ming71 commented Jun 6, 2020

Your implementation of EMD loss here seems to be the same as a SmoothL1 loss between an anchor and its predicted boxes. What's the difference between them? Maybe I'm just not seeing it clearly.

xg-chu (Owner) commented Jun 6, 2020

The EMD loss computes the minimum loss between the two sets. The loss between each pair of elements from the two sets is measured by SmoothL1 plus softmax cross entropy.
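For concreteness, here is a minimal sketch (not the repository's actual code) of that per-pair loss: one prediction matched to one ground truth contributes a SmoothL1 term on the box regression outputs plus a softmax cross-entropy term on the class logits. The tensor names and shapes (`pred_delta`, `pred_score`, `gt_delta`, `gt_label`) are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def pair_loss(pred_delta, pred_score, gt_delta, gt_label):
    """Loss between one predicted box and one ground-truth box, per anchor.

    pred_delta: (N, 4) regression outputs
    pred_score: (N, C) classification logits
    gt_delta:   (N, 4) regression targets
    gt_label:   (N,)   ground-truth class indices
    """
    # SmoothL1 on the box deltas, summed over the 4 coordinates
    reg_loss = F.smooth_l1_loss(pred_delta, gt_delta, reduction='none').sum(dim=1)
    # Softmax cross entropy on the class logits
    cls_loss = F.cross_entropy(pred_score, gt_label, reduction='none')
    return reg_loss + cls_loss  # (N,) per-pair loss
```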

ming71 (Author) commented Jun 6, 2020

For each element in the anchor set, two boxes are regressed, and then you apply SmoothL1 and cross entropy to them... That's almost the same as what the traditional loss does. In that case, couldn't I also regard the traditional single loss as an EMD loss between the anchor set and the pred_boxes set, where the number of pred_boxes equals the number of anchors?

ming71 (Author) commented Jun 6, 2020

In detail: given an anchor set A of shape (1, 1, 4) (bs=1, num_anchor=1, 4=xywh) and a predicted box set B of shape (1, 2, 4), when we compute the EMD loss for regression we obtain res = smoothl1(A[0,0], B[0,0]) + smoothl1(A[0,0], B[0,1]).
For the single loss, the predicted box set B would have shape (1, 1, 4), and the result would be res = smoothl1(A[0,0], B[0,0]).

Sorry to disturb you, but is there something wrong with my understanding? Thank you.

xg-chu (Owner) commented Jun 7, 2020

The anchor is regressed by two parallel prediction heads, so anchor A is predicted as P_head0 and P_head1.
The loss is computed between the P_heads and the ground truths.
The EMD loss computes min((L(p_head0, B0) + L(p_head1, B1)), (L(p_head0, B1) + L(p_head1, B0))), which ensures the network can always be optimized in the optimal direction.
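To illustrate that matching step, here is a minimal sketch (assumed shapes and names, not the repository's exact implementation) of taking the minimum over the two possible assignments of the two heads to the ground truths B0 and B1; `pair_loss` is the hypothetical per-pair helper from the earlier sketch.

```python
import torch

def emd_loss(pred0_delta, pred0_score, pred1_delta, pred1_score,
             gt0_delta, gt0_label, gt1_delta, gt1_label):
    # Assignment 1: head0 <-> B0, head1 <-> B1
    loss_a = (pair_loss(pred0_delta, pred0_score, gt0_delta, gt0_label) +
              pair_loss(pred1_delta, pred1_score, gt1_delta, gt1_label))
    # Assignment 2: head0 <-> B1, head1 <-> B0
    loss_b = (pair_loss(pred0_delta, pred0_score, gt1_delta, gt1_label) +
              pair_loss(pred1_delta, pred1_score, gt0_delta, gt0_label))
    # Keep whichever assignment gives the smaller loss, per anchor
    return torch.min(loss_a, loss_b)
```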

ming71 (Author) commented Jun 8, 2020

Thanks for your reply, I understand it now.

ming71 closed this as completed Jun 8, 2020