
Considering label scores in loss function #6

Closed
ErezBeyond opened this issue Jun 15, 2020 · 4 comments

@ErezBeyond

I can't seem to find where in the code the score of a pseudo box is taken into account. Specifically, where can we see the effect of zero-scored boxes (those that didn't pass the confidence threshold)?
As far as I can tell, it is missing from the code, though it is emphasized in the paper.
Thanks!

@zizhaozhang
Collaborator

Please see this line:

mask = pseudo_gt["scores"] >= true_confidence

@ErezBeyond
Author

Thanks for the reply!
It seems to me that this eventually removes pseudo boxes with lower scores, but that is different from giving them a zero weight, isn't it?
For instance, if a prediction agrees with some low-score box, it will be penalized as a false detection rather than ignored (zero weight). Am I missing something here?

@zizhaozhang
Collaborator

Hi,

By design, in eq. (2) of the paper we use a hard threshold, so w(x) is either 1 or 0 (as clearly specified in the paper). Boxes with w(x) = 0 are removed directly.
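For a loss that sums independent per-box terms, the two views are equivalent: multiplying a box's loss term by w(x) = 0 gives the same total as dropping the box before the sum. A minimal NumPy sketch with made-up scores and per-box losses (not code from this repository):

```python
import numpy as np

# Hypothetical scores and per-box loss terms, purely illustrative.
scores = np.array([0.95, 0.40, 0.85, 0.10])
per_box_loss = np.array([0.2, 1.5, 0.3, 2.0])
threshold = 0.5  # stands in for true_confidence

# Variant 1: hard-threshold weight w(x) in {0, 1}, as in eq. (2).
w = (scores >= threshold).astype(float)
loss_weighted = np.sum(w * per_box_loss)

# Variant 2: remove zero-weight boxes first, then sum (what the code does).
mask = scores >= threshold
loss_filtered = np.sum(per_box_loss[mask])

# Identical totals when each box contributes an independent term.
assert np.isclose(loss_weighted, loss_filtered)
```

The equivalence holds only as long as the surviving boxes' loss terms do not depend on the removed ones, which is the point raised in the next comment.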

@ErezBeyond
Author

Thank you.
For completeness, I would note that a zero weight is not necessarily equivalent to removing the box. The formulation in the paper seems better expressed by treating these boxes as "don't care" areas.
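The distinction above can be sketched with a toy matching example (hypothetical boxes and a 0.5 IoU matching rule, not code from this repository): a prediction overlapping a removed low-score pseudo box counts as a false positive, whereas with a "don't care" region it would be excluded from the FP count.

```python
def iou(a, b):
    # Boxes as [x1, y1, x2, y2].
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)

    def area(box):
        return (box[2] - box[0]) * (box[3] - box[1])

    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

gt_kept = [[0, 0, 10, 10]]     # pseudo box with score >= threshold
gt_low = [[20, 20, 30, 30]]    # low-score pseudo box (below threshold)
pred = [21, 21, 31, 31]        # prediction overlapping the low-score box

# Variant A: low-score box removed entirely. The prediction matches no
# remaining ground truth, so it is counted as a false positive.
matched = any(iou(pred, g) >= 0.5 for g in gt_kept)
fp_removed = not matched  # True

# Variant B: low-score box kept as a "don't care" region. The prediction
# overlaps an ignore region, so it is excluded from the FP count.
in_ignore = any(iou(pred, g) >= 0.5 for g in gt_low)
fp_ignore = (not matched) and not in_ignore  # False
```

Under Variant A the same prediction is penalized; under Variant B it is neither rewarded nor penalized, which is what a zero weight would achieve.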
