Visualize model predictions (get scores and boxes) #2
Comments
Hi @krxxxxxxxanc, you can add the following code at the end to filter out false positives with low confidence:

```python
keep = scores[0] > threshold   # keep only predictions above the score threshold
boxes = boxes[0, keep]
labels = labels[0, keep]
```
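[For illustration, a minimal self-contained sketch of how this filtering behaves on dummy tensors; the shapes assume a batch of one image and 300 queries as in Deformable DETR, and the 0.3 threshold is an arbitrary placeholder, not a recommended value:]

```python
import torch

scores = torch.rand(1, 300)               # per-query confidence scores
labels = torch.randint(0, 91, (1, 300))   # per-query class labels
boxes = torch.rand(1, 300, 4)             # per-query boxes

threshold = 0.3
keep = scores[0] > threshold              # boolean mask over the 300 queries
scores, boxes, labels = scores[0, keep], boxes[0, keep], labels[0, keep]
print(scores.shape, boxes.shape, labels.shape)  # only the confident predictions remain
```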
Thank you! When I calculate mAP for my dataset and set a score threshold there, I always get a worse result than without a threshold. But in my box visualization I can clearly see a lot of false positives. Is this a flaw in the mAP metric? I asked the same question for the original DETR, where it's even more extreme.
Alright, I think I get it now. In the COCO AP definition you measure how the N best predictions (N = 100 in this case, ranked by score) perform. Because they are ranked in descending order, false positives with low confidence don't have much impact on the AP. In a real use case you wouldn't use all 100 predictions, but would set a score threshold instead. If you agree with me, you can close this issue 👍
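[To make the ranking argument concrete, here is a toy sketch using a simplified VOC/COCO-style 101-point AP, not the full pycocotools evaluator; it shows that appending low-confidence false positives at the tail of the ranking leaves AP unchanged, because they never raise recall:]

```python
import numpy as np

def average_precision(is_tp, num_gt):
    """AP from TP/FP flags sorted by descending confidence (101-point interpolation)."""
    tp = np.cumsum(is_tp)
    precision = tp / np.arange(1, len(is_tp) + 1)
    recall = tp / num_gt
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 101):
        mask = recall >= r
        ap += precision[mask].max() / 101 if mask.any() else 0.0
    return ap

num_gt = 5
ranked = np.array([1, 1, 0, 1, 1, 1])               # five TPs, one high-confidence FP
tail = np.concatenate([ranked, np.zeros(94, int)])  # plus 94 low-confidence FPs
print(average_precision(ranked, num_gt))  # ~0.90
print(average_precision(tail, num_gt))    # identical: the tail FPs never raise recall
```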
Yes, I agree with you. At best the threshold can't have a positive impact on AP, and it may filter out true positives as well.
Is there information leakage when tuning the threshold based on test images?
Hello,
thank you for your great work!
I want to test the performance of my network on some test images. For this I visualize the predicted boxes and scores on the images. I got everything working, since I can reuse my code from the original DETR, but I was wondering how to get the correct scores and labels.
For DETR I did:
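[The code block was lost in this page's formatting; it was essentially the softmax-based filtering from the official DETR demo notebook, reconstructed here as a sketch:]

```python
# outputs = model(img)  # dict with 'pred_logits' [1, 100, 92] and 'pred_boxes' [1, 100, 4]
probas = outputs['pred_logits'].softmax(-1)[0, :, :-1]  # drop the "no object" class
keep = probas.max(-1).values > 0.7                      # the demo's 0.7 confidence cut
scores, labels = probas[keep].max(-1)
boxes = outputs['pred_boxes'][0, keep]
```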
Since you use a sigmoid function for Deformable DETR, I replaced these lines with:
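[Again reconstructing the lost snippet; this is essentially the top-k selection from the PostProcess class in deformable_detr.py, with the box-format conversion and rescaling to image size omitted:]

```python
import torch

# outputs = model(img)  # 'pred_logits' [1, 300, 91] and 'pred_boxes' [1, 300, 4]
out_logits, out_bbox = outputs['pred_logits'], outputs['pred_boxes']

prob = out_logits.sigmoid()                       # independent per-class scores, no softmax
topk_values, topk_indexes = torch.topk(prob.view(out_logits.shape[0], -1), 100, dim=1)
scores = topk_values
topk_boxes = topk_indexes // out_logits.shape[2]  # query index of each selected score
labels = topk_indexes % out_logits.shape[2]       # class index within that query
boxes = torch.gather(out_bbox, 1, topk_boxes.unsqueeze(-1).repeat(1, 1, 4))
# (cxcywh -> xyxy conversion and rescaling to image size omitted)
```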
(Heavily inspired by PostProcess class from deformable_detr.py 😄)
With this I get a lot of false positives. The scores are pretty low compared to softmax scores, so which threshold would you recommend to get rid of the false positives?