mAP Code in test.py #222
@fereenwong can you test against COCO to see what the effect of your change is on mAP?
@glenn-jocher mAP drops to 23.7
@fereenwong the mAP produced by this repo has been validated against pycocotools to within 1%: at 416 pixels we report 0.57 mAP, and pycocotools analysis of our JSON returns 0.565; at 608 pixels we report 0.611 mAP vs. pycocotools' 0.608. If your proposed change causes our mAP to deviate substantially from pycocotools, we naturally cannot accept it.
@glenn-jocher I also found that something changed at line 128 in test.py. I then ran a test with that condition commented out, as below:
@fereenwong yes, I tried this change, but as you said, the mAP drops (from 0.570 to 0.546) relative to the pycocotools mAP (0.565), so the alignment is closer without the update. I'm not sure of the cause.
@fereenwong this should be fixed now in commit e1850bf. |
mAP bug is resolved with commit 84f0df6. The problem was using a continuous integral rather than the 101-point interpolated integral (the COCO method). UPDATE: mAP only matches at 0.001 conf-thres, but computes much higher at 0.1 conf-thres. The problem remains.
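The distinction between the two integration methods can be sketched as follows. This is a hedged NumPy illustration with hypothetical helper names, not the repo's actual AP code: `ap_continuous` integrates the exact area under the precision-recall envelope, while `ap_101_point` samples the envelope at the 101 evenly spaced recall thresholds that COCO uses and averages them. On sparse curves (few detections, e.g. at a high conf-thres) the two can differ noticeably.

```python
import numpy as np

def pr_envelope(recall, precision):
    # Pad the curve and make precision monotonically decreasing
    # (the "precision envelope"), as most AP implementations do.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))
    return mrec, mpre

def ap_continuous(recall, precision):
    # Exact area under the envelope: sum precision over every
    # interval where recall changes.
    mrec, mpre = pr_envelope(recall, precision)
    i = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]))

def ap_101_point(recall, precision):
    # COCO-style AP: sample the envelope at 101 evenly spaced
    # recall thresholds (0.00, 0.01, ..., 1.00) and average.
    mrec, mpre = pr_envelope(recall, precision)
    x = np.linspace(0.0, 1.0, 101)
    idx = np.searchsorted(mrec, x, side="left")
    return float(np.mean(mpre[idx]))
```

For a single correct detection at recall 0.605 with precision 1.0, the continuous method gives 0.605 exactly, while the 101-point method gives 61/101 ≈ 0.604, since only 61 of the sampled thresholds fall at or below the achieved recall.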
Thanks for sharing.
For the code at line 125 in test.py, it seems to be a mistake, as a correct prediction requires both correct classification and accurate localization. In my view, the code below:
iou, bi = bbox_iou(pbox, tbox).max(0)
should be changed as:
iou, bi = bbox_iou(pbox, tbox[tcls == pcls]).max(0)
I am not sure if I am right, so feel free to correct me if I am wrong.
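The suggested class-filtered matching can be sketched as below. This is a NumPy sketch with hypothetical helper names (the repo's actual `bbox_iou` is a PyTorch function with a different signature). One subtlety the one-liner hides: after filtering targets with `tcls == pcls`, the argmax index refers to the filtered subset, so it must be mapped back to the full target list before that target can be marked as used:

```python
import numpy as np

def bbox_iou_1_to_n(box, boxes):
    # IoU between one box and N boxes, all in (x1, y1, x2, y2) format.
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area1 = (box[2] - box[0]) * (box[3] - box[1])
    area2 = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area1 + area2 - inter)

def match_same_class(pbox, pcls, tbox, tcls, iou_thres=0.5):
    # Consider only ground-truth boxes of the predicted class.
    mask = tcls == pcls
    if not mask.any():
        return False, -1  # no target of this class: automatic false positive
    ious = bbox_iou_1_to_n(pbox, tbox[mask])
    bi = int(ious.argmax())
    # Map the index within the filtered subset back to the full target list.
    orig_idx = int(np.flatnonzero(mask)[bi])
    return bool(ious[bi] > iou_thres), orig_idx
```

With two targets of classes 0 and 1, a prediction of class 1 overlapping the second box is matched to original index 1 (not index 0 of the filtered subset), and a prediction of a class absent from the image is rejected outright.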