mAP question #16
It's perfectly fine to ask here. Give me a sec and I will explain it to you. Could you please provide the plot of the AP of the class?
So, first of all, I recommend you to see this video. Basically, the mAP is a single-number metric used to evaluate rankings. In practice, the higher the confidence of a detection (from 0% to 100%), the more important it is: specifically, what happens at rank 1 is twice as important as what happens at rank 2. So it tells you how good your detector is, taking into account the confidence of each prediction. The AP is calculated as the area under the precision x recall curve.

As you can see from the left plot, the false predictions are all concentrated at the end and are probably associated with low confidence levels, meaning that in terms of mAP you have a very good model. In this case, if you find the right threshold for this class, one that removes those last points at the end (try, for example, a threshold of 0.1), you can get an even higher AP. If you get creative, you can even find the right threshold for each class.
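To make this concrete, here is a minimal sketch of how AP can be computed from a ranked list of detections, in the PASCAL VOC area-under-the-curve style. The detections and ground-truth count below are made-up values for illustration, not taken from this thread; it just shows why false predictions at the last ranks barely affect the AP.

```python
import numpy as np

def average_precision(is_true_positive, num_ground_truths):
    """AP as the area under the precision x recall curve.

    is_true_positive: one boolean per detection, sorted by
    descending confidence (rank 1 first).
    """
    is_tp = np.asarray(is_true_positive)
    tp = np.cumsum(is_tp)            # cumulative true positives per rank
    fp = np.cumsum(~is_tp)           # cumulative false positives per rank
    recall = tp / num_ground_truths
    precision = tp / (tp + fp)
    # Make precision monotonically decreasing (VOC-style interpolation).
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    # Sum the area under the stepwise precision-recall curve.
    ap = precision[0] * recall[0]
    for i in range(1, len(recall)):
        ap += precision[i] * (recall[i] - recall[i - 1])
    return ap

# Hypothetical ranking: all hits first, misses concentrated at the end.
# The low-confidence misses add no recall, so AP stays at 1.0.
print(average_precision([True, True, True, False, False],
                        num_ground_truths=3))
```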
mAP is the standard metric used in research papers. Your model seems to be working very well (in fact, it almost seems too good to be true). You can also have a look at other metrics, like the ROC curve.
This is a great explanation; now I have a much better intuition about mAP.
I have tried thresholds of 0.1, 0.05, 0.03, 0.02, and 0.01. The best mAP is at 0.02 (93.33%), which is 0.01% better than at 0.01 (93.32%), but I think the mAP doesn't decrease significantly with the increase in false predictions (at low rank/confidence). Am I right? And what do you think about the F1 value for object detection evaluation?
Yeah, you are right, it didn't make much difference since they are the last ranks! Well, it really depends on your application. The F1 value is used with ROC curves, so watch some videos about it (there are great ones on YouTube). Basically, it depends on how many false detections you want to allow. First, try to understand what precision and recall are; F1 is just a way to balance the two.
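As a rough illustration of that trade-off, here is a small sketch that sweeps a confidence threshold and reports precision, recall, and F1 at each setting. The detections, confidences, and ground-truth count are hypothetical values, not from this thread; the idea is that raising the threshold drops the low-confidence false predictions, which raises precision without hurting recall until real hits start being discarded.

```python
import numpy as np

def f1_sweep(confidences, is_true_positive, num_ground_truths, thresholds):
    """Print precision, recall, and F1 when detections below each
    confidence threshold are discarded."""
    confidences = np.asarray(confidences)
    is_tp = np.asarray(is_true_positive)
    for t in thresholds:
        keep = confidences >= t
        tp = np.sum(is_tp & keep)    # kept correct detections
        fp = np.sum(~is_tp & keep)   # kept false detections
        precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
        recall = tp / num_ground_truths
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) > 0 else 0.0)
        print(f"threshold={t:.2f}  precision={precision:.2f}  "
              f"recall={recall:.2f}  F1={f1:.2f}")

# Hypothetical class: high-confidence hits, low-confidence misses.
f1_sweep(confidences=[0.9, 0.8, 0.7, 0.05, 0.02],
         is_true_positive=[True, True, True, False, False],
         num_ground_truths=3,
         thresholds=[0.01, 0.02, 0.05, 0.1, 0.5])
```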
OK, thank you so much, sir. I'll learn more.
Hello @Cartucho, I have some questions about mAP.
As far as I know, mAP is a method for evaluating object detection tasks, but I am confused by my results.
I tried setting different thresholds and compared the resulting mAP and predicted objects. When I set the threshold very low (0.01) I got a higher mAP but more false predictions, and when I set the threshold to 0.5 I got a lower mAP but fewer false predictions, like the pic below.
I'm a newbie in object detection, but I think more false predictions should mean a lower mAP. Am I right?
Another question: does the mAP not represent object detection performance? Or is there another way to evaluate an object detection task?
I'm sorry if this question is not proper to ask here; if so, I will close/delete it ASAP.
Thank you.