mAP at testing #2123
@gnoya Hi,
But yes, the authors of MSCOCO cause confusion with their AP naming: http://cocodataset.org/#detection-eval
Jonathan Hui also uses this naming.
@AlexeyAB Thank you, is there a way to calculate AP@[.5, .95] with the current commit? If there isn't, will it work if I change lines 938 and 939 so that the IoU threshold goes from 0.5 to 0.95, and also change line 953 to divide by the new number of iterated points? Thanks!
You should run `./darknet detector map` several times with different `-iou_thresh` values, and then manually calculate the average.
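The averaging step can be sketched as follows. This is a minimal sketch, assuming you have already run the `map` command once per IoU threshold from 0.50 to 0.95 in 0.05 steps; the mAP numbers below are hypothetical placeholders, not real results.

```python
# Sketch: approximating COCO-style AP@[.5:.95] by averaging the mAP values
# reported at IoU thresholds 0.50, 0.55, ..., 0.95.
# The mAP values below are HYPOTHETICAL placeholders; substitute the numbers
# printed by each `./darknet detector map ... -iou_thresh <t>` run.

# One mAP value (in %) per threshold, in order 0.50 .. 0.95:
map_values = [74.2, 71.8, 68.5, 64.1, 58.9, 52.3, 43.7, 32.6, 19.4, 5.8]

# AP@[.5:.95] is the plain mean over the ten thresholds
ap_50_95 = sum(map_values) / len(map_values)
print(f"AP@[.5:.95] ~= {ap_50_95:.2f}%")
```

The same averaging applies regardless of how many thresholds you iterate, which is why the divisor in the question above must match the new number of iterated points.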
@AlexeyAB Thanks! Last question: does the "-thresh" parameter (not -iou_thresh) affect the AP calculation?
@gnoya No.
@Fetulhak did you manage to plot the P-R curve? If you did, could you please share your approach with us?
@Emirismail As Alexey said, uncomment that print statement and you will get the 11-point precision values for your evaluation dataset. Taking those 11 precision values, you can plot the curve with the matplotlib library simply by giving the x and y data values. That is what I did to plot the P-R curve for my result analysis.
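A minimal sketch of that plotting step, assuming you have collected the 11 printed precision values (the numbers below are hypothetical placeholders for one class):

```python
# Sketch: plotting an 11-point PR curve from precision values printed by a
# modified detector.c. The precision numbers are HYPOTHETICAL; replace them
# with the values printed for your class/dataset.

recall_points = [i / 10 for i in range(11)]            # 0.0, 0.1, ..., 1.0
precision_points = [1.00, 0.98, 0.96, 0.93, 0.90,      # hypothetical values
                    0.86, 0.80, 0.72, 0.60, 0.41, 0.15]

# The 11-point interpolated AP is simply the mean of these precision values
ap_11pt = sum(precision_points) / len(precision_points)

# Plotting is optional; guarded so the numeric part runs without matplotlib
try:
    import matplotlib
    matplotlib.use("Agg")  # headless backend, no display needed
    import matplotlib.pyplot as plt
    plt.plot(recall_points, precision_points, marker="o")
    plt.xlabel("Recall")
    plt.ylabel("Precision")
    plt.title(f"11-point PR curve (AP = {ap_11pt:.3f})")
    plt.savefig("pr_curve.png")
except ImportError:
    pass  # matplotlib not installed; AP value is still computed above
```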
But those precision and recall values for each class are calculated at conf_thresh=0.25. How do I get precision and recall values computed at confidence thresholds varying from 0 to 1 for each class? How do I see the individual PR curve for each class to get its AP?
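One way to get a full threshold sweep without modifying darknet is to export the detections and compute precision/recall in post-processing. A sketch of that computation, using a hypothetical list of detections already matched to ground truth (the data and the ground-truth count are made up for illustration):

```python
# Sketch: precision/recall at many confidence thresholds for one class,
# given detections already matched against ground truth.
# The detections and num_ground_truth below are HYPOTHETICAL; in practice
# you would export detections from darknet and do the TP/FP matching yourself.

# (confidence, is_true_positive) pairs for one class
detections = [(0.95, True), (0.90, True), (0.80, False), (0.75, True),
              (0.60, False), (0.55, True), (0.30, False), (0.10, True)]
num_ground_truth = 6  # hypothetical number of ground-truth boxes for this class

def pr_at_threshold(dets, n_gt, thresh):
    """Precision and recall counting only detections with conf >= thresh."""
    kept = [is_tp for conf, is_tp in dets if conf >= thresh]
    tp = sum(kept)
    fp = len(kept) - tp
    precision = tp / (tp + fp) if kept else 1.0  # convention: P=1 when nothing kept
    recall = tp / n_gt
    return precision, recall

# Sweep the confidence threshold from 0.00 to 1.00 in steps of 0.05
curve = [pr_at_threshold(detections, num_ground_truth, t / 100)
         for t in range(0, 101, 5)]
```

Repeating this per class gives you one PR curve per class, from which each class AP follows.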
Hi, I have read the detector.c code, and it seems that the mAP calculation when using ./darknet detector map ... computes the old mAP metric (calculating the precision at recall values of 0.1, 0.2, 0.3, ...). I have some doubts:
In the YOLOv3 paper, the new mAP metric (from COCO) is shown as "AP" in Table 3, along with the old mAP metric, shown as AP50 and AP75. Are AP50 and AP75 the values obtained by running "./darknet detector map" with IoU thresholds 0.50 and 0.75? How is the "AP" calculated? Is there an already-implemented command to compute it?
For the PR curve graph: this repository gives you the precision values at every recall point for every class. If I want to build an overall PR curve, do I take the mean over all classes at every recall point?
Thanks!
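The averaging proposed in the question (the mean of the per-class precision at each shared recall point, i.e. a macro-average) can be sketched as follows; the class names and precision values are hypothetical placeholders:

```python
# Sketch: macro-averaging per-class precision at each recall point to get an
# "overall" PR curve, as proposed in the question above.
# Class names and precision values are HYPOTHETICAL placeholders.

recall_points = [i / 10 for i in range(11)]  # 0.0 .. 1.0

per_class_precision = {
    "car":    [1.0, 0.98, 0.95, 0.91, 0.88, 0.83, 0.76, 0.68, 0.55, 0.38, 0.12],
    "person": [1.0, 0.97, 0.93, 0.90, 0.85, 0.79, 0.71, 0.62, 0.48, 0.30, 0.09],
}

# Mean precision over all classes at each recall point
mean_precision = [
    sum(vals[i] for vals in per_class_precision.values()) / len(per_class_precision)
    for i in range(len(recall_points))
]
```

Note this macro-average weights every class equally regardless of how many instances it has; that matches how mAP itself averages per-class APs.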