mAP at testing #2123

gnoya opened this issue Dec 28, 2018 · 10 comments

gnoya commented Dec 28, 2018

Hi, I have read the detector.c code, and it seems that the mAP calculation run by ./darknet detector map ... uses the old mAP metric (averaging the precision at recall values of 0.1, 0.2, 0.3, ...). I have some doubts:

  1. In the YOLOv3 paper, the new mAP metric (from COCO) is shown as "AP" in Table 3, along with the old mAP metric, which is shown as AP50 and AP75. Are AP50 and AP75 the values you get from "./darknet detector map" with thresholds 0.50 and 0.75? How is "AP" calculated? Is there an already-implemented way of calculating it with some command?

  2. For the PR-curve graph: this repository gives you the precision at every recall point for every class. If I want to produce the overall PR curve, do I take the mean over all classes at every recall point?

Thanks!

AlexeyAB (Owner) commented

@gnoya Hi,

  1. Yes, ./darknet detector map ... by default calculates mAP@IoU=0.50.
    If you want to calculate mAP@IoU=0.75, then you should use ./darknet detector map ... -iou_thresh 0.75.
    mAP@IoU=0.50 is calculated as the average of the per-class APs, where each AP is computed at IoU-threshold = 0.5 (50%). How mAP is calculated: https://medium.com/@jonathan_hui/map-mean-average-precision-for-object-detection-45c121a31173

  2. You can take the mean over all classes at every recall point, but it will not be the same as a PR curve that is built independently of the classes. See the sketch below.
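For illustration, here is a minimal NumPy sketch of both points. The class names and all precision values are made up, and the 11-point interpolation (recall = 0.0 to 1.0) is assumed:

```python
import numpy as np

# 11-point interpolated precision values (at recall = 0.0, 0.1, ..., 1.0)
# for two hypothetical classes; all numbers are made up for illustration.
recall_points = np.linspace(0.0, 1.0, 11)
precision = {
    "class_a": np.array([1.00, 0.95, 0.92, 0.90, 0.85, 0.80, 0.72, 0.60, 0.45, 0.30, 0.10]),
    "class_b": np.array([1.00, 0.90, 0.88, 0.82, 0.78, 0.70, 0.65, 0.50, 0.35, 0.20, 0.05]),
}

# 11-point AP per class = mean of the interpolated precisions,
# and mAP@IoU=0.50 = mean of the per-class APs.
ap = {name: p.mean() for name, p in precision.items()}
map_50 = sum(ap.values()) / len(ap)
print("per-class AP:", ap, "mAP:", round(map_50, 4))

# Class-averaged PR curve: mean precision over classes at each recall point.
# Note this is NOT the same as a PR curve pooled over all detections.
mean_curve = np.mean(np.stack(list(precision.values())), axis=0)
for r, p in zip(recall_points, mean_curve):
    print(f"recall {r:.1f}: mean precision {p:.3f}")
```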


  • There is no old and new mAP. By default, mAP means mAP@IoU=0.50

    • There is mAP for Pascal VOC and ImageNet (mAP@IoU=0.50, or simply mAP)
    • There is AP@IoU=0.50 for MS COCO (the same as mAP for Pascal VOC and ImageNet)
    • There is AP@IoU=0.75 for MS COCO (or mAP@IoU=0.75)
    • There is AP@[.5, .95] for MS COCO (the average of the mAPs: AP@IoU=0.50, AP@IoU=0.55, ..., AP@IoU=0.95)
  • mAP is used in Pascal VOC and ImageNet, and this is the same as AP@IoU=0.50 in MS COCO: http://homepages.inf.ed.ac.uk/ckiw/postscript/ijcv_voc09.pdf

But yes, the authors of MS COCO introduce some confusion: http://cocodataset.org/#detection-eval

  1. AP is averaged over all categories. Traditionally, this is called "mean average precision" (mAP). We make no distinction between AP and mAP (and likewise AR and mAR) and assume the difference is clear from context.

Also, Jonathan Hui calls AP@[.5, .95] mAP@[.5, .95]: https://medium.com/@jonathan_hui/object-detection-speed-and-accuracy-comparison-faster-r-cnn-r-fcn-ssd-and-yolo-5425656ae359

FPN and Faster R-CNN* (using ResNet as the feature extractor) have the highest accuracy (mAP@[.5:.95]).
...
If mAP is calculated with one single IoU only, use mAP@IoU=0.75.


gnoya commented Dec 29, 2018

@AlexeyAB Thank you! Is there a way to calculate AP@[.5, .95] with the current commit? If there is not, will it work if I change lines 938 and 939 so that the point goes from 0.5 to 0.95, and also change line 953 to divide by the new number of iterated points?

Thanks!

AlexeyAB (Owner) commented

@gnoya

You should run several commands:

./darknet detector map obj.data yolo-obj.cfg yolo-obj.weights -iou_thresh 0.50
./darknet detector map obj.data yolo-obj.cfg yolo-obj.weights -iou_thresh 0.55
./darknet detector map obj.data yolo-obj.cfg yolo-obj.weights -iou_thresh 0.60
./darknet detector map obj.data yolo-obj.cfg yolo-obj.weights -iou_thresh 0.65
...
./darknet detector map obj.data yolo-obj.cfg yolo-obj.weights -iou_thresh 0.95

And then manually calculate AP@[.5, .95] as the average of these 10 mAPs.
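For example, a trivial sketch of that last averaging step; the ten per-threshold mAP values below are placeholders for the numbers you would read off the runs above:

```python
# mAP values read from the 10 runs above, one per IoU threshold
# 0.50, 0.55, ..., 0.95 (placeholder numbers, not real results).
maps = [0.72, 0.70, 0.67, 0.63, 0.58, 0.51, 0.43, 0.33, 0.21, 0.08]
assert len(maps) == 10

ap_50_95 = sum(maps) / len(maps)  # AP@[.5, .95] = mean of the 10 mAPs
print(f"AP@[.5, .95] = {ap_50_95:.4f}")
```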


gnoya commented Dec 30, 2018

@AlexeyAB Thanks! Last question: does the -thresh parameter (not -iou_thresh) affect the AP calculation?

AlexeyAB (Owner) commented

@gnoya No. -thresh doesn't affect AP or mAP.

  • -thresh affects IoU, F1, TP/FP/FN, and P/R, all reported for the current probability threshold (see the sketch below)

  • -iou_thresh affects the APs and mAP
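One way to see the distinction: sweep -thresh while holding -iou_thresh fixed, which gives one precision/recall operating point per run. A rough sketch, assuming the file paths from the commands above; the exact wording of Darknet's console output varies between versions, so the parsing may need adjusting:

```python
import re
import subprocess

BASE = "./darknet detector map obj.data yolo-obj.cfg yolo-obj.weights"

# Each -thresh value yields one P/R point at a fixed IoU threshold;
# changing -iou_thresh instead changes which detections count as correct.
for t in (0.1, 0.25, 0.5, 0.75, 0.9):
    out = subprocess.run(
        BASE.split() + ["-thresh", str(t), "-iou_thresh", "0.50"],
        capture_output=True, text=True,
    ).stdout
    # Grab the first "precision = ... recall = ..." pair in the output
    # (adjust the pattern to match your Darknet build's log format).
    m = re.search(r"precision = ([\d.]+).*?recall = ([\d.]+)", out, re.S)
    if m:
        print(f"thresh={t}: precision={m.group(1)}, recall={m.group(2)}")
```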

Fetulhak commented

@AlexeyAB @gnoya I am working on YOLOv3 object detection for medical image analysis. I want to plot the P-R curve for my output results. How can I produce the 11-point values for recall and precision? I am using AlexeyAB's repo.

AlexeyAB (Owner) commented

@Fetulhak Uncomment this line, rebuild Darknet, and run the mAP calculation:

//printf("Precision = %1.2f, Recall = %1.2f, avg IOU = %2.2f%% \n\n", class_precision, class_recall, avg_iou_per_class[i]);

Emirismail commented

@Fetulhak did you manage to plot the P-R curve? If you did, could you please share your approach with us?

Fetulhak commented

@Emirismail Like Alexey said, uncomment that print statement and you will get the 11-point values for your evaluation dataset. Taking those 11 precision values, you can plot the curve with the matplotlib library simply by giving it the x and y data values. That is what I did to plot the P-R curve for my result analysis. For example:
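A minimal matplotlib sketch of that, assuming you have copied the 11 precision values printed for one class (the values below are made up):

```python
import numpy as np
import matplotlib.pyplot as plt

# Recall points 0.0, 0.1, ..., 1.0 and the corresponding 11 precision
# values copied from Darknet's output for one class (made-up numbers).
recall = np.linspace(0.0, 1.0, 11)
precision = [1.00, 0.96, 0.93, 0.90, 0.86, 0.80, 0.71, 0.60, 0.44, 0.28, 0.10]

plt.plot(recall, precision, marker="o")
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("11-point P-R curve")
plt.grid(True)
plt.savefig("pr_curve.png")  # or plt.show()
```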

VidhyaPPP commented

But those per-class precision and recall values are calculated at conf_thresh = 0.25. How can I get precision and recall computed at confidence thresholds varying from 0 to 1 for each class? And how can I see the individual PR curve for each class to get its AP?
