
Precision-Recall Curve feature update #1206

Merged 2 commits from pr_curve into master on Oct 25, 2020
Conversation

glenn-jocher (Member) commented on Oct 25, 2020

This is a follow-up to Precision-Recall Curve Feature Addition #1107.

This PR improves PR curve plotting and adds mAP@0.5 to the PR curve plot legend. Modifications to the curve computation bring our mAP results into closer alignment with pycocotools. This is done by skipping sentinel value concatenation in utils/general.py L331.
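The described change can be illustrated with a minimal sketch of the AP computation (hypothetical code, not the exact utils/general.py implementation): the precision envelope and interpolation are applied directly to the accumulated recall/precision curves, without concatenating sentinel values, and at the finer 1001-point resolution:

```python
import numpy as np

def compute_ap(recall, precision):
    """Sketch: average precision from one class's accumulated recall/precision arrays."""
    mrec = np.asarray(recall, dtype=float)
    mpre = np.asarray(precision, dtype=float)

    # Precision envelope: make precision monotonically decreasing over recall
    mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))

    # 1001-point interpolation (increased from 101 points)
    x = np.linspace(0.0, 1.0, 1001)
    y = np.interp(x, mrec, mpre)

    # Trapezoidal integration of the interpolated curve
    return np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x))
```

A perfect detector (precision 1 at every recall) yields AP = 1.0, and a curve falling linearly from (0, 1) to (1, 0) yields AP = 0.5, which makes the sketch easy to sanity-check.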

BEFORE

Namespace(augment=False, batch_size=32, conf_thres=0.001, data='./data/coco.yaml', device='', img_size=640, iou_thres=0.65, save_json=True, save_txt=False, single_cls=False, task='val', verbose=False, weights=['yolov5x.pt'])
Using CUDA device0 _CudaDeviceProperties(name='Tesla V100-SXM2-16GB', total_memory=16130MB)

Downloading https://github.com/ultralytics/yolov5/releases/download/v3.0/yolov5x.pt to yolov5x.pt...
100% 170M/170M [00:05<00:00, 32.2MB/s]

Fusing layers... 
Model Summary: 284 layers, 8.89222e+07 parameters, 0 gradients
Scanning labels ../coco/labels/val2017.cache (4952 found, 0 missing, 48 empty, 0 duplicate, for 5000 images): 5000it [00:00, 17859.20it/s]
               Class      Images     Targets           P           R      mAP@.5  mAP@.5:.95: 100% 157/157 [01:08<00:00,  2.30it/s]
                 all       5e+03    3.63e+04       0.409       0.754       0.669       0.476  < ---------- BEFORE
Speed: 5.8/1.4/7.2 ms inference/NMS/total per 640x640 image at batch-size 32

COCO mAP with pycocotools... saving detections_val2017__results.json...
loading annotations into memory...
Done (t=0.38s)
creating index...
index created!
Loading and preparing results...
DONE (t=4.39s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=80.33s).
Accumulating evaluation results...
DONE (t=11.94s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.491  < ---------- PYCOCOTOOLS @.5:.95
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.676  < ---------- PYCOCOTOOLS @.5
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.534
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.318
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.540
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.633
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.376
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.616
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.670
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.493
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.723
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.812

AFTER

Namespace(augment=False, batch_size=32, conf_thres=0.001, data='./data/coco.yaml', device='', img_size=640, iou_thres=0.65, save_json=True, save_txt=False, single_cls=False, task='val', verbose=False, weights=['yolov5x.pt'])
Using CUDA device0 _CudaDeviceProperties(name='Tesla V100-SXM2-16GB', total_memory=16130MB)

Fusing layers... 
Model Summary: 284 layers, 8.89222e+07 parameters, 0 gradients
Scanning labels ../coco/labels/val2017.cache (4952 found, 0 missing, 48 empty, 0 duplicate, for 5000 images): 5000it [00:00, 16272.92it/s]
               Class      Images     Targets           P           R      mAP@.5  mAP@.5:.95: 100% 157/157 [01:09<00:00,  2.26it/s]
                 all       5e+03    3.63e+04       0.409       0.754       0.672       0.483  < ---------- AFTER
Speed: 5.8/1.4/7.2 ms inference/NMS/total per 640x640 image at batch-size 32

COCO mAP with pycocotools... saving detections_val2017__results.json...
loading annotations into memory...
Done (t=0.38s)
creating index...
index created!
Loading and preparing results...
DONE (t=4.39s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=80.33s).
Accumulating evaluation results...
DONE (t=11.94s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.491  < ---------- PYCOCOTOOLS @.5:.95
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.676  < ---------- PYCOCOTOOLS @.5
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.534
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.318
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.540
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.633
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.376
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.616
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.670
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.493
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.723
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.812

UPDATED PLOT
[precision-recall_curve image]

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Enhanced precision-recall curve plotting and AP calculation in Ultralytics YOLOv5.

📊 Key Changes

  • Improved interpolation of precision at mAP@0.5 during AP calculation.
  • Updated the plotting method to display mAP@0.5 directly on precision-recall curves.
  • Increased resolution of the interpolation method in AP calculation from 101 to 1001 points.
  • Simplified the modification of recall and precision arrays by removing unnecessary sentinel value concatenation.
  • Added error handling in plot_results to ensure results.txt files exist before plotting.
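As a sketch of what the updated plotting can look like (hypothetical names, not the repository's exact implementation), the per-class curves are drawn faintly and the class-mean curve carries the mAP@0.5 value in its legend label:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

def plot_pr_curve(px, py, ap, save_path="precision_recall_curve.png"):
    """Sketch: per-class PR curves with mAP@0.5 shown in the legend."""
    fig, ax = plt.subplots(1, 1, figsize=(9, 6))
    py = np.stack(py, axis=1)  # shape (len(px), num_classes)
    ax.plot(px, py, linewidth=1, color="grey")  # one faint line per class
    ax.plot(px, py.mean(axis=1), linewidth=3, color="blue",
            label=f"all classes {ap.mean():.3f} mAP@0.5")
    ax.set_xlabel("Recall")
    ax.set_ylabel("Precision")
    ax.set_xlim(0, 1)
    ax.set_ylim(0, 1)
    ax.legend()
    fig.savefig(save_path, dpi=200)
    plt.close(fig)
```

Here `px` is the shared recall axis and `py` is a list of per-class precision arrays sampled on it; both names are illustrative.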

🎯 Purpose & Impact

  • 🎨 Provides more accurate visualization by plotting the mean average precision directly on the graphs, aiding in interpretability.
  • 📈 Increases the granularity of the average precision calculation, potentially improving the evaluation metric's reliability.
  • 🔧 Streamlines AP calculation code by removing redundant steps, enhancing code maintainability.
  • ⚠️ Introduces a check for the existence of input files to prevent errors during result plotting, improving user experience.
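The existence check mentioned above can be sketched as follows (hypothetical signature; the actual plot_results in the repository takes additional arguments):

```python
from pathlib import Path

def plot_results(save_dir="."):
    """Sketch: verify results files exist before attempting to plot them."""
    files = sorted(Path(save_dir).glob("results*.txt"))
    assert files, f"No results.txt files found in {Path(save_dir).resolve()}, nothing to plot."
    for f in files:
        data = f.read_text()  # parsing and plotting of each run would follow here
    return files
```

Failing fast with a clear message avoids the cryptic downstream errors that occur when plotting code iterates over an empty file list.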

These changes will likely make it easier for users to understand model performance metrics and will contribute to more robust model evaluation procedures.

@glenn-jocher glenn-jocher merged commit ed85038 into master Oct 25, 2020
@glenn-jocher glenn-jocher deleted the pr_curve branch October 25, 2020 11:55
glenn-jocher added a commit that referenced this pull request Oct 25, 2020
burglarhobbit pushed a commit to burglarhobbit/yolov5 that referenced this pull request Jan 1, 2021
* Precision-Recall Curve feature update

* sentinel value update
KMint1819 pushed a commit to KMint1819/yolov5 that referenced this pull request May 12, 2021
BjarneKuehl pushed a commit to fhkiel-mlaip/yolov5 that referenced this pull request Aug 26, 2022