
Add mean average precision metric for object detection #467

Merged
196 commits merged into Lightning-AI:master on Oct 27, 2021

Conversation

@tkupek (Contributor) commented Aug 19, 2021

Mean Average Precision (mAP) for object detection

New metric for object detection.

What does this PR do?

This PR introduces the commonly used mean average precision metric for object detection.
As there are multiple different implementations, and even different calculations, the new metric wraps the pycocotools evaluation, which is used as a standard for several academic and open-source projects for evaluation.
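To wrap pycocotools, predictions and targets have to be converted from the usual `[x1, y1, x2, y2]` tensor format into the COCO annotation dicts that `COCOeval` consumes. The helper below is a hypothetical sketch of that conversion step (plain Python lists for clarity; the name `to_coco_format` and its exact fields are illustrative, not the PR's actual API):

```python
def to_coco_format(boxes, labels, scores=None, image_id=0):
    """Convert [x1, y1, x2, y2] boxes into COCO-style annotation dicts.

    Illustrative helper only: mirrors the kind of conversion a
    pycocotools-backed metric must perform internally.
    """
    anns = []
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        ann = {
            "image_id": image_id,
            "category_id": labels[i],
            "bbox": [x1, y1, x2 - x1, y2 - y1],  # COCO uses [x, y, w, h]
            "area": (x2 - x1) * (y2 - y1),
            "iscrowd": 0,
            "id": i + 1,  # pycocotools requires unique annotation ids
        }
        if scores is not None:
            ann["score"] = scores[i]  # only predictions carry a score
        anns.append(ann)
    return anns

preds = to_coco_format([[10, 20, 50, 80]], labels=[1], scores=[0.9])
print(preds[0]["bbox"])  # [10, 20, 40, 60]
```

The resulting dicts can then be loaded via `COCO.loadRes` and scored with `COCOeval`, which is the evaluation path this PR delegates to.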

This metric has been actively discussed in an issue; resolves #53.

TODO

  • check if pycocoeval can handle tensors to avoid .cpu() calls (it cannot)
  • standardize MAPMetricResults to have all evaluation results in there
  • refactor some code parts (e.g. join get_coco_target and get_coco_preds methods)
  • add unittests and documentation in torchmetrics format
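As background for the evaluation being wrapped: average precision at a single IoU threshold can be sketched in a few lines of pure Python. This is a simplified, single-image, single-class illustration; the real computation is delegated to pycocotools, which additionally averages over IoU thresholds 0.5:0.95, object areas, and maximum detection counts.

```python
def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision(preds, gts, iou_thr=0.5):
    """AP at one IoU threshold. preds: list of (score, box); gts: list of boxes."""
    preds = sorted(preds, key=lambda p: -p[0])  # rank by confidence
    matched, tp, fp = set(), 0, 0
    precisions = []
    for score, box in preds:
        # Greedily match against the best unmatched ground truth.
        best, best_j = 0.0, -1
        for j, gt in enumerate(gts):
            if j not in matched and iou(box, gt) > best:
                best, best_j = iou(box, gt), j
        if best >= iou_thr:
            matched.add(best_j)
            tp += 1
            precisions.append(tp / (tp + fp))  # precision at this recall step
        else:
            fp += 1
    # Mean precision over all recall levels (unreached levels count as 0).
    return sum(precisions) / len(gts) if gts else 0.0

print(average_precision([(0.9, [0, 0, 10, 10])], [[0, 0, 10, 10]]))  # 1.0
```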

Note

This is my first contribution to the PyTorchLightning project. Please review the code carefully and give me hints on how to improve it to match your guidelines.

@codecov (bot) commented Aug 19, 2021

Codecov Report

Merging #467 (13ec1ee) into master (b2240e1) will decrease coverage by 0%.
The diff coverage is 93%.

@@          Coverage Diff           @@
##           master   #467    +/-   ##
======================================
- Coverage      95%    95%    -0%     
======================================
  Files         148    150     +2     
  Lines        5142   5307   +165     
======================================
+ Hits         4897   5050   +153     
- Misses        245    257    +12     

@Borda Borda added enhancement New feature or request New metric labels Aug 19, 2021
torchmetrics/image/map.py — 10 review threads (outdated, resolved)
@SkafteNicki SkafteNicki added this to the v0.6 milestone Aug 19, 2021
tkupek and others added 7 commits August 19, 2021 13:24
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
torchmetrics/detection/map.py — review thread (resolved)
torchmetrics/detection/__init__.py — review thread (resolved)
@mergify mergify bot added the ready label Oct 27, 2021
@Borda (Member) commented Oct 27, 2021

Any idea why there is this massive test failure for PT 1.4?

E   ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject

Update: https://stackoverflow.com/questions/66060487/valueerror-numpy-ndarray-size-changed-may-indicate-binary-incompatibility-exp
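The linked answer attributes this to a numpy ABI mismatch: pycocotools was compiled against a different numpy version than the one installed. A common workaround (illustrative commands, not taken from this thread; adjust versions to your environment) is to rebuild pycocotools from source against the installed numpy, or to upgrade numpy itself:

```shell
# Rebuild pycocotools against the currently installed numpy
pip install --force-reinstall --no-binary pycocotools pycocotools

# ...or upgrade numpy to match the version pycocotools was built with
pip install --upgrade numpy
```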

@senarvi left a comment

This PR looks great to me.

@mergify mergify bot removed the ready label Oct 27, 2021
@mergify mergify bot added the ready label Oct 27, 2021
@Borda Borda merged commit 0b62e00 into Lightning-AI:master Oct 27, 2021
@tkupek (Contributor, Author) commented Oct 28, 2021

Thank you everybody for your support 🙂

@Borda (Member) commented Oct 28, 2021

> Thank you everybody for your support 🙂

The pleasure is on our side. Thanks for your work and patience... 🐰

@SkafteNicki (Member) commented

Thanks to everybody for their input on getting this metric implemented 😄
If anybody feels like it, it would be great to have an example in this folder:
https://github.com/PyTorchLightning/metrics/tree/master/tm_examples
as this metric is a bit more complicated to use than others.

@tkupek tkupek deleted the mean-average-precision branch November 15, 2021 21:36
@elstonaug commented
Hello, one question: do we need to exclude class 0 (the background class) when updating / handling the metrics?


Successfully merging this pull request may close these issues.

Add Mean Average Precision (mAP) metric