Feature MeanAveragePrecision 🔥 #236

Merged
merged 26 commits into roboflow:develop on Aug 7, 2023

Conversation

@hardikdava (Collaborator) commented Jul 24, 2023

Description

This PR is related to features #206 and #232.

Type of change

  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

How has this change been tested? Please provide a test case or example of how you tested the change.

Google Colab for testing: https://colab.research.google.com/drive/1MT5wc6H8WPshQfPoQITr6mLOcfd47x5E?usp=sharing

Any specific deployment considerations

For example, documentation changes, usability, usage/costs, secrets, etc.

Docs

  • Docs updated at docs/supervision/metrics/detection

Baseline metrics for comparison:

  • Ultralytics
  • Torchmetrics (utilizes the COCO API; see the sketch below)
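
For reference, a minimal sketch of how a Torchmetrics baseline number can be obtained; the actual evaluation loop lives in the linked Colab, and the boxes and labels below are illustrative placeholders:

```python
# Minimal sketch of a Torchmetrics mAP baseline; the real evaluation loop is in the linked Colab.
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

metric = MeanAveragePrecision()  # COCO-style mAP averaged over IoU 0.5:0.95

# One image worth of illustrative data: predictions need boxes, scores and labels,
# targets need boxes and labels.
preds = [{
    "boxes": torch.tensor([[0.0, 0.0, 100.0, 100.0]]),
    "scores": torch.tensor([0.9]),
    "labels": torch.tensor([1]),
}]
target = [{
    "boxes": torch.tensor([[0.0, 0.0, 100.0, 100.0]]),
    "labels": torch.tensor([1]),
}]

metric.update(preds, target)
print(metric.compute())  # dict containing 'map', 'map_50', 'map_75', ...
```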

@hardikdava (Collaborator, Author)

@SkalskiP @mayankagarwals, this feature is not complete yet, but I'm opening the PR to keep track of it. I'm still seeing some differences in the results compared to the other baseline methods. I'll let you know once it's ready to test.

@SkalskiP (Collaborator)

No rush, @hardikdava! In the meantime, I'm putting some work into: #170.

@hardikdava (Collaborator, Author) commented Jul 25, 2023

@SkalskiP
Comparison between different baselines:

model=yolov8n.pt

| Framework    | map50   | map75   | map (0.5:0.95) |
|--------------|---------|---------|----------------|
| Ultralytics  | 0.605   | -       | 0.44           |
| Torchmetrics | 0.6145  | 0.4786  | 0.4473         |
| Supervision  | 0.56066 | 0.48054 | 0.432033       |

@mayankagarwals there is a bug, but I am not able to find it. Let me know if you have previous experience with mAP.

@mayankagarwals (Contributor)

I do have some experience but it's been a while. Can you give me a Colab to reproduce these numbers for supervision? I'll take a look.

@hardikdava (Collaborator, Author)

> I do have some experience but it's been a while. Can you give me a Colab to reproduce these numbers for supervision? I'll take a look.

The Colab link is attached in the description.

@mayankagarwals (Contributor)

> The Colab link is attached in the description.

Amazing, thanks!

@hardikdava (Collaborator, Author)

@SkalskiP @mayankagarwals the bug is fixed. We now have working code for mAP.

@hardikdava (Collaborator, Author)

@SkalskiP Should we add functions for plotting curves, or are the values alone fine?

@hardikdava (Collaborator, Author) commented Jul 26, 2023

@SkalskiP Final report on values:

model=yolov8n.pt

| Framework    | map50  | map75  | map (0.5:0.95) |
|--------------|--------|--------|----------------|
| Ultralytics  | 0.605  | -      | 0.44           |
| Torchmetrics | 0.6145 | 0.4786 | 0.4473         |
| Supervision  | 0.6129 | 0.4807 | 0.448          |

model=yolov8s.pt

| Framework    | map50  | map75   | map (0.5:0.95) |
|--------------|--------|---------|----------------|
| Ultralytics  | 0.759  | -       | 0.585          |
| Torchmetrics | 0.7573 | 0.6331  | 0.5820         |
| Supervision  | 0.7577 | 0.63638 | 0.585          |
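
For context, here is a rough sketch (not the exact Colab code) of how numbers like the Supervision rows can be produced with the new metric. It assumes a hypothetical `dataset` iterable of (image, ground-truth array) pairs and the [x1, y1, x2, y2, class_id, confidence] prediction layout used in the unit tests further down:

```python
# Rough sketch, not the exact Colab code: build an (N, 6) prediction array and an
# (N, 5) target array per image, then feed them to MeanAveragePrecision.from_tensors.
import numpy as np
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Placeholder "dataset": one blank image with no ground-truth boxes, just to keep the sketch runnable.
dataset = [(np.zeros((640, 640, 3), dtype=np.uint8), np.empty((0, 5), dtype=np.float32))]

predictions, targets = [], []
for image, ground_truth in dataset:
    boxes = model(image)[0].boxes
    predictions.append(np.hstack([
        boxes.xyxy.cpu().numpy(),                 # x1, y1, x2, y2
        boxes.cls.cpu().numpy().reshape(-1, 1),   # class id
        boxes.conf.cpu().numpy().reshape(-1, 1),  # confidence
    ]))
    targets.append(ground_truth)                  # x1, y1, x2, y2, class id

result = sv.MeanAveragePrecision.from_tensors(predictions=predictions, targets=targets)
print(result.map50, result.map75, result.map)
```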

@SkalskiP marked this pull request as ready for review on August 6, 2023, 09:07
@SkalskiP added the `enhancement` (New feature or request) and `version: 0.13.0` (Feature to be added in `0.13.0` release) labels on Aug 6, 2023
@SkalskiP added this to the `version: 0.13.0` milestone on Aug 6, 2023
5 resolved review threads on supervision/metrics/detection.py (outdated)
@hardikdava (Collaborator, Author)

@SkalskiP I have made the changes you requested.

@hardikdava (Collaborator, Author)

@SkalskiP Everything good? Merging it?

@SkalskiP (Collaborator) commented Aug 6, 2023

@hardikdava I'm working on unit tests. We will merge it when I'm done.

map (float): mAP value.
map50 (float): mAP value at IoU `threshold = 0.5`.
map75 (float): mAP value at IoU `threshold = 0.75`.
average_precisions (np.ndarray): values for every classes.
@SkalskiP (Collaborator)

@hardikdava could you help me out here? What is stored under the average_precisions field?

@hardikdava (Collaborator, Author)

The average_precisions field holds the AP results at different IoU thresholds, starting from 0.5 up to 0.95 at an interval of 0.05, and the results are per class. Columns represent classes and rows represent IoU thresholds.

@SkalskiP (Collaborator)

So what would be the expected shape of this np array? (N, 10), where N is the number of classes?

@hardikdava (Collaborator, Author)

Yeah, a NumPy array of shape (num_classes, 10).

@SkalskiP (Collaborator)

I've done some unit tests and it looks like the shape is different :/

@SkalskiP (Collaborator)

Test 1

>>> import supervision as sv
>>> import numpy as np

>>> predictions = [
...     np.empty((0, 6), dtype=np.float32),
... ]
>>> targets = [
...     np.empty((0, 5), dtype=np.float32),
... ]

>>> sv.MeanAveragePrecision.from_tensors(predictions=predictions, targets=targets)
# MeanAveragePrecision(map=0, map50=0, map75=0, average_precisions=[])

Test 2

>>> import supervision as sv
>>> import numpy as np

>>> predictions = [
...     np.array([
...         [0.0, 0.0, 1.0, 1.0, 1, 0.9],
...     ]),
... ]
>>> targets = [
...     np.array([
...         [0.0, 0.0, 1.0, 1.0, 1],
...     ]),
... ]

>>> sv.MeanAveragePrecision.from_tensors(predictions=predictions, targets=targets)
# MeanAveragePrecision(map=0.9949999999999999, map50=0.995, map75=0.995, average_precisions=array([      0.995]))

Test 3

>>> import supervision as sv
>>> import numpy as np

>>> predictions = [
...     np.array([
...         [0.0, 0.0, 1.0, 1.0, 1, 0.9],
...     ]),
...     np.array([
...         [0.0, 0.0, 1.0, 1.0, 1, 0.9],
...     ]),
...     np.array([
...         [0.0, 0.0, 1.0, 1.0, 2, 0.9],
...     ]),
... ]
>>> targets = [
...     np.array([
...         [0.0, 0.0, 1.0, 1.0, 1],
...     ]),
...     np.array([
...         [0.0, 0.0, 0.8, 0.8, 1, 0.9],
...     ]),
...     np.array([
...         [0.0, 0.0, 1.0, 1.0, 2, 0.9],
...     ]),
... ]

>>> result = sv.MeanAveragePrecision.from_tensors(predictions=predictions, targets=targets)
# MeanAveragePrecision(map=0.864625, map50=0.995, map75=0.80875, average_precisions=array([    0.73425,       0.995]))

@SkalskiP (Collaborator)

Those are simply average precisions per class. Not per threshold but already averaged.

@hardikdava is it what you intended?

@hardikdava (Collaborator, Author)

Ohh yeah, average_precisions already holds the per-class mAP values. Sorry for the miscommunication. This is intended.
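
In other words, the reported map is the mean of the per-class values stored in average_precisions (each already averaged over IoU thresholds 0.5:0.95), which lines up with the Test 3 output above:

```python
# Sanity check against the Test 3 output above: mAP equals the mean of the per-class APs.
import numpy as np

average_precisions = np.array([0.73425, 0.995])  # per-class APs reported in Test 3
print(average_precisions.mean())                 # 0.864625, matching result.map
```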

@SkalskiP merged commit 525e1b0 into roboflow:develop on Aug 7, 2023
4 checks passed
@SkalskiP mentioned this pull request on Aug 7, 2023