Feature MeanAveragePrecision 🔥 #236
Conversation
@SkalskiP @mayankagarwals, this feature is not completed yet, but just to keep track. I am still facing some issues with result differences against other baseline methods. I will let you guys know once it is ready to test.
No rush, @hardikdava! In the meantime, I'm putting some work into: #170.
@SkalskiP model=yolov8n.pt
@mayankagarwals there is a bug, but I am not able to find it. Let me know if you have previous experience with this.
I do have some experience, but it's been a while. Can you give me a Colab to reproduce these numbers for supervision? I'll take a look.
The Colab link is attached in the description.
Amazing, thanks!
@SkalskiP @mayankagarwals the bug is fixed. We have working code for it now.
@SkalskiP Should we add functions for plotting curves, or are just the values fine?
@SkalskiP Final report on values:
model=yolov8n.pt
model=yolov8s.pt
@SkalskiP I have fixed the changes you requested.
@SkalskiP everything good? Merging it?
@hardikdava I'm working on unit tests. We will merge it when I'm done.
supervision/metrics/detection.py
Outdated
map (float): mAP value.
map50 (float): mAP value at IoU `threshold = 0.5`.
map75 (float): mAP value at IoU `threshold = 0.75`.
average_precisions (np.ndarray): AP values for each class.
@hardikdava could you help me out here? What is stored under the `average_precisions` field?
`average_precisions` holds the AP results at different IoU thresholds, from 0.5 to 0.95 at an interval of 0.05, and the results are per class. Columns represent classes and rows represent IoU thresholds.
So what would be the expected shape of this np array? (N, 10), where N is the number of classes?
Yeah, a NumPy array of shape (number of classes, 10).
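For anyone following along, a minimal sketch of how a matrix like that would collapse into the reported scalars, assuming rows are classes and columns are the IoU thresholds 0.50, 0.55, ..., 0.95 in order (the variable names are illustrative, not the PR's actual code; as the thread below works out, the shipped field ends up already averaged per class):
>>> import numpy as np
>>> ap_matrix = np.random.rand(3, 10)  # hypothetical per-class, per-threshold APs
>>> map_50_95 = ap_matrix.mean()  # mAP over all classes and thresholds
>>> map50 = ap_matrix[:, 0].mean()  # column 0 is threshold 0.50
>>> map75 = ap_matrix[:, 5].mean()  # column 5 is threshold 0.75
>>> per_class = ap_matrix.mean(axis=1)  # averaged AP per class, shape (3,)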
I've done some unit tests, and it looks like the shape is different :/
Test 1
>>> import supervision as sv
>>> import numpy as np
>>> predictions = [
... np.empty((0, 6), dtype=np.float32),
... ]
>>> targets = [
... np.empty((0, 5), dtype=np.float32),
... ]
>>> sv.MeanAveragePrecision.from_tensors(predictions=predictions, targets=targets)
# MeanAveragePrecision(map=0, map50=0, map75=0, average_precisions=[])
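For readers skimming the tests: the row layout they assume, inferred from the column counts above (the names here are descriptive only, not API arguments), is box coordinates, class id, and, for predictions only, a confidence score:
>>> import numpy as np
>>> prediction = np.array([[0.0, 0.0, 1.0, 1.0, 1, 0.9]])  # (x_min, y_min, x_max, y_max, class_id, confidence)
>>> target = np.array([[0.0, 0.0, 1.0, 1.0, 1]])  # (x_min, y_min, x_max, y_max, class_id)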
Test 2
>>> import supervision as sv
>>> import numpy as np
>>> predictions = [
... np.array([
... [0.0, 0.0, 1.0, 1.0, 1, 0.9],
... ]),
... ]
>>> targets = [
... np.array([
... [0.0, 0.0, 1.0, 1.0, 1],
... ]),
... ]
>>> sv.MeanAveragePrecision.from_tensors(predictions=predictions, targets=targets)
# MeanAveragePrecision(map=0.9949999999999999, map50=0.995, map75=0.995, average_precisions=array([ 0.995]))
Test 3
>>> import supervision as sv
>>> import numpy as np
>>> predictions = [
... np.array([
... [0.0, 0.0, 1.0, 1.0, 1, 0.9],
... ]),
... np.array([
... [0.0, 0.0, 1.0, 1.0, 1, 0.9],
... ]),
... np.array([
... [0.0, 0.0, 1.0, 1.0, 2, 0.9],
... ]),
... ]
>>> targets = [
... np.array([
... [0.0, 0.0, 1.0, 1.0, 1],
... ]),
... np.array([
... [0.0, 0.0, 0.8, 0.8, 1],
... ]),
... np.array([
... [0.0, 0.0, 1.0, 1.0, 2],
... ]),
... ]
>>> result = sv.MeanAveragePrecision.from_tensors(predictions=predictions, targets=targets)
>>> result
# MeanAveragePrecision(map=0.864625, map50=0.995, map75=0.80875, average_precisions=array([ 0.73425, 0.995]))
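The class split in Test 3 checks out against the box overlap: in the second image, the 1.0 x 1.0 class-1 prediction against its 0.8 x 0.8 target gives IoU 0.64, so that pair counts as a true positive only at thresholds up to 0.60. That is why map75 drops well below map50 while class 2 stays at 0.995. A quick check:
>>> intersection = 0.8 * 0.8  # the boxes share the origin, so the overlap is the smaller box
>>> union = 1.0 * 1.0 + 0.8 * 0.8 - intersection
>>> round(intersection / union, 2)
# 0.64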
Those are simply the average precisions per class: not per threshold, but already averaged over the thresholds.
@hardikdava is that what you intended?
Ohh yeah, `average_precisions` already holds the per-class AP values, averaged over the IoU thresholds. Sorry for the miscommunication. This is intended.
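As a quick sanity check with the Test 3 numbers above, the overall map is exactly the mean of the per-class values, so the field is consistent with the reported scalars:
>>> import numpy as np
>>> np.mean([0.73425, 0.995])
# 0.864625, equal to result.map from Test 3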
Description
This PR is related to features #206 and #232.
Type of change
How has this change been tested? Please provide a test case or example of how you tested the change.
Google Colab for testing: https://colab.research.google.com/drive/1MT5wc6H8WPshQfPoQITr6mLOcfd47x5E?usp=sharing
Any specific deployment considerations
For example, documentation changes, usability, usage/costs, secrets, etc.
Docs
docs/supervision/metrics/detection
Baseline metrics for comparison: