Difference between Precision and mAP50:95? #8039

Closed
yolor2 opened this issue May 30, 2022 · 8 comments
Labels: question, Stale

Comments

@yolor2

yolor2 commented May 30, 2022

Search before asking

Question

Could you please tell me the difference between precision and mAP50:95? While running sweeps, for some sweeps I am getting a higher precision, around 84%, with mAP50:95 around 54, while for other sweeps both are roughly the same. What could be the reason for that?

Additional

No response

yolor2 added the question label May 30, 2022
@glenn-jocher
Member

See https://en.wikipedia.org/wiki/Precision_and_recall
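
For quick reference alongside that link, a minimal sketch of the two definitions (the counts below are hypothetical, not output from this repo):

```python
# Hypothetical counts for one class at one confidence threshold.
tp, fp, fn = 80, 20, 30      # true positives, false positives, false negatives

precision = tp / (tp + fp)   # 0.80 -- fraction of predicted boxes that are correct
recall = tp / (tp + fn)      # ~0.73 -- fraction of ground-truth objects that were found

print(precision, recall)
```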

@yolor2
Author

yolor2 commented Jun 3, 2022

@glenn-jocher what do you infer if the precision is around 80%, mAP50:95 is around 70, and mAP50 is around 90?

@glenn-jocher
Member

@yolor2 you can ignore P and R; they are relative metrics that depend on a confidence threshold. mAP is an absolute metric that is not a function of the confidence threshold.
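
As a rough illustration of the difference (a minimal sketch with made-up detections, not YOLOv5's actual metric code): precision and recall are computed after keeping only detections above a chosen confidence threshold, while AP summarizes the entire ranked list of detections by integrating precision over recall.

```python
import numpy as np

# Hypothetical detections for one class, sorted by confidence:
# (confidence, is_true_positive)
dets = [(0.95, True), (0.90, True), (0.80, False), (0.60, True),
        (0.40, False), (0.30, True), (0.10, False)]
n_gt = 5  # hypothetical number of ground-truth objects


def pr_at_threshold(dets, n_gt, conf_thres):
    """Precision and recall for detections kept at one confidence threshold."""
    kept = [tp for conf, tp in dets if conf >= conf_thres]
    tp = sum(kept)
    fp = len(kept) - tp
    precision = tp / (tp + fp) if kept else 1.0  # convention when nothing is kept
    recall = tp / n_gt
    return precision, recall


def average_precision(dets, n_gt):
    """AP = area under the PR curve built from ALL detections
    (i.e. effectively a confidence threshold of 0.0)."""
    tps = np.array([tp for _, tp in dets], dtype=float)
    cum_tp = np.cumsum(tps)
    cum_fp = np.cumsum(1.0 - tps)
    precision = cum_tp / (cum_tp + cum_fp)
    recall = cum_tp / n_gt
    # simple step integration of precision over recall
    return float(np.sum(np.diff(recall, prepend=0.0) * precision))


print(pr_at_threshold(dets, n_gt, 0.50))  # changes as you move the threshold
print(pr_at_threshold(dets, n_gt, 0.25))  # different P/R, same model
print(average_precision(dets, n_gt))      # one number, independent of the threshold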

@yolor2
Author

yolor2 commented Jun 15, 2022

@yolor2 you can ignore P and R; they are relative metrics that depend on a confidence threshold. mAP is an absolute metric that is not a function of the confidence threshold.

@glenn-jocher If mAP is an absolute metric, then why does changing the confidence threshold change the mAP values? Could you please help me with this dilemma?

@glenn-jocher
Member

@yolor2 mAP is not valid at any value other than the default confidence threshold or lower. Ideally it is only ever computed at a confidence of 0.0, by definition.

There is no correct mAP calculation at any other confidence.
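
A minimal numeric sketch of why this matters (hypothetical detections, not the repository's validation logic): raising the confidence threshold before computing AP throws away the low-confidence tail of the PR curve, capping recall and lowering the reported number.

```python
import numpy as np


def ap(dets, n_gt):
    """AP via step integration of precision over recall for a ranked detection list."""
    tps = np.array([tp for _, tp in dets], dtype=float)
    cum_tp, cum_fp = np.cumsum(tps), np.cumsum(1.0 - tps)
    precision = cum_tp / (cum_tp + cum_fp)
    recall = cum_tp / n_gt
    return float(np.sum(np.diff(recall, prepend=0.0) * precision))


dets = [(0.95, True), (0.80, True), (0.60, False), (0.40, True), (0.20, True)]
n_gt = 5

print(ap(dets, n_gt))                              # full PR curve (conf >= 0.0)
print(ap([d for d in dets if d[0] >= 0.5], n_gt))  # truncated curve: recall capped, lower "AP"
```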

@yolor2
Author

yolor2 commented Jun 20, 2022

@glenn-jocher hello, which do you think is better for an object detection task: mAP50 or mAP50:95, and why? I got mAP50 around 90% and mAP50:95 around 60%, so I am trying to find the justification for this gap.
Could you please help?

@glenn-jocher
Member

@yolor2 mAP@0.5 is the official VOC metric and mAP@0.5:0.95 is the official COCO metric. There is no 'best' metric, but mAP@0.5:0.95 is the most widely recognized object detection metric.
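
To illustrate how the two numbers relate (the per-threshold AP values below are hypothetical): mAP@0.5 is AP at a single IoU threshold of 0.5, while mAP@0.5:0.95 averages AP over the ten IoU thresholds 0.50, 0.55, ..., 0.95, so the stricter thresholds pull the average well below mAP@0.5. A large gap like 90% vs 60% is therefore normal.

```python
import numpy as np

iou_thresholds = np.arange(0.5, 1.0, 0.05)  # 0.50, 0.55, ..., 0.95 (10 values)

# Hypothetical AP at each IoU threshold: AP usually falls as the IoU
# requirement gets stricter, which is why mAP@0.5:0.95 < mAP@0.5.
ap_per_iou = np.array([0.90, 0.88, 0.85, 0.80, 0.74, 0.66, 0.55, 0.42, 0.28, 0.12])

map50 = ap_per_iou[0]          # mAP@0.5
map50_95 = ap_per_iou.mean()   # mAP@0.5:0.95

print(f"mAP@0.5      = {map50:.3f}")
print(f"mAP@0.5:0.95 = {map50_95:.3f}")  # ~0.62: well below mAP@0.5
```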

@github-actions
Contributor

github-actions bot commented Jul 21, 2022

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
