
Issue about mAP #20

Closed
ericxian1997 opened this issue Jan 4, 2019 · 1 comment
Comments

@ericxian1997

It seems that the rank-1 and mAP values produced by state-of-the-art methods on CUHK03 are very close (some methods' mAP is even higher than their rank-1 accuracy). On other datasets such as Market-1501 and DukeMTMC-reID, however, mAP is usually 10%~15% lower than rank-1 accuracy.
What is the reason for the higher mAP on CUHK03?
When I use the evaluation code to test my model trained on CUHK03, the mAP is also 10%~15% lower than the rank-1 accuracy, which is inconsistent with the reported values.

@zhunzhong07
Owner

In the CUHK03 new protocol, there are only a few ground-truth images for each identity, so the mAP will be close to the rank-1 accuracy. Did you use the new setting when evaluating your model?
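
For reference, here is a minimal, illustrative sketch (not the repository's evaluation code) of how per-query average precision and rank-1 are typically computed in re-ID. The relevance lists below are made up; they just show that with only one or two ground-truth matches per query, AP is dominated by the rank of the first hit, so mAP stays close to (and can even exceed) rank-1 accuracy, while many ground-truth matches per query pull mAP well below rank-1.

```python
import numpy as np

def average_precision(relevance):
    """AP for one query: precision at each true match, averaged over all true matches."""
    relevance = np.asarray(relevance, dtype=float)
    hits = np.flatnonzero(relevance)          # ranked positions of ground-truth matches
    if hits.size == 0:
        return 0.0
    precisions = np.arange(1, hits.size + 1) / (hits + 1)
    return precisions.mean()

def rank1(relevance):
    """1 if the top-ranked gallery image is a true match, else 0."""
    return float(relevance[0] > 0)

# Toy ranked lists (1 = correct identity, 0 = wrong identity), purely illustrative.
few_gt  = [[1, 0, 0, 0], [0, 1, 0, 0]]                 # one ground truth per query
many_gt = [[1, 0, 1, 0, 0, 1, 0, 0, 0, 1]]             # several ground truths per query

for name, queries in [("few ground truths", few_gt), ("many ground truths", many_gt)]:
    mAP = np.mean([average_precision(q) for q in queries])
    r1 = np.mean([rank1(q) for q in queries])
    print(f"{name}: rank-1 = {r1:.2f}, mAP = {mAP:.2f}")
```

With few ground truths this prints rank-1 = 0.50 and mAP = 0.75 (mAP above rank-1, as observed on CUHK03), while with many ground truths it prints rank-1 = 1.00 and mAP = 0.64 (the usual gap seen on Market-1501 and DukeMTMC-reID).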
