

Do you use cuhk03 evaluation metric or market1501 evaluation metric? #13

Closed
Wanggcong opened this issue Mar 1, 2018 · 4 comments

@Wanggcong

No description provided.

@layumi
Owner

layumi commented Mar 1, 2018

The evaluation metric is written for Market1501, DukeMTMC-reID and CUHK03-NP.
It is not suitable for the original CUHK03 protocol, which requires training 20 models (on different training splits) and testing each on its corresponding test split, then reporting the average rank-1, rank-5 and rank-10 scores.
If you want to run CUHK03, you may refer to the Caffe evaluation code (you can google it).

@Wanggcong
Author

Maybe I did not make my point clear. The CUHK03 and Market1501 evaluation metrics I mentioned above are two different methods of computing the CMC curve. See: https://cysu.github.io/open-reid/notes/evaluation_metrics.html.

I wonder which metric you use in these experiments. (I guess it is the Market1501 evaluation metric.)

@layumi
Owner

layumi commented Mar 3, 2018

Hi @Wanggcong ,
Since the code runs on Market1501, we use the Market1501 evaluation metric.
I converted the original Matlab code to a Python version and checked the result: it achieves the same Rank-1 and mAP as the original Matlab code.

Note that the evaluation code in Open-ReID may be slightly different from mine.
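For reference, the Market1501-style protocol for a single query can be sketched roughly as follows. This is a minimal NumPy sketch, not the repository's actual code: the function name `evaluate_query` and its exact signature are assumptions, and only the standard rule is implemented (gallery images sharing both the query's identity and camera are removed as junk; CMC counts a query as a hit from the first correct match onward; AP is computed over the filtered ranking):

```python
import numpy as np

def evaluate_query(scores, gallery_ids, gallery_cams,
                   query_id, query_cam, max_rank=10):
    """CMC (up to max_rank) and AP for one query, Market1501 style.

    Hypothetical helper illustrating the protocol discussed above.
    """
    order = np.argsort(-scores)          # rank gallery by descending similarity
    ids = gallery_ids[order]
    cams = gallery_cams[order]

    # Junk removal: same identity seen by the same camera as the query
    keep = ~((ids == query_id) & (cams == query_cam))
    matches = (ids[keep] == query_id).astype(int)

    if matches.sum() == 0:               # query has no valid ground truth
        return None, None

    # CMC: 1 at every rank from the first correct hit onward
    cmc = (matches.cumsum() >= 1).astype(int)[:max_rank]

    # Average precision over the ranked, junk-free list
    hit_positions = np.flatnonzero(matches)
    precision_at_hits = (np.arange(len(hit_positions)) + 1) / (hit_positions + 1)
    ap = precision_at_hits.mean()
    return cmc, ap
```

Averaging `cmc` and `ap` over all queries gives the CMC curve and mAP; the single-shot repeated-sampling variant used by the original CUHK03 protocol would average CMC over random gallery subsamples instead.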

@Wanggcong
Author

Thanks for your help! @layumi
