
Market1501 test set not complete? #13

Closed
kilianyp opened this issue Jun 7, 2018 · 3 comments

kilianyp commented Jun 7, 2018

Hi,
I noticed that there are only 15,913 images in the test set, but the original test set contains 19,732 images (http://www.liangzheng.org/Project/project_reid.html). Is this on purpose?

Thanks!


kilianyp commented Jun 7, 2018

Also, the numbers in the README do not match (https://github.com/huanghoujing/person-reid-triplet-loss-baseline#training-time). The number of training images is correct, though:

market1501 trainval set

NO. Images: 12936
NO. IDs: 751

This is logged for the test set:
market1501 test set

NO. Images: 31969
NO. IDs: 751
NO. Query Images: 3368 (correct)
NO. Gallery Images: 15913 (expected: 19732)
NO. Multi-query Images: 12688
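For what it's worth, the logged counts above are internally consistent, assuming the test-set total is query + gallery + multi-query:

```python
# Counts as logged by the training script above.
query, gallery, multi_query = 3368, 15913, 12688

# The reported total of 31969 test images is exactly their sum.
assert query + gallery + multi_query == 31969
```

So the only discrepancy is the gallery size itself, not how the total is computed.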


kilianyp commented Jun 8, 2018

EDIT:
Okay, sorry for the confusion; this seems to be standard testing procedure:
https://github.com/VisualComputingInstitute/triplet-reid/blob/master/excluders/market1501.py#L29

I did some digging, and the missing images are the ones whose filenames start with -1, which are filtered out. There are 3819 such images in the test set:
15913 + 3819 = 19732
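The filtering step above can be sketched as follows. Market-1501 filenames encode the person id as the first underscore-separated field (e.g. '0001_c1s1_001051_00.jpg'), with '-1' marking junk detections; the helper name here is hypothetical, not from the repo:

```python
def split_market1501_test(image_names):
    """Partition Market-1501 bounding_box_test filenames into junk
    (person id -1) and usable gallery images. The leading filename
    field is the person id; '-1' marks junk DPM detections, while
    '0000' distractors stay in the gallery."""
    junk, gallery = [], []
    for name in image_names:
        pid = name.split('_')[0]
        (junk if pid == '-1' else gallery).append(name)
    return junk, gallery

# On the real test set this should give len(junk) == 3819 and
# len(gallery) == 15913, summing to the official 19732.
```

Note that '0000' distractor images are kept, since per the dataset FAQ only the '-1' junk images are neglected at evaluation time.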

Evaluating with those images included does, however, decrease performance by quite a margin. I downloaded the weights you uploaded and evaluated with stride 1. Note that I am using scikit-learn 0.19.1, not the 0.18.1 the code expects.

python script/experiment/train.py \
    -d '(0,)' \
    --only_test true \
    --dataset market1501 \
    --last_conv_stride 1 \
    --normalize_feature false \
    --exp_dir training/baseline_huangjong/ \
    --model_weight_file market1501-huanghoujing/model_weight.pth

market1501 test set

NO. Images: 35788
NO. IDs: 752
NO. Query Images: 3368
NO. Gallery Images: 19732
NO. Multi-query Images: 12688

Loaded model weights from market1501-huanghoujing/model_weight.pth

=========> Test on dataset: market1501 <=========

Extracting feature...
1100/1119 batches done, +1.97s, total 103.59s
Done, 105.52s
Computing distance...
Done, 1.31s
Computing scores...
User Warning: Version 0.18.1 is required for package scikit-learn, your current version is 0.19.1. As a result, the mAP score may not be totally correct. You can try pip uninstall scikit-learn and then pip install scikit-learn==0.18.1
Done, 14.88s
Single Query: [mAP: 70.76%], [cmc1: 82.42%], [cmc5: 94.92%], [cmc10: 96.76%]
Multi Query, Computing distance...
Done, 1.31s
Multi Query, Computing scores...
User Warning: Version 0.18.1 is required for package scikit-learn, your current version is 0.19.1. As a result, the mAP score may not be totally correct. You can try pip uninstall scikit-learn and then pip install scikit-learn==0.18.1
Done, 15.00s
Multi Query: [mAP: 77.31%], [cmc1: 86.16%], [cmc5: 96.94%], [cmc10: 98.13%]

@huanghoujing
Owner

Thank you very much for your attention. In Market-1501, images with id -1 should not be used in testing; see the official site:

  1. What are images beginning with "0000" and "-1"?
    Ans: Names beginning with "0000" are distractors produced by DPM false detection.
    Names beginning with "-1" are junks that are neither good nor bad DPM bboxes.
    So "0000" will have a negative impact on accuracy, while "-1" will have no impact.
    During testing, we rank all the images in "bounding_box_test". Then, the junk images are just neglected; distractor images are not neglected.

Since all images with id -1 would be neglected during evaluation anyway, I simply do not include them in the test set. The two methods are equivalent.
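To illustrate the equivalence: neglecting junk images at ranking time (as the triplet-reid excluder linked above does, per query) gives the same result as dropping them from the gallery up front. A minimal rank-1 sketch, assuming `dist_row` holds the query-to-gallery distances (the real evaluation additionally excludes same-id/same-camera matches, which is omitted here):

```python
import numpy as np

def cmc_hit_at_1(dist_row, gallery_pids, query_pid, junk_mask):
    """Rank the gallery by distance, neglect junk (-1) entries,
    and report whether the top remaining image matches the query id."""
    order = np.argsort(dist_row)       # nearest gallery image first
    order = order[~junk_mask[order]]   # neglect junk images in the ranking
    return bool(gallery_pids[order[0]] == query_pid)
```

Calling this with the full gallery plus `junk_mask = (gallery_pids == -1)`, or with the junk rows removed beforehand and an all-False mask, yields identical hits, which is why pre-filtering the test set changes nothing.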

@kilianyp kilianyp closed this as completed Jun 9, 2018