
Cross-dataset testing? #4

Closed
ghost opened this issue Jul 9, 2019 · 7 comments
ghost commented Jul 9, 2019

Have you tried cross-dataset testing - training on one dataset, say market1501 and testing on another, say cuhk03 or duke? Have you come across any model with code which has tried this? Will be really grateful to you, thanks in advance.

xiaodongyang (Contributor) commented

Yes, please refer to the table below.

[Screenshot: table of direct cross-dataset transfer results]

ghost (Author) commented Jul 10, 2019

Thanks for the quick response. I am having trouble understanding this table; can you please explain it a bit? Also, there is no mention of cross-dataset testing in the paper. Do you know why these numbers are so low compared to the numbers in Table 4 of the paper? Why is cross-dataset transfer still such a big challenge?

Also, what steps should I take if I want to build a person re-id system based on your repository that works on a custom dataset?

xiaodongyang (Contributor) commented Jul 11, 2019

"Ours" in that table shows the direct-transfer results, e.g., Market->Duke: the network is trained on Market and directly tested on Duke with no domain adaptation. Compared to the supervised single-domain setting, the large domain gap between the two datasets clearly drags down the accuracy. So to improve performance on a custom dataset, domain adaptation is needed.
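To make the direct-transfer setup concrete: extract features on the unseen target dataset with the source-trained model, build a query-by-gallery distance matrix, and score it with Rank@1 and mAP. Below is a minimal NumPy sketch of that scoring step; it omits the same-camera/junk filtering of the standard Market/Duke protocols, and `evaluate_retrieval` is a hypothetical helper, not code from this repository.

```python
import numpy as np

def evaluate_retrieval(dist, query_ids, gallery_ids):
    """Rank@1 and mAP from a query-by-gallery distance matrix.

    dist[i, j] is the distance between query i and gallery image j;
    IDs are integer person labels. Same-camera filtering is omitted.
    """
    rank1_hits, average_precisions = [], []
    gallery_ids = np.asarray(gallery_ids)
    for i, qid in enumerate(np.asarray(query_ids)):
        order = np.argsort(dist[i])                     # closest gallery first
        matches = (gallery_ids[order] == qid).astype(float)
        rank1_hits.append(matches[0])                   # is top-1 correct?
        cum_hits = np.cumsum(matches)
        precision_at_k = cum_hits / (np.arange(matches.size) + 1)
        # average precision: mean of precision at each true-match position
        average_precisions.append(
            (precision_at_k * matches).sum() / max(matches.sum(), 1.0))
    return float(np.mean(rank1_hits)), float(np.mean(average_precisions))
```

Evaluating a source-trained model this way on a different dataset yields exactly the kind of direct-transfer numbers discussed above.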

ghost (Author) commented Jul 11, 2019

Thanks a lot, @xiaodongyang. By domain adaptation, do you mean that I can fine-tune the model trained by you on my custom dataset?

xiaodongyang (Contributor) commented

Fine-tuning is one way, but people are more interested in unsupervised domain adaptation, i.e., no annotation on the custom data. You can find quite a few recent re-id papers on this topic.
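A common recipe in the clustering-based unsupervised adaptation papers mentioned above (e.g., PUL-style pipelines) is: extract features on the unlabeled target set with the source model, cluster them, and treat cluster indices as pseudo-identities for fine-tuning. The sketch below shows just the pseudo-labeling step with a plain k-means in NumPy; the cluster count `k` and the function name are illustrative assumptions, not code from this repository.

```python
import numpy as np

def kmeans_pseudo_labels(features, k, iters=20, seed=0):
    """Cluster target-domain features; return cluster ids as pseudo-labels.

    Real adaptation pipelines alternate this step with fine-tuning the
    network on the pseudo-labeled data, re-clustering as features improve.
    """
    features = np.asarray(features, dtype=float)
    rng = np.random.default_rng(seed)
    # initialize centers from k distinct random samples
    centers = features[rng.choice(len(features), size=k, replace=False)].copy()
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # assign each sample to its nearest center
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each center as the mean of its assigned samples
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels
```

The returned `labels` then stand in for identity annotations in an otherwise standard supervised re-id training loop on the target data.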

layumi (Contributor) commented Jul 12, 2019

@anant15

Here is a brief overview of adaptation and a table of state-of-the-art methods.

The primary motivation is that collecting ID annotations is relatively expensive in terms of human effort and time.

Is it possible to use less annotation on the target dataset, especially ID labels? If we have a model with good scalability, it could work across different datasets.

Our method (DG-Net) does not see any target data and is trained only on the source dataset. Other methods may apply clustering to further fine-tune the model on target data.

| Method | Uses DukeMTMC Training Data | Rank@1 | mAP | Reference |
| --- | --- | --- | --- | --- |
| UMDL | ✔️ | 18.5% | 7.3% | "Unsupervised Cross-Dataset Transfer Learning for Person Re-identification", Peixi Peng, Tao Xiang, Yaowei Wang, Massimiliano Pontil, Shaogang Gong, Tiejun Huang, Yonghong Tian, CVPR 2016 |
| Verif + Identif | ✖️ | 25.7% | 12.8% | "A Discriminatively Learned CNN Embedding for Person Re-identification", Zhedong Zheng, Liang Zheng, Yi Yang, TOMM 2017 [pytorch code] |
| PUL | ✔️ | 30.4% | 16.8% | "Unsupervised Person Re-identification: Clustering and Fine-tuning", Hehe Fan, Liang Zheng, Yi Yang, TOMM 2018 [code] |
| PN-GAN | ✖️ | 29.9% | 15.8% | "Pose-Normalized Image Generation for Person Re-identification", Xuelin Qian, Yanwei Fu, Tao Xiang, Wenxuan Wang, Jie Qiu, Yang Wu, Yu-Gang Jiang, Xiangyang Xue, ECCV 2018 |
| SPGAN | ✔️ | 41.4% | 22.3% | "Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for Person Re-identification", Weijian Deng, Liang Zheng, Guoliang Kang, Yi Yang, Qixiang Ye, Jianbin Jiao, CVPR 2018 |
| TJ-AIDL | ✔️ | 44.3% | 23.0% | "Transferable Joint Attribute-Identity Deep Learning for Unsupervised Person Re-Identification", Jingya Wang, Xiatian Zhu, Shaogang Gong, Wei Li, ECCV 2018 |
| MMFA | ✔️ | 45.3% | 24.7% | "Multi-task Mid-level Feature Alignment Network for Unsupervised Cross-Dataset Person Re-Identification", Shan Lin, Haoliang Li, Chang-Tsun Li, Alex Chichung Kot, BMVC 2018 |
| DG-Net | ✖️ | 43.5% | 25.4% | "Joint Discriminative and Generative Learning for Person Re-identification", Zhedong Zheng, Xiaodong Yang, Zhiding Yu, Liang Zheng, Yi Yang, Jan Kautz, CVPR 2019 |
| SPGAN+LMP | ✔️ | 46.4% | 26.2% | |
| HHL | ✔️ | 46.9% | 27.2% | "Generalizing A Person Retrieval Model Hetero- and Homogeneously", Zhun Zhong, Liang Zheng, Shaozi Li, Yi Yang, ECCV 2018 |
| BUC | ✔️ | 47.4% | 27.5% | "A Bottom-up Clustering Approach to Unsupervised Person Re-identification", Yutian Lin, Xuanyi Dong, Liang Zheng, Yan Yan, Yi Yang, AAAI 2019 |
| CFSM | ✔️ | 49.8% | 27.3% | "Disjoint Label Space Transfer Learning with Common Factorised Space", Xiaobin Chang, Yongxin Yang, Tao Xiang, Timothy M. Hospedales, AAAI 2019 |
| ARN | ✔️ | 60.2% | 33.4% | "Adaptation and Re-Identification Network: An Unsupervised Deep Transfer Learning Approach to Person Re-Identification", Yu-Jhe Li, Fu-En Yang, Yen-Cheng Liu, Yu-Ying Yeh, Xiaofei Du, Yu-Chiang Frank Wang, CVPR 2018 Workshop |
| TAUDL | ✔️ | 61.7% | 43.5% | "Unsupervised Person Re-identification by Deep Learning Tracklet Association", Minxian Li, Xiatian Zhu, Shaogang Gong, ECCV 2018 |
| UDARTP | ✔️ | 68.4% | 49.0% | "Unsupervised Domain Adaptive Re-Identification: Theory and Practice", Liangchen Song, Cheng Wang, Lefei Zhang, Bo Du, Qian Zhang, Chang Huang, Xinggang Wang, arXiv:1807.11334 |

xiaodongyang (Contributor) commented

@anant15 Please let us know if you have any other questions; otherwise, please close the issue.
