caffe-pdh-reid-2016

A Caffe-based implementation of the baseline and the proposed PDH method, providing the complete training, testing, and evaluation code on the Market-1501 dataset.

For compiling Caffe, refer to the official Caffe instructions.

If you run into compilation problems, remove the folder examples/market1501/evaluate/KISSME/.

Note: this work was done in late 2015, when we were trying the "verification" network to learn embeddings. Later, we turned to "identification" models and obtained more competitive results. Interested readers can also refer to our works on "identification" models [1], [2], [3], [4], [5], [6].

References

[1] L. Zheng, H. Zhang, S. Sun, M. Chandraker, Y. Yang, and Q. Tian, Person re-identification in the wild, in Proc. CVPR, 2017.

[2] L. Zheng, Z. Bie, Y. Sun, J. Wang, S. Wang, C. Su, and Q. Tian, Mars: A video benchmark for large-scale person re-identification, in Proc. ECCV, 2016, pp. 868–884.

[3] L. Zheng, Y. Yang, and A. G. Hauptmann, Person re-identification: Past, present and future, arXiv preprint arXiv:1610.02984, 2016.

[4] Z. Zheng, L. Zheng, and Y. Yang, Unlabeled samples generated by gan improve the person re-identification baseline in vitro, arXiv preprint arXiv:1701.07717, 2017.

[5] Y. Lin, L. Zheng, Z. Zheng, Y. Wu, and Y. Yang, Improving person re-identification by attribute and identity learning, arXiv preprint arXiv:1703.07220, 2017.

[6] Y. Sun, L. Zheng, W. Deng, and S. Wang, Svdnet for pedestrian retrieval, arXiv preprint arXiv:1703.05693, 2017.

Data Preparation

Directions

The prototxt can be found in examples/market1501/prototxt/.

Scripts for extracting features of query and bounding_box_test can be found in examples/market1501/feature_extract/.

Trained models are saved in examples/market1501/snapshot/; create the "snapshot" folder yourself.

Evaluation can be found in examples/market1501/evaluation/.

Baseline

  • run examples/market1501/data_prepare/create_market1501-train_baseline.sh to generate the training LMDB data for the CNN model
  • run examples/market1501/train_baseline_512bit.sh to train the CNN model with 512-bit hash codes
  • run examples/market1501/feature_extract/extract_query_baseline.py and examples/market1501/feature_extract/extract_test_baseline.py to extract features of the query and test data
  • run examples/market1501/hashcode_query_512bit_baseline.m and examples/market1501/hashcode_test_512bit_baseline.m to generate binary hash codes for the query and test data
  • run examples/market1501/evaluation/main_single_query.m to evaluate the baseline on Market-1501

Final results: mAP = 0.1237, rank-1 precision = 0.2536 (Hamming distance). Note: this work was done in late 2015, when baseline performance was relatively low; the proposed PDH method improves significantly over this baseline.
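For intuition about the evaluation step, single-query retrieval with binary codes ranks the gallery by Hamming distance to the query code. A minimal pure-Python sketch with toy 8-bit codes (the repo's .m scripts produce 512-bit codes; the data here is illustrative only):

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary code lists."""
    return sum(x != y for x, y in zip(a, b))

def rank_gallery(query_code, gallery_codes):
    """Return gallery indices sorted by ascending Hamming distance."""
    return sorted(range(len(gallery_codes)),
                  key=lambda i: hamming(query_code, gallery_codes[i]))

# Toy 8-bit codes standing in for the real 512-bit hash codes.
query = [1, 0, 1, 1, 0, 0, 1, 0]
gallery = [
    [1, 0, 1, 1, 0, 0, 1, 1],  # distance 1
    [0, 1, 0, 0, 1, 1, 0, 1],  # distance 8
    [1, 0, 1, 1, 0, 0, 1, 0],  # distance 0
]
print(rank_gallery(query, gallery))  # → [2, 0, 1]
```

mAP and rank-1 precision are then computed from such ranked lists against the ground-truth identities.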

The proposed PDH method (Note: as an example, we divide the entire image into 4 overlapping parts.)

  • run examples/market1501/data_prepare/generate_parts.m to generate 4 parts for each image, saved in folders part_1, part_2, part_3, and part_4 under examples/market1501/Market-1501-v15.09.15/bounding_box_train, examples/market1501/Market-1501-v15.09.15/query, and examples/market1501/Market-1501-v15.09.15/bounding_box_test respectively
  • run examples/market1501/data_prepare/create_market1501-train_part_1.sh, examples/market1501/data_prepare/create_market1501-train_part_2.sh, examples/market1501/data_prepare/create_market1501-train_part_3.sh, and examples/market1501/data_prepare/create_market1501-train_part_4.sh to generate the training LMDB data for the part-based CNN models
  • run examples/market1501/train_PDH_part_1.sh, examples/market1501/train_PDH_part_2.sh, examples/market1501/train_PDH_part_3.sh, and examples/market1501/train_PDH_part_4.sh sequentially to train the part-based CNN models
  • run examples/market1501/feature_extract/PDH_extract_query.py and examples/market1501/feature_extract/PDH_extract_test.py to extract features of the query and test data
  • run examples/market1501/PDH_extract_hashcode_query.m and examples/market1501/PDH_extract_hashcode_test.m to generate binary hash codes for the query and test data (including hash-code concatenation)
  • run examples/market1501/evaluation/main_single_query.m to evaluate the proposed PDH method on Market-1501
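The exact cropping scheme lives in generate_parts.m; as a hedged illustration of what "4 overlapping parts" can look like, the sketch below computes row ranges for 4 overlapping horizontal strips of a 128-pixel-tall Market-1501 image. The 50% overlap ratio is an assumption for illustration, not a value taken from the repo:

```python
def part_bounds(height, n_parts=4, overlap=0.5):
    """Return (top, bottom) row ranges for n_parts overlapping horizontal
    strips covering [0, height). The overlap fraction is an illustrative
    choice, not the repo's actual setting."""
    # Strip height h satisfies: n*h - (n-1)*overlap*h = height
    h = height / (n_parts - (n_parts - 1) * overlap)
    step = h * (1 - overlap)
    bounds = []
    for i in range(n_parts):
        top = int(round(i * step))
        bottom = min(int(round(i * step + h)), height)
        bounds.append((top, bottom))
    return bounds

# Market-1501 images are 128x64 pixels; split the height into 4 strips.
print(part_bounds(128))  # → [(0, 51), (26, 77), (51, 102), (77, 128)]
```

Each strip would then be fed to its own part network (train_PDH_part_*.sh above).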

Final results: mAP = 0.2606, rank-1 precision = 0.4789 (Hamming distance).
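The hash-code concatenation mentioned above fuses the per-part codes into a single descriptor, and the Hamming distance on the concatenated code is simply the sum of the per-part distances. A small sketch with toy 4-bit part codes (real part codes are much longer):

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary code lists."""
    return sum(x != y for x, y in zip(a, b))

def concat_codes(part_codes):
    """Concatenate per-part binary codes into one descriptor."""
    out = []
    for code in part_codes:
        out.extend(code)
    return out

# Toy 4-bit codes for 4 parts; lengths are illustrative only.
q_parts = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0], [0, 1, 0, 1]]
g_parts = [[1, 0, 0, 1], [0, 0, 1, 0], [1, 1, 0, 1], [0, 1, 0, 1]]

q = concat_codes(q_parts)
g = concat_codes(g_parts)
# Distance on the concatenated code equals the sum of per-part distances.
assert hamming(q, g) == sum(hamming(a, b) for a, b in zip(q_parts, g_parts))
print(hamming(q, g))  # → 2
```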

Citation

Please cite this paper in your publications if it helps your research:

@article{zhu2017part,
  title={Part-based deep hashing for large-scale person re-identification},
  author={Zhu, Fuqing and Kong, Xiangwei and Zheng, Liang and Fu, Haiyan and Tian, Qi},
  journal={IEEE Transactions on Image Processing},
  volume={26},
  number={10},
  pages={4806--4817},
  year={2017},
  doi={10.1109/TIP.2017.2695101},
  publisher={IEEE}
}

If you have any problem, please contact me at fqzhu001@gmail.com.

Caffe


Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and community contributors.

Check out the project site for all the details, such as tutorial documentation, installation instructions, reference models, and step-by-step examples.


Please join the caffe-users group or gitter chat to ask questions and talk about methods and models. Framework development discussions and thorough bug reports are collected on Issues.

Happy brewing!

License and Citation

Caffe is released under the BSD 2-Clause license. The BVLC reference models are released for unrestricted use.

Please cite Caffe in your publications if it helps your research:

@article{jia2014caffe,
  Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
  Journal = {arXiv preprint arXiv:1408.5093},
  Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
  Year = {2014}
}
