Learning Modal-Invariant and Temporal-Memory for Video-based Visible-Infrared Person Re-Identification

This is the official implementation of our CVPR 2022 paper 'Learning Modal-Invariant and Temporal-Memory for Video-based Visible-Infrared Person Re-Identification'.

Usage

  • Usage of this code is free for research purposes only.
  • This project is based on DDAG [1] (paper and official code).
  • Download and prepare the VCM-HITSZ dataset.
  • Download the pretrained weights of MITML.
  • To begin testing, run the command below (see the code for more details; a minimal end-to-end sketch is also given after the reference below).
    python test.py
  • Please cite our paper if you use this code:
@inproceedings{lin2022learning,
 title={Learning Modal-Invariant and Temporal-Memory for Video-Based Visible-Infrared Person Re-Identification},
 author={Lin, Xinyu and Li, Jinxing and Ma, Zeyu and Li, Huafeng and Li, Shuang and Xu, Kaixiong and Lu, Guangming and Zhang, David},
 booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
 pages={20973--20982},
 year={2022}
}
  • Reference
     [1] Ye M, Shen J, Crandall D J, et al. Dynamic dual-attentive aggregation learning for visible-infrared person re-identification[C]//Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XVII. Springer International Publishing, 2020: 229-247.
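
For convenience, the steps above can be tied together as in the minimal sketch below. The repository URL is inferred from the VCM-project233/MITML name; the dataset and checkpoint locations are placeholders, so check test.py for the paths it actually expects.

    # Minimal sketch of the usage steps; paths are placeholders.
    git clone https://github.com/VCM-project233/MITML.git
    cd MITML
    # Place the prepared VCM-HITSZ dataset and the downloaded MITML
    # checkpoint where test.py expects them (see the code), then run:
    python test.py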
    
