This is an official PyTorch implementation of the paper "Learning by Aligning: Visible-Infrared Person Re-identification using Cross-Modal Correspondences", ICCV 2021.
For more details, visit our project site or see our paper.
Requirements:
- Python 3.8
- PyTorch 1.7.1
- GPU memory >= 11GB
First, clone our git repository.
git clone https://github.com/cvlab-yonsei/LbA.git
cd LbA
We provide a Dockerfile to help reproduce our results easily.
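As a rough sketch, building and running the image could look like the following; the image tag, mount path, and GPU flag here are placeholders and assumptions, not values taken from the repository:

```bash
# Build the image from the provided Dockerfile (the tag "lba" is arbitrary).
docker build -t lba .

# Run with GPU access and mount your dataset directory (paths are placeholders).
docker run --gpus all -it -v /path/to/SYSU-MM01:/workspace/SYSU-MM01 lba
```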
- SYSU-MM01: download from this link.
- For SYSU-MM01, you need to preprocess the .jpg files into .npy files by running `python utils/pre_preprocess_sysu.py --data_dir /path/to/SYSU-MM01` (a rough sketch of this conversion is given after this list).
- Modify the dataset directory accordingly in:
  - L63 of `train.py`
  - L54 of `test.py`
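For intuition, here is a rough sketch of the .jpg-to-.npy conversion. The folder layout, the 288x144 image size, and the single-array output are assumptions made for illustration; the provided `utils` script is the authoritative preprocessing.

```python
# Illustrative sketch only: stack the .jpg files of one folder into a single .npy
# file. Paths, the crop size, and the output layout are assumptions, not the exact
# behavior of the repository's preprocessing script.
import os
import numpy as np
from PIL import Image

def jpgs_to_npy(src_dir, dst_path, size=(144, 288)):  # size is (width, height)
    images = []
    for name in sorted(os.listdir(src_dir)):
        if name.lower().endswith('.jpg'):
            img = Image.open(os.path.join(src_dir, name)).convert('RGB')
            images.append(np.asarray(img.resize(size, Image.LANCZOS), dtype=np.uint8))
    np.save(dst_path, np.stack(images))  # saved array has shape (N, H, W, 3)

# Hypothetical usage on one camera/identity folder:
# jpgs_to_npy('/path/to/SYSU-MM01/cam1/0001', '/path/to/SYSU-MM01/cam1_0001.npy')
```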
- To train the model, run `python train.py --method full`.
Important:
- Performances reported during training do not reflect the exact performance of your model. This is due to 1) the evaluation protocols of the datasets and 2) random seed configurations.
- Make sure you separately run `test.py` to obtain the correct results to report in your paper.
- To evaluate the trained model, run `python test.py --method full`.
- The results should be around:

| dataset | method | mAP | rank-1 |
|---|---|---|---|
| SYSU-MM01 | baseline | 49.54 | 50.43 |
| SYSU-MM01 | full | 54.14 | 55.41 |
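- The table above also lists a baseline variant; presumably it is obtained by passing `--method baseline` instead of `--method full` to `train.py` and `test.py` (this flag value is inferred from the table, so check the argument parser if it does not exist).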
- Download the pretrained weights for [SYSU-MM01].
- The results should be:

| dataset | method | mAP | rank-1 |
|---|---|---|---|
| SYSU-MM01 | full | 55.22 | 56.31 |
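Loading the downloaded weights follows standard PyTorch practice; the sketch below assumes a checkpoint filename and layout that may differ from the actual release, and `test.py` presumably handles this step for you already.

```python
# Minimal sketch of inspecting/loading downloaded weights with PyTorch.
# The filename and the presence of a 'state_dict' key are assumptions; print the
# checkpoint's keys to see the actual layout of the released file.
import torch

checkpoint = torch.load('/path/to/downloaded_weights.pth', map_location='cpu')
state_dict = checkpoint.get('state_dict', checkpoint)  # handle either layout
print(list(state_dict.keys())[:5])  # peek at the first few parameter names

# model = ...  # the network built by this repository's code
# model.load_state_dict(state_dict)
```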
Bibtex:
@inproceedings{park2021learning,
title={Learning by Aligning: Visible-Infrared Person Re-identification using Cross-Modal Correspondences},
author={Park, Hyunjong and Lee, Sanghoon and Lee, Junghyup and Ham, Bumsub},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={12046--12055},
year={2021}
}