Unofficial implementation of the CVPR 2021 Oral paper "Unsupervised Multi-Source Domain Adaptation for Person Re-Identification".
This repo provides the code for the 3-source (duke+cuhk03+msmt -> market) domain-adaptive person re-ID task. It runs on Python 3.6 with PyTorch 1.3.1; for other dependencies, see setup.py.
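For reference, a minimal environment sketch matching the versions above (the conda environment name and the torchvision==0.4.2 pairing are assumptions, not taken from this repo):

```shell
# Create a Python 3.6 environment and install PyTorch 1.3.1
# (the env name "msuda" and torchvision 0.4.2 are assumptions).
conda create -n msuda python=3.6 -y
conda activate msuda
pip install torch==1.3.1 torchvision==0.4.2
```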
```shell
# clone this repo
cd MSUDA_REID
python setup.py install
cd examples && mkdir data
```
Prepare the DukeMTMC-reID, Market-1501, CUHK03 and MSMT17 datasets as in MMT.
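A rough sketch of how the data could be laid out under examples/data, following MMT-style data preparation (the per-dataset folder names below are assumptions; check the MMT instructions and the dataset loaders in this repo for the exact structure):

```shell
# Per-dataset folders under examples/data (names are assumptions based on MMT).
mkdir -p examples/data/dukemtmc examples/data/market1501 examples/data/cuhk03 examples/data/msmt17
# Extract each downloaded dataset into its folder, e.g.
# examples/data/dukemtmc/DukeMTMC-reID and examples/data/market1501/Market-1501-v15.09.15
```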
Two 32GB V100 GPUs are used to train the 3-source adaptation. You can also reduce the batch size to fit your GPU memory.
```shell
bash scripts/multi_src_pretrain.sh 1
bash scripts/multi_src_pretrain.sh 2
bash scripts/multi_src_train_mmt_msuda.sh
```
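If your machine has more than two GPUs, one way to pin the run to two of them is shown below; this assumes the scripts honor an externally set CUDA_VISIBLE_DEVICES (MMT-style scripts sometimes set it internally, in which case edit the variable inside the script instead):

```shell
# Hypothetical example: restrict training to GPUs 0 and 1.
# Only effective if the scripts do not override CUDA_VISIBLE_DEVICES themselves.
CUDA_VISIBLE_DEVICES=0,1 bash scripts/multi_src_train_mmt_msuda.sh
```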
Test the best-performing trained model with:
```shell
bash scripts/test_msuda.sh
```
Results of the 3-source adaptation on Market-1501:

| Method | mAP (%) | R-1 (%) | R-5 (%) | R-10 (%) |
| --- | --- | --- | --- | --- |
| MMT+RDSBN-MDIF | 85.9 | 94.3 | 97.6 | 98.8 |
The best-performing model can be downloaded from Baidu Drive (password: n0cm).
Place the downloaded model in logs/dukemtmc_cuhk03_msmt17TOmarket1501/resnet50_rcdsbn_mdif-MMT-DBSCAN/ before running the test script.
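For example (the checkpoint filename model_best.pth.tar is an assumption based on MMT-style checkpoints; use whatever name the downloaded file actually has):

```shell
# Create the expected directory and move the downloaded checkpoint into it.
mkdir -p logs/dukemtmc_cuhk03_msmt17TOmarket1501/resnet50_rcdsbn_mdif-MMT-DBSCAN
mv model_best.pth.tar logs/dukemtmc_cuhk03_msmt17TOmarket1501/resnet50_rcdsbn_mdif-MMT-DBSCAN/
```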
This repo borrows a lot of code from MMT, thanks!