Part-Aware-Transformer

Official repo for "Part-Aware Transformer for Generalizable Person Re-identification" [ICCV 2023]

Abstract

Domain generalization person re-identification (DG-ReID) aims to train a model on source domains and generalize well on unseen domains. Vision Transformers usually yield better generalization than common CNNs under distribution shifts. However, Transformer-based ReID models inevitably overfit to domain-specific biases due to the supervised learning strategy on the source domain. We observe that while the global images of different IDs should have different features, their similar local parts (e.g., a black backpack) are not bound by this constraint. Motivated by this, we propose a pure Transformer model (termed Part-aware Transformer) for DG-ReID by designing a proxy task, named Cross-ID Similarity Learning (CSL), to mine local visual information shared by different IDs. This proxy task allows the model to learn generic features because it only cares about the visual similarity of the parts regardless of the ID labels, thus alleviating the side effect of domain-specific biases. Based on the local similarity obtained in CSL, a Part-guided Self-Distillation (PSD) is proposed to further improve the generalization of global features. Our method achieves state-of-the-art performance under most DG-ReID settings.
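To make the two objectives concrete, here is a minimal, illustrative PyTorch sketch in the spirit of CSL and PSD. It is not the code used in this repository: the tensor shapes, the nearest-neighbour formulation of cross-ID part matching, and the distillation temperature are all assumptions for illustration; see the paper and the source in this repo for the actual definitions.

```python
import torch
import torch.nn.functional as F

def csl_loss(part_feats):
    """Illustrative cross-ID similarity objective: pull each part token toward
    its most similar part anywhere in the batch, ignoring ID labels.
    part_feats: (B, P, D) part features from the Transformer (shape assumed)."""
    B, P, D = part_feats.shape
    f = F.normalize(part_feats.reshape(B * P, D), dim=1)
    sim = f @ f.t()               # pairwise cosine similarity, (B*P, B*P)
    sim.fill_diagonal_(-1.0)      # exclude trivial self-matches
    nn_sim, _ = sim.max(dim=1)    # similarity to the nearest part in the batch
    return (1.0 - nn_sim).mean()  # align visually similar parts across IDs

def psd_loss(global_logits, part_logits, T=4.0):
    """Illustrative part-guided self-distillation: KL divergence from
    part-based soft targets to the global prediction (temperature T is a guess)."""
    teacher = F.softmax(part_logits.detach() / T, dim=1)
    student = F.log_softmax(global_logits / T, dim=1)
    return F.kl_div(student, teacher, reduction="batchmean") * T * T
```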

Framework

Visualizations

Instructions

Here are instructions for running our code. Our code is based on TransReID; thanks for their excellent work.

1. Clone this repo

git clone https://github.com/liyuke65535/Part-Aware-Transformer.git

2. Prepare your environment

conda create -n pat python==3.10
conda activate pat
bash enviroments.sh
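If the script completes, you can sanity-check the environment (this assumes only that enviroments.sh installs PyTorch, which the training code requires):

```python
# Verify that PyTorch is importable and whether a CUDA device is visible.
import torch
print(torch.__version__, torch.cuda.is_available())
```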

3. Prepare pretrained model (ViT-B) and datasets

You can download it from Hugging Face, rwightman, or elsewhere. For example, the pretrained model is available at ViT-B; a hedged download sketch follows this step.

As for datasets, follow the instructions in MetaBIN.
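For reference, one way to fetch ImageNet-pretrained ViT-B/16 weights is through timm (rwightman's library); whether this exact checkpoint matches what the config expects is an assumption, so check the model path entry in ./config/PAT.yml:

```python
# Download ViT-B/16 ImageNet weights via timm and save them locally.
# The output filename below is hypothetical; point PAT.yml at wherever you save it.
import timm
import torch

model = timm.create_model("vit_base_patch16_224", pretrained=True)
torch.save(model.state_dict(), "vit_base_patch16_224.pth")
```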

4. Modify the config file

# modify the model path and dataset paths of the config file
vim ./config/PAT.yml
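If you would rather locate the relevant entries programmatically than scroll through the file, a small sketch (it assumes only that PAT.yml is valid YAML and that PyYAML is installed; no key names are guessed here):

```python
# List the top-level sections of the config so you can find the
# pretrained-model and dataset path entries to edit.
import yaml

with open("./config/PAT.yml") as f:
    cfg = yaml.safe_load(f)
print(list(cfg.keys()))
```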

5. Train a model

bash run.sh

6. Evaluation only

# modify the trained path in config
vim ./config/PAT.yml

# evaluation
python test.py --config ./config/PAT.yml

Citation

If you find this repo useful for your research, please consider citing our paper.

@inproceedings{ni2023part,
  title={Part-Aware Transformer for Generalizable Person Re-identification},
  author={Ni, Hao and Li, Yuke and Gao, Lianli and Shen, Heng Tao and Song, Jingkuan},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={11280--11289},
  year={2023}
}
