In this work, we propose a simple yet effective approach to adapt CLIP for supervised Re-ID, which directly fine-tunes the image encoder of CLIP using a Prototypical Contrastive Learning (PCL) loss. Experimental results demonstrate the simplicity and competitiveness of our method compared to the recent prompt-learning-based CLIP-ReID. Furthermore, our investigation reveals an essential consistency between CLIP-ReID and our method.
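At its core, a prototypical contrastive loss pulls each image feature toward the prototype (mean feature) of its identity and pushes it away from other prototypes via a softmax over cosine similarities. The NumPy sketch below illustrates this basic form only; the function name and the temperature value are illustrative and do not reproduce the repo's exact implementation (which operates on cluster memory banks during training).

```python
import numpy as np

def prototypical_contrastive_loss(features, labels, temperature=0.1):
    """Illustrative PCL loss: cross-entropy between each L2-normalised
    feature and per-identity prototypes (normalised mean features)."""
    # normalise features to unit length so dot products are cosine similarities
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    classes = np.unique(labels)  # sorted unique identity labels
    # prototype = normalised mean feature of each identity
    protos = np.stack([features[labels == c].mean(axis=0) for c in classes])
    protos = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    logits = features @ protos.T / temperature        # (N, C) similarity logits
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    targets = np.searchsorted(classes, labels)        # label -> prototype index
    return -log_probs[np.arange(len(labels)), targets].mean()
```

Tightly clustered features (each sample close to its own prototype, far from the others) drive this loss toward zero, which is the behaviour the fine-tuning objective exploits.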
- 2023/11/23: Full model (ID loss + CC loss) is ready.
Install conda before installing any requirements.

```shell
conda create -n pclclip python=3.9
conda activate pclclip
pip install -r requirements.txt
```
Make a new folder named `data` under the root directory. Download the datasets and unzip them into the `data` folder.
For example, to train the full model on Market1501 with GPU 0, saving the log file and checkpoints to `logs/market-pclclip`:

```shell
CUDA_VISIBLE_DEVICES=0 python train_pcl.py --config_file config/pcl-vit.yml DATASETS.NAMES "('market1501')" OUTPUT_DIR logs/market-pclclip
```
Configs can be modified in the `config/*.yml` files or from the command line. If you want to add more config terms, update `config/defaults.py`. For other models using different losses, please modify the code according to the paper.
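If the repo follows the yacs-style config pattern common in Re-ID codebases (e.g. TransReID and CLIP-ReID), adding a new config term amounts to registering a default in `config/defaults.py`; any key registered there can then be set in a `*.yml` file or overridden on the command line. This is a sketch under that assumption, and `_C.MODEL.PCL_TEMPERATURE` is a purely hypothetical key used for illustration.

```python
# config/defaults.py (sketch, assuming a yacs-based config)
from yacs.config import CfgNode as CN

_C = CN()
_C.MODEL = CN()
# hypothetical new term: once registered here it can be set in a yml file
# or overridden on the command line, e.g. MODEL.PCL_TEMPERATURE 0.05
_C.MODEL.PCL_TEMPERATURE = 0.1
```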
The code is implemented based on the following works. We sincerely appreciate their great efforts on Re-ID research!
```
@article{li2023prototypical,
  title={Prototypical Contrastive Learning-based CLIP Fine-tuning for Object Re-identification},
  author={Li, Jiachen and Gong, Xiaojin},
  journal={arXiv preprint arXiv:2310.17218},
  year={2023}
}
```