Flame-Chasers/TBPS-CLIP
【AAAI 2024 🔥】An Empirical Study of CLIP for Text-based Person Search


This repository offers the official implementation of TBPS-CLIP in PyTorch.

In the meantime, check out our related papers if you are interested.

Note

More experiments and implementation details are provided in the Appendix of the arXiv version.

Overview

By revisiting the critical designs of data augmentation and the loss function in CLIP, we provide TBPS-CLIP, a strong baseline for text-based person search.

Environment

All the experiments are conducted on 4 Nvidia A40 (48GB) GPUs. The CUDA version is 11.7.

The required packages are listed in requirements.txt. You can install them using:

pip install -r requirements.txt

Download

  1. Download the CUHK-PEDES dataset from here, the ICFG-PEDES dataset from here, and the RSTPReid dataset from here.
  2. Download the annotation json files from here.
  3. Download the pretrained CLIP checkpoint from here.

Configuration

In config/config.yaml and config/s.config.yaml, set the paths to the annotation files, the image directory, and the pretrained CLIP checkpoint.
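The exact keys depend on the shipped config files, but the entries to fill in typically look like the sketch below (key names here are hypothetical; check config/config.yaml for the real ones):

```yaml
# Hypothetical sketch — consult config/config.yaml for the actual key names.
anno_path: /path/to/annotations/cuhk_pedes.json  # annotation json from the Download step
image_dir: /path/to/CUHK-PEDES/imgs              # dataset image root
clip_checkpoint: /path/to/ViT-B-16.pt            # pretrained CLIP weights
```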

Training

You can start training with PyTorch's torchrun:

CUDA_VISIBLE_DEVICES=0,1,2,3 \
torchrun --rdzv_id=3 --rdzv_backend=c10d --rdzv_endpoint=localhost:0 --nnodes=1 --nproc_per_node=4 \
main.py

You can also run the simplified version using:

CUDA_VISIBLE_DEVICES=0,1,2,3 \
torchrun --rdzv_id=3 --rdzv_backend=c10d --rdzv_endpoint=localhost:0 --nnodes=1 --nproc_per_node=4 \
main.py --simplified
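For reference, a `--simplified` switch like the one above is usually wired up with a boolean flag in argparse; a minimal sketch (illustrative only — the actual argument handling in main.py may differ) looks like this:

```python
import argparse

def parse_args(argv=None):
    # Hypothetical sketch of the CLI surface; only the --simplified flag
    # is taken from the README, everything else may differ in main.py.
    parser = argparse.ArgumentParser(description="Train TBPS-CLIP")
    parser.add_argument("--simplified", action="store_true",
                        help="train the simplified TBPS-CLIP variant")
    return parser.parse_args(argv)
```

With `action="store_true"`, passing `--simplified` sets the attribute to True and omitting it leaves it False.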

Model Checkpoints

| Model | CUHK-PEDES | ICFG-PEDES | RSTPReid |
| --- | --- | --- | --- |
| TBPS-CLIP (ViT-B/16) | Download | Download | Download |
| Simplified TBPS-CLIP (ViT-B/16) | Download | Download | Download |

Acknowledgement

  • CLIP: the model architecture of TBPS-CLIP is built upon CLIP.

Citation

If you find this paper useful, please consider starring 🌟 this repo and citing 📑 our paper:

@inproceedings{cao2024empirical,
  title={An Empirical Study of CLIP for Text-Based Person Search},
  author={Cao, Min and Bai, Yang and Zeng, Ziyin and Ye, Mang and Zhang, Min},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={1},
  pages={465--473},
  year={2024}
}

License

This code is distributed under the MIT License.
