This repository contains the official implementation of the paper: Temporal 3D Shape Modeling for Video-based Cloth-changing Person Re-Identification (SEMI), presented at the WACV'24 4th Real-World Surveillance Workshop.
Supported backbones (model names used in the configuration):

- `c2dres50`: C2DResNet50
- `i3dres50`: I3DResNet50
- `ap3dres50`: AP3DResNet50
- `nlres50`: NLResNet50
- `ap3dnlres50`: AP3DNLResNet50
This baseline currently supports two public VCCRe-ID datasets: VCCR and CCVID.
Dataset | Paper | Num.IDs | Num.Tracklets | Num.Clothes/ID | Public | Download |
---|---|---|---|---|---|---|
Motion-ReID | link | 30 | 240 | - | X | - |
CVID-reID | link | 90 | 2980 | - | X | - |
SCCVRe-ID | link | 333 | 9620 | 2~37 | X | - |
RCCVRe-ID | link | 34 | 6948 | 2~10 | X | - |
CCPG | link | 200 | ~16k | - | Per Request | project link |
CCVID | link | 226 | 2856 | 2~5 | Yes | link |
VCCR | link | 392 | 4384 | 2~10 | Yes | link |
First, create a virtual environment for the repository:

```bash
conda create -n semi python=3.8
```

Then activate the environment:

```bash
conda activate semi
```

Clone this repository:

```bash
git clone https://github.com/dustin-nguyen-qil/Video-based-Cloth-Changing-ReID-Baseline.git
```

Next, install the dependencies:

```bash
pip install -r requirements.txt
```
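To quickly verify the environment, a minimal check (assuming the requirements include PyTorch and PyTorch Lightning, which the Lightning checkpoints produced during training imply):

```python
import torch
import pytorch_lightning as pl

# Print library versions and whether a GPU is visible.
print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("pytorch_lightning", pl.__version__)
```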
- Download the datasets VCCR and CCVID following the download links above.
- Generate the pickle files that store, for each sequence, the paths to its images, the clothes ID, the identity, and the camera ID. To do this:
  - Create a folder named `data` inside the repository.
  - Run the following command, replacing the root path with the folder where you store the datasets and setting the dataset name accordingly (a quick sanity check of the output is sketched below):

```bash
python datasets/prepare.py --root "/media/dustin/DATA/Research/Video-based ReID" --dataset_name vccr
```
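As a sanity check, a minimal sketch for inspecting the generated pickle (the filename and entry layout are assumptions; adjust to whatever `datasets/prepare.py` actually writes under `data/`):

```python
import pickle

# Hypothetical output path; use the file prepare.py created in data/.
with open("data/vccr.pkl", "rb") as f:
    data = pickle.load(f)

# Each entry is expected to describe one sequence: image paths, identity,
# clothes ID, and camera ID.
print("type:", type(data))
print("number of entries:", len(data))

sample = data[0] if isinstance(data, (list, tuple)) else next(iter(data.items()))
print("sample entry:", sample)
```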
If you want to see the evaluation results with our pretrained model on VCCR, follow these steps:
- Download our pretrained model from here (password: dustinqil) and put it in `work_space/save` (a loading sanity check is sketched below).
- Replace the path to the pretrained model in `test.py`.
- Run `python test.py`.
- Evaluation results will be saved to `work_space/output`.
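A minimal sketch for verifying that the downloaded checkpoint loads before running the evaluation (the filename is an assumption; use the file you placed in `work_space/save` and referenced in `test.py`):

```python
import torch

# Hypothetical checkpoint filename; replace with the actual downloaded file.
ckpt = torch.load("work_space/save/pretrained_vccr.pth", map_location="cpu")

# Lightning checkpoints are dictionaries; print the top-level keys as a quick check.
if isinstance(ckpt, dict):
    print("checkpoint keys:", list(ckpt.keys())[:10])
else:
    print("loaded object of type:", type(ckpt))
```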
Go to `./config.py` to modify configurations if needed: dataset (VCCR or CCVID), number of epochs, batch size, learning rate, CNN backbone (according to the model names listed above), etc.
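For illustration, a hypothetical excerpt of such an edit (the actual option names and default values in `config.py` may differ):

```python
# Hypothetical config.py excerpt -- check the file for the real option names.
DATASET = "vccr"        # or "ccvid"
BACKBONE = "c2dres50"   # any of the model names listed above
MAX_EPOCHS = 60         # example value
BATCH_SIZE = 16         # example value
LR = 3.5e-4             # example value
```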
Create a folder named `work_space` structured as below (a small helper snippet follows the layout). Download the pretrained SPIN model and the SMPL mean parameters needed to train the 3D regressor from here (password: dustinqil), and put them inside `work_space`.
```
data
work_space
|--- save
|--- output
|--- tsm
```
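If you prefer to create these folders programmatically, a minimal equivalent (this only creates the empty directories shown above):

```python
from pathlib import Path

# Create the expected repository layout: data/ plus work_space/ with its subfolders.
Path("data").mkdir(exist_ok=True)
for sub in ("save", "output", "tsm"):
    Path("work_space", sub).mkdir(parents=True, exist_ok=True)
```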
Then run `main.sh`:

```bash
bash main.sh
```
- Checkpoints will be automatically saved to `work_space/lightning_logs`.
- The trained model will be automatically saved to `work_space/save`.
- Testing results will be automatically saved to `work_space/output`.
If you want to resume training from a checkpoint, add the checkpoint path to `RESUME` in `config.py` (an illustrative example follows).
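For example (the path below is purely illustrative; point it at one of your own Lightning checkpoints):

```python
# In config.py: set RESUME to a checkpoint saved under work_space/lightning_logs.
RESUME = "work_space/lightning_logs/version_0/checkpoints/last.ckpt"  # example path
```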
If you find this repo helpful, please cite:
```bibtex
@InProceedings{Nguyen_2024_WACV,
    author    = {Nguyen, Vuong D. and Mantini, Pranav and Shah, Shishir K.},
    title     = {Temporal 3D Shape Modeling for Video-Based Cloth-Changing Person Re-Identification},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops},
    month     = {January},
    year      = {2024},
    pages     = {173-182}
}
```
Related repos: