
Temporal 3D Shape Modeling for Video-based Cloth-Changing Person Re-Identification (SEMI)

This repository contains the official implementation of the paper Temporal 3D Shape Modeling for Video-based Cloth-changing Person Re-Identification (SEMI), presented at the WACV'24 4th Real-World Surveillance Workshop.

1. Features

Supported CNN backbones

  • c2dres50: C2DResNet50
  • i3dres50: I3DResNet50
  • ap3dres50: AP3DResNet50
  • nlres50: NLResNet50
  • ap3dnlres50: AP3DNLResNet50

Summary of VCCRe-ID datasets

This baseline currently supports two public VCCRe-ID datasets: VCCR and CCVID.

Dataset     | Paper | Num. IDs | Num. Tracklets | Num. Clothes/ID | Public      | Download
Motion-ReID | link  | 30       | 240            | -               | No          | -
CVID-reID   | link  | 90       | 2980           | -               | No          | -
SCCVRe-ID   | link  | 333      | 9620           | 2~37            | No          | -
RCCVRe-ID   | link  | 34       | 6948           | 2~10            | No          | -
CCPG        | link  | 200      | ~16k           | -               | Per request | project link
CCVID       | link  | 226      | 2856           | 2~5             | Yes         | link
VCCR        | link  | 392      | 4384           | 2~10            | Yes         | link

2. Running instructions

2.1. Getting started

Create virtual environment

First, create a virtual environment for the repository:

conda create -n semi python=3.8

then activate the environment:

conda activate semi

Clone the repository

git clone https://github.com/dustin-nguyen-qil/Video-based-Cloth-Changing-ReID-Baseline.git

Next, install the dependencies by running:

pip install -r requirements.txt

2.2. Data Preparation

  1. Download the VCCR and CCVID datasets following the download links in the table above.
  2. Generate the pickle files containing, for each sequence, the paths to its frame images, the clothes ID, the identity, and the camera ID. To do this:
    • Create a folder named data inside the repository.
    • Run the following command, replacing --root with the path to the folder storing the datasets and --dataset_name with the dataset name (e.g., vccr); a sketch for inspecting the output follows.
python datasets/prepare.py --root "/media/dustin/DATA/Research/Video-based ReID" --dataset_name vccr
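
The exact schema written by datasets/prepare.py is not documented here, so the file name and record layout below are assumptions; a minimal sanity-check sketch:

import pickle

# Hypothetical file name: adjust to whatever datasets/prepare.py writes under data/.
with open("data/vccr.pkl", "rb") as f:
    sequences = pickle.load(f)

# Each record is expected to describe one tracklet: frame image paths,
# clothes ID, identity, and camera ID.
print(type(sequences), len(sequences))
print(sequences[0])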

2.3. Run evaluation only to reproduce the results presented in the paper

If you want to reproduce the evaluation results with our pretrained model on VCCR, follow these steps:

  • Download our pretrained model from here (password: dustinqil) and put it in work_space/save.
  • Replace the path to the pretrained model in test.py (see the hypothetical sketch after this list).
  • Run
python test.py
  • Evaluation results will be saved to work_space/output.
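
The variable name in test.py is not shown in this README, so the line below is a placeholder rather than the actual code:

# Hypothetical variable and file name: match whatever test.py actually uses
# for the checkpoint path.
PRETRAINED_PATH = "work_space/save/pretrained_vccr.pth"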

2.4. Run training and testing

Configuration options

Go to ./config.py to modify configurations if needed: dataset (VCCR or CCVID), number of epochs, batch size, learning rate, CNN backbone (one of the model names listed above), etc.
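
The attribute names and values below are assumptions for illustration, not the actual keys in config.py; a hypothetical edit could look like:

# Hypothetical option names and values; check config.py for the real keys.
DATASET = "vccr"        # "vccr" or "ccvid"
BACKBONE = "ap3dres50"  # one of the backbone names from Section 1
EPOCHS = 60
BATCH_SIZE = 16
LR = 3.5e-4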

Preparation

Create a folder named work_space with the structure shown below.

Download the pretrained SPIN model and the SMPL mean parameters needed to train the 3D regressor from here (password: dustinqil), and put them inside work_space.

data
work_space
|--- save
|--- output
|--- tsm
main.sh

Run

bash main.sh
  • Checkpoints will be automatically saved to work_space/lightning_logs.
  • Trained model will be automatically saved to work_space/save.
  • Testing results will be automatically saved to work_space/output.

If you want to resume training from a checkpoint, set RESUME in config.py to the checkpoint path.
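
The exact file name depends on your run; per the notes above, checkpoints land under work_space/lightning_logs, so a hypothetical value would be:

# Hypothetical checkpoint path; substitute the file produced by your own run.
RESUME = "work_space/lightning_logs/version_0/checkpoints/last.ckpt"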

Citation

If you find this repo helpful, please cite:

@InProceedings{Nguyen_2024_WACV,
    author    = {Nguyen, Vuong D. and Mantini, Pranav and Shah, Shishir K.},
    title     = {Temporal 3D Shape Modeling for Video-Based Cloth-Changing Person Re-Identification},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops},
    month     = {January},
    year      = {2024},
    pages     = {173-182}
}

Acknowledgement

Related repos:
