This repository contains the code for the paper "Resource Aware Person Re-identification across Multiple Resolutions" (CVPR 2018). If you use this code in your research, please cite:
```
@inproceedings{wang2018resource,
  title={Resource Aware Person Re-identification across Multiple Resolutions},
  author={Wang, Yan and Wang, Lequn and You, Yurong and Zou, Xu and Chen, Vincent and Li, Serena and Huang, Gao and Hariharan, Bharath and Weinberger, Kilian Q},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={8042--8051},
  year={2018}
}
```
Dependencies:

- Python 3.6
- PyTorch (0.2.0)
- torchvision (0.2.0)

Supported datasets:

- Market1501
- MARS
- CUHK03
- DukeMTMC-reID
Use the following command to preprocess the person re-ID dataset:

```
python create_market_dataset.py --path <root_path_of_dataset>
```
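For example, assuming Market1501 has been downloaded and extracted to `~/datasets/market1501` (an illustrative path, not one fixed by the repo):

```
python create_market_dataset.py --path ~/datasets/market1501
```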
Use the following command to set up training:

```
./train.sh <nettype> <GPU> <train_dataset_path> <checkpoint_name>
```

where `<nettype>` can be either `dare_R` or `dare_D`.
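For example, to train a `dare_R` model on GPU 0 (the dataset path and checkpoint name below are illustrative):

```
./train.sh dare_R 0 ~/datasets/market1501 dare_R_market
```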
Use the following command to load a trained model and generate features for each image (in `.mat` format):

```
./extract_features.sh <nettype> <GPU> <dataset_path> <dataset> <checkpoint_name> <feature_path> <gen_stage_features>
```

where `<nettype>` can be either `dare_R` or `dare_D`, `<dataset>` is one of [MARS, Market1501, Duke, CUHK03], and `<feature_path>` is the path in which to store the extracted features. Set `<gen_stage_features>` to `True` to extract features from each stage.
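A minimal sketch of inspecting the extracted features in Python (the file name and the `feature` variable name are assumptions; check the actual keys in the files that `extract_features.sh` writes):

```python
# Load an extracted feature file and inspect its contents.
# NOTE: the path and the 'feature' key below are illustrative assumptions.
import scipy.io

mat = scipy.io.loadmat('features/market1501.mat')
print(mat.keys())            # list the variables actually stored in the file
feats = mat['feature']       # assumed variable name; shape (num_images, feat_dim)
print(feats.shape)
```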
Use the official person-re-ranking and MARS-evaluation codes to evaluate the extracted features. Note that we use the `mean` rather than the `max` to aggregate the per-image feature vectors of a video sequence into a single feature vector.
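As an illustration of that aggregation choice (the shapes are made up for the example):

```python
# Aggregate per-frame feature vectors of one video tracklet into a single
# sequence-level feature by averaging over the frame axis.
import numpy as np

frame_feats = np.random.rand(16, 128)   # 16 frames, 128-dim features (illustrative)
video_feat = frame_feats.mean(axis=0)   # mean pooling, as used here
# video_feat = frame_feats.max(axis=0)  # max pooling, which is NOT used
print(video_feat.shape)                 # (128,)
```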
Use the following command to run simulations under resource-aware person re-ID scenarios; see the `budgeted_stream` directory for more information:

```
./budgeted_stream/simulation.sh <dataset_path> <feature_path>
```
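For example (both paths are illustrative):

```
./budgeted_stream/simulation.sh ~/datasets/market1501 features/
```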
We provide several pretrained models listed below:
- Market1501 Res50
- Market1501 Dense201
- MARS Res50
- MARS Dense201
- CUHK Detected Res50
- CUHK Detected Dense201
- CUHK Labeled Res50
- CUHK Labeled Dense201
- Duke Res50
- Duke Dense201
This project is released under the MIT License.