Human Semantic Parsing for Person Re-identification

Code for our CVPR 2018 paper - Human Semantic Parsing for Person Re-identification

We used the Chainer framework for the implementation. The SPReID w/fg and SPReID w/fg-ft results reported in Table 5 of the paper (weight-sharing setting) can be reproduced with this code.

Please use the links below to download the semantic parsing model (LIP_iter_30000.chainermodel) and the Inception-V3 weights pre-trained on ImageNet (into data/dump/):

Directories & Files

/
├── checkpoints/  # checkpoint models are saved into this directory
│
├── data/dump/  # Inception-V3 weights pre-trained on ImageNet; download using this [link](https://www.dropbox.com/sh/x0ey09q1nq7ci39/AACRuJa_f8N0_gIFcEWZUZ7ja?dl=0)
│
├── evaluation_features/ # extracted features are saved into this directory
│
├── evaluation_list/ # two image lists per evaluation dataset for feature extraction, one for the gallery and one for the query set
│   ├── cuhk03_gallery.txt
│   ├── cuhk03_query.txt
│   ├── duke_gallery.txt
│   ├── duke_query.txt
│   ├── market_gallery.txt
│   └── market_query.txt
│
├── train_list/ # image lists to train the models
│   ├── train_10d.txt # training images collected from 10 datasets
│   ├── train_cuhk03.txt # training images from cuhk03
│   ├── train_duke.txt # training images from duke
│   └── train_market.txt # training images from market
│
├── LIP_iter_30000.chainermodel # download this model using this [link](https://www.dropbox.com/s/nw5h0lw6xrzp5ks/LIP_iter_30000.chainermodel?dl=0)
├── datachef.py
├── main.py
└── modelx.py
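
Before training, it may help to confirm that the downloads and directories above are in place. Below is a minimal sketch, assuming the repository root is the working directory; the script name and the exact file list are illustrative, taken from the tree above:

```python
# check_setup.py -- hypothetical helper; verifies the layout described above.
import os

EXPECTED = [
    "checkpoints",                      # checkpoint models are saved here
    "data/dump",                        # Inception-V3 ImageNet weights (Dropbox link above)
    "evaluation_features",              # extracted features are saved here
    "evaluation_list/market_gallery.txt",
    "evaluation_list/market_query.txt",
    "train_list/train_10d.txt",
    "LIP_iter_30000.chainermodel",      # semantic parsing model (Dropbox link above)
]

missing = [p for p in EXPECTED if not os.path.exists(p)]
if missing:
    print("Missing paths:")
    for p in missing:
        print("  " + p)
else:
    print("All expected files and directories are present.")
```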

Train

cd $SPREID_ROOT
# train SPReID on 10 datasets
python main.py --train_set "train_10d" --label_dim "16803" --scales_reid "512,170" --optimizer "lr:0.01--lr_pretrained:0.01" --dataset_folder "/path/to/the/dataset"
# fine-tune SPReID on the evaluation datasets (Market-1501, DukeMTMC-reID, CUHK03) with high-resolution images; a loop over all three is sketched after this block
python main.py --train_set "train_market" --label_dim_ft "751" --scales_reid "778,255" --optimizer "lr:0.01--lr_pretrained:0.001" --max_iter "50000" --dataset_folder "/path/to/the/dataset" --model_path_for_ft "/path/to/the/model"
python main.py --train_set "train_duke" --label_dim_ft "702" --scales_reid "778,255" --optimizer "lr:0.01--lr_pretrained:0.001" --max_iter "50000" --dataset_folder "/path/to/the/dataset" --model_path_for_ft "/path/to/the/model"
python main.py --train_set "train_cuhk03" --label_dim_ft "1367" --scales_reid "778,255" --optimizer "lr:0.01--lr_pretrained:0.001" --max_iter "50000" --dataset_folder "/path/to/the/dataset" --model_path_for_ft "/path/to/the/model"

Feature Extraction

cd $SPREID_ROOT
# Extract features using the model trained on 10 datasets. Run this command twice for each dataset, once with --eval_split "DATASET_gallery" and once with --eval_split "DATASET_query" (a helper loop is sketched after this block)
python main.py --extract_features 1 --train_set "train_10d" --eval_split "market_gallery" --scales_reid "512,170" --checkpoint 200000 --dataset_folder "/path/to/the/dataset"
# Extract features using the models trained on evaluation datasets.
python main.py --extract_features 1 --train_set "train_market" --eval_split "market_gallery" --scales_reid "778,255" --checkpoint 50000 --dataset_folder "/path/to/the/dataset"
python main.py --extract_features 1 --train_set "train_duke" --eval_split "duke_gallery" --scales_reid "778,255" --checkpoint 50000 --dataset_folder "/path/to/the/dataset"
python main.py --extract_features 1 --train_set "train_cuhk03" --eval_split "cuhk03_gallery" --scales_reid "778,255" --checkpoint 50000 --dataset_folder "/path/to/the/dataset"

Results

| Model | Market-1501 mAP (%) | Market-1501 rank-1 (%) | CUHK03 mAP (%) | CUHK03 rank-1 (%) | DukeMTMC-reID mAP (%) | DukeMTMC-reID rank-1 (%) |
| --- | --- | --- | --- | --- | --- | --- |
| SPReID w/fg | 77.62 | 90.88 | - | 87.69 | 65.66 | 81.73 |
| SPReID w/fg-ft | 80.54 | 92.34 | - | 89.68 | 69.29 | 83.80 |
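
The mAP and rank-1 numbers above are retrieval metrics computed from the extracted query and gallery features in evaluation_features/. The saved feature format is not documented here, so the sketch below assumes the features and identity labels have already been loaded into NumPy arrays; it also skips the camera-ID and junk-image filtering of the official evaluation protocols, so it illustrates the computation rather than reproducing the table exactly:

```python
# evaluate.py -- illustrative only: simplified rank-1 / mAP from extracted features.
import numpy as np

def evaluate(query_feat, query_ids, gallery_feat, gallery_ids):
    # Euclidean distance between every query and every gallery feature: (Q, G).
    dists = np.linalg.norm(query_feat[:, None, :] - gallery_feat[None, :, :], axis=2)
    rank1_hits, average_precisions = 0, []
    for i in range(len(query_feat)):
        order = np.argsort(dists[i])                    # gallery sorted by distance
        matches = (gallery_ids[order] == query_ids[i])  # relevance flag per rank
        if not matches.any():
            continue
        rank1_hits += int(matches[0])
        # Average precision over the ranked gallery list.
        cum_hits = np.cumsum(matches)
        precision_at_hit = cum_hits[matches] / (np.nonzero(matches)[0] + 1)
        average_precisions.append(precision_at_hit.mean())
    return rank1_hits / len(query_feat), float(np.mean(average_precisions))
```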

Citation

@InProceedings{Kalayeh_2018_CVPR,
author = {Kalayeh, Mahdi M. and Basaran, Emrah and Gökmen, Muhittin and Kamasak, Mustafa E. and Shah, Mubarak},
title = {Human Semantic Parsing for Person Re-Identification},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}