MAR: MultilAbel Reference Learning

This repo contains the source code for our CVPR'19 work Unsupervised Person Re-identification by Soft Multilabel Learning (the paper and the supplementary material are available on this project page). Our implementation is based on PyTorch. The following instructions show how to use the code to train and evaluate the MAR model on the Market-1501 dataset.

Prerequisites

  1. PyTorch 1.0.0
  2. Python 3.6+
  3. Python packages: numpy, scipy, pyyaml (imported as yaml), h5py
  4. [Optional] MATLAB, if you need to customize the datasets used.
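
To quickly verify the environment is set up, you can run a minimal check like the one below (a sketch only; it simply imports the required packages and prints their versions):

import torch, numpy, scipy, yaml, h5py

# Print the versions of the required packages; PyTorch should report 1.0.0.
for name, module in [('torch', torch), ('numpy', numpy), ('scipy', scipy),
                     ('pyyaml', yaml), ('h5py', h5py)]:
    print(name, module.__version__)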

Data preparation

(If you simply want to run the demo code without further modification, you can skip the steps below by downloading all required data from BaiduPan with password "tih8" and putting everything into /data/.)

  1. Pretrained model

    Please find the pretrained model (pretrained using softmax loss on MSMT17) here (password: tih8). After downloading pretrained_weight.pth, please put it into /data/.
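
    As an optional sanity check (not part of the repo's pipeline), you can inspect the downloaded file, assuming it is a standard PyTorch checkpoint:

    import torch

    # Load the pretrained weights on CPU just to inspect them; the exact key
    # layout depends on the repo's model definition.
    checkpoint = torch.load('data/pretrained_weight.pth', map_location='cpu')
    print(type(checkpoint))
    if isinstance(checkpoint, dict):
        for key in list(checkpoint)[:5]:
            print(key)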

  2. Target dataset

    Download the Market-1501 dataset, and unzip it into /data. After this step, you should have a folder structure:

    • data
      • Market-1501-v15.09.15
        • bounding_box_test
        • bounding_box_train
        • query

    Then run /data/construct_dataset_Market.m in MATLAB. If you prefer to use another dataset, just modify the MATLAB code accordingly. Again, the processed Market-1501 and DukeMTMC-reID are available here.
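
    If you want to double-check the layout before preprocessing, an optional sanity check (not part of the repo) can confirm the folders are in place:

    import os

    # Count the .jpg images in each expected Market-1501 folder; all three
    # folders should exist and be non-empty before running the MATLAB script.
    root = 'data/Market-1501-v15.09.15'
    for sub in ('bounding_box_train', 'bounding_box_test', 'query'):
        folder = os.path.join(root, sub)
        n = sum(f.endswith('.jpg') for f in os.listdir(folder)) if os.path.isdir(folder) else 0
        print(folder, n, 'jpg files')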

  3. Auxiliary (source) dataset

    Download the MSMT17 dataset, and unzip it into /data. After this step, you should have a folder structure:

    • data
      • MSMT17_V1
        • train
        • test
        • list_train.txt
        • list_query.txt
        • list_gallery.txt

    Then run /data/construct_dataset_MSMT17.m in MATLAB. If you prefer to use another dataset, just modify the MATLAB code accordingly. Again, the processed MSMT17 is available here.
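
    Similarly, you can optionally check the MSMT17 lists before preprocessing; the sketch below assumes each line of list_train.txt is an image path followed by an identity label, separated by a space:

    # Count training images and identities in list_train.txt (format assumed
    # to be "<relative_image_path> <identity_label>" per line).
    ids = set()
    n_images = 0
    with open('data/MSMT17_V1/list_train.txt') as f:
        for line in f:
            path, label = line.split()
            ids.add(label)
            n_images += 1
    print(n_images, 'training images,', len(ids), 'identities')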

Run the code

Please enter the main folder, and run

python src/main.py --gpu 0,1,2,3 --save_path runs/debug

where "0,1,2,3" specifies your gpu IDs. If you are using gpus with 12G memory, you need 4 gpus to run in the default setting (batchsize=368). Please also note that since I load the whole datasets into cpu memory to cut down IO overhead, you need at least 40G cpu memory. Hence I recommend you run it on a server.

Reference

If you find our work helpful in your research, please kindly cite our paper:

Hong-Xing Yu, Wei-Shi Zheng, Ancong Wu, Xiaowei Guo, Shaogang Gong and Jian-Huang Lai, "Unsupervised person re-identification by soft multilabel learning", In CVPR, 2019.

bib:

@inproceedings{yu2019unsupervised,
  title={Unsupervised Person Re-identification by Soft Multilabel Learning},
  author={Yu, Hong-Xing and Zheng, Wei-Shi and Wu, Ancong and Guo, Xiaowei and Gong, Shaogang and Lai, Jianhuang},
  year={2019},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
}

If you have any problem/question, please feel free to contact me at xKoven@gmail.com or open an issue.
