Self Correction for Human Parsing

An out-of-the-box human parsing representation extractor. Also the winning solution of the 3rd LIP challenge (CVPR 2019)!

lip-visualization

At this time, we provide trained models on three popular human parsing datasets that achieve state-of-the-art performance. We hope our work can serve as a basic human parsing representation extractor and facilitate your own tasks, such as Fashion AI, Person Re-Identification, Virtual Reality, Virtual Try-on, and Human Analysis.

Citation

Please cite our work if you find this repo useful in your research.

@article{li2019self,
  title={Self-Correction for Human Parsing},
  author={Li, Peike and Xu, Yunqiu and Wei, Yunchao and Yang, Yi},
  journal={arXiv preprint arXiv:1910.09777},
  year={2019}
}

TODO List

  • Inference code on three popular single person human parsing datasets.
  • Training code.
  • Extension on multi-person and video human parsing tasks.

Coming Soon! Stay tuned!

Requirements

Python >= 3.5, PyTorch >= 0.4

Trained models

The easiest way to get started is to run our trained SCHP models on your own images to extract human parsing representations. Here we provide trained models on three popular datasets. These three datasets have different label systems, so you can choose the one that best fits your own task.

LIP (exp-schp-201908261155-lip.pth)

  • mIoU on LIP validation: 59.36%.

  • LIP is the largest single-person human parsing dataset, with 50,000+ images. This dataset focuses on complicated real-world scenarios. LIP has 20 labels, including 'Background', 'Hat', 'Hair', 'Glove', 'Sunglasses', 'Upper-clothes', 'Dress', 'Coat', 'Socks', 'Pants', 'Jumpsuits', 'Scarf', 'Skirt', 'Face', 'Left-arm', 'Right-arm', 'Left-leg', 'Right-leg', 'Left-shoe', 'Right-shoe'.
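For convenience, the LIP label order above can be kept in a Python list so predicted class indices map back to names. This is an illustrative sketch (the list and helper name are ours, with the order taken from the dataset description above):

```python
# LIP label names, indexed by the per-pixel class id
# (order taken from the LIP label list above).
LIP_LABELS = [
    'Background', 'Hat', 'Hair', 'Glove', 'Sunglasses', 'Upper-clothes',
    'Dress', 'Coat', 'Socks', 'Pants', 'Jumpsuits', 'Scarf', 'Skirt',
    'Face', 'Left-arm', 'Right-arm', 'Left-leg', 'Right-leg',
    'Left-shoe', 'Right-shoe',
]

def label_name(index):
    """Map a predicted class index to its LIP label name."""
    return LIP_LABELS[index]
```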

ATR (exp-schp-201908301523-atr.pth)

  • mIoU on ATR test: 82.29%.

  • ATR is a large single-person human parsing dataset, with 17,000+ images. This dataset focuses on fashion AI. ATR has 18 labels, including 'Background', 'Hat', 'Hair', 'Sunglasses', 'Upper-clothes', 'Skirt', 'Pants', 'Dress', 'Belt', 'Left-shoe', 'Right-shoe', 'Face', 'Left-leg', 'Right-leg', 'Left-arm', 'Right-arm', 'Bag', 'Scarf'.

Pascal-Person-Part (exp-schp-201908270938-pascal-person-part.pth)

  • mIoU on Pascal-Person-Part validation: 71.46%.

  • Pascal Person Part is a tiny single-person human parsing dataset, with 3,000+ images. This dataset focuses on body-part segmentation. Pascal Person Part has 7 labels, including 'Background', 'Head', 'Torso', 'Upper Arms', 'Lower Arms', 'Upper Legs', 'Lower Legs'.

Choose one and have fun on your own task!
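The mIoU numbers above are the mean intersection-over-union across classes. A minimal sketch of the metric follows; this is not the repo's evaluation code, and the function name and details are illustrative:

```python
import numpy as np

def compute_miou(pred, gt, num_classes):
    """Mean IoU over classes: for each class, intersection / union of the
    predicted and ground-truth masks, averaged over classes that appear."""
    ious = []
    for c in range(num_classes):
        p = (pred == c)
        g = (gt == c)
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        inter = np.logical_and(p, g).sum()
        ious.append(inter / union)
    return float(np.mean(ious))
```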

Inference

To extract the human parsing representation, simply put your own images in Input_Directory, download a pretrained model, and run the following command. The output images, with the same file names, will be saved in Output_Directory.

python evaluate.py --dataset Dataset --restore-weight Checkpoint_Path --input Input_Directory --output Output_Directory

The --dataset argument has three options: 'lip', 'atr' and 'pascal'. Note that each pixel in the output images denotes the predicted label index, and the output images have the same size as the input ones. For better visualization, the output images are saved with a palette. We suggest reading them with PIL.

If you need not only the final parsing image but also a feature-map representation, add the --logits flag to save the output feature map. This feature map holds the logits before the softmax layer, with dimensions HxWxC.
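Given such an HxWxC logits map, per-pixel probabilities and labels can be recovered with a softmax over the channel axis. A minimal sketch (how the logits file is serialized is not specified here, so loading is left out; the function name is illustrative):

```python
import numpy as np

def logits_to_parsing(logits):
    """Convert an (H, W, C) logits map to per-pixel labels and probabilities."""
    # Numerically stable softmax over the channel (class) axis.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    labels = probs.argmax(axis=-1)  # (H, W) predicted class indices
    return labels, probs
```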

Visualization

  • Source Image. demo

  • LIP Parsing Result. demo-lip

  • ATR Parsing Result. demo-atr

  • Pascal-Person-Part Parsing Result. demo-pascal

Related

There is also a PaddlePaddle Implementation. This implementation is the version that we submitted to the 3rd LIP Challenge.
