HeterAug

License: MIT

This repository is the PyTorch implementation of the TIP 2023 paper "Exploring the Robustness of Human Parsers Toward Common Corruptions".

The paper focuses on improving the robustness of human parsing models under common image corruptions.

Features:

  • Three corruption robustness benchmarks: LIP-C, ATR-C, and Pascal-Person-Part-C.
  • Pre-trained human parsers on three popular single person human parsing datasets.
  • Training and inference code.

Requirements

Python >= 3.6, PyTorch >= 1.0

The full environment is specified in the HeterAug.yaml file.

Dataset Preparation

Please download the LIP dataset and organize it as shown below.

datasets/LIP
|--- train_images # 30462 training single person images
|--- val_images # 10000 validation single person images
|--- train_segmentations # 30462 training annotations
|--- val_segmentations # 10000 validation annotations
|--- train_id.txt # training image list
|--- val_id.txt # validation image list
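Before training, it can save time to verify that the directories and ID lists are in place. The `check_lip_layout` helper below is a hypothetical sketch, not part of this repo; the expected image counts come from the layout above:

```python
from pathlib import Path

def check_lip_layout(root):
    """Verify the expected LIP directory layout and ID-list sizes.

    Returns (missing, counts): missing entries, and for each ID list
    the line count plus whether it matches the expected size.
    """
    root = Path(root)
    expected_dirs = ["train_images", "val_images",
                     "train_segmentations", "val_segmentations"]
    missing = [d for d in expected_dirs if not (root / d).is_dir()]
    counts = {}
    # Expected sizes per the layout above: 30462 train / 10000 val images.
    for id_list, expected in [("train_id.txt", 30462), ("val_id.txt", 10000)]:
        f = root / id_list
        if f.is_file():
            n = len(f.read_text().split())
            counts[id_list] = (n, n == expected)
        else:
            missing.append(id_list)
    return missing, counts
```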

The corrupted images can be downloaded from the link LIP-C (extraction code: av2n).

datasets/LIP-C
|--- blurs 
     |--- defocus_blur
     |--- gaussian_blur
     |--- glass_blur
     |--- motion_blur
     |--- val_segmentations
     |--- val_id.txt 
|--- digitals 
     |--- brightness
     |--- contrast
     |--- saturate
     |--- jpeg_compression
     |--- val_segmentations
     |--- val_id.txt 
|--- noises
     |--- gaussian_noise
     |--- shot_noise
     |--- impulse_noise
     |--- speckle_noise
     |--- val_segmentations
     |--- val_id.txt 
|--- weathers
     |--- snow
     |--- fog
     |--- spatter
     |--- frost
     |--- val_segmentations
     |--- val_id.txt 
|--- val_segmentations # 10000 validation annotations
|--- val_id.txt # validation image list

Training

CUDA_VISIBLE_DEVICES=0,1 python -u train_augpolicy_mixed_noisenet_epsilon.py --batch-size 14 --gpu 0,1 \
                        --data-dir ./datasets/LIP --noisenet-prob 0.25 --log-dir 'log/LIP_heteraug' 

By default, the trained model will be saved in the ./log/LIP_heteraug directory. Please read the arguments for more details. The pre-trained ResNet-101 model can be downloaded from the link resnet101-imagenet (extraction code: f42r).
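The --noisenet-prob 0.25 flag controls how often the noise-based branch of the augmentation policy is applied per sample. The sketch below only illustrates such a probabilistic augmentation switch with plain Gaussian noise; the paper's heterogeneous policy and NoiseNet branch are more elaborate, and the function name and sigma value here are illustrative assumptions:

```python
import numpy as np

def maybe_noise_augment(image, noise_prob=0.25, sigma=0.1, rng=None):
    """With probability noise_prob, add Gaussian noise; else return image unchanged.

    image: float array with values in [0, 1]. A stand-in for the paper's
    learned noise branch, which goes beyond plain additive noise.
    """
    rng = rng or np.random.default_rng()
    if rng.random() < noise_prob:
        noisy = image + rng.normal(0.0, sigma, size=image.shape)
        return np.clip(noisy, 0.0, 1.0)
    return image
```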

Evaluation on the clean data

python evaluate.py --model-restore [CHECKPOINT_PATH] --data-dir ./datasets/LIP

CHECKPOINT_PATH should be the path to the trained model. To test with horizontal flipping, add --flip.
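Flip testing averages predictions over the image and its horizontal mirror. A minimal NumPy sketch of the idea, assuming a generic `model` callable that maps an image to per-pixel class logits of shape (C, H, W); note that for human parsing, left/right paired labels (e.g. left-arm vs. right-arm) would also need their channels swapped, which is omitted here:

```python
import numpy as np

def flip_average_logits(model, image):
    """Average logits from the original and horizontally flipped image.

    model: callable mapping a (C_in, H, W) array to (C, H, W) logits.
    The flipped prediction is mirrored back before averaging.
    """
    logits = model(image)
    flipped_logits = model(image[..., ::-1])   # flip along the width axis
    logits_back = flipped_logits[..., ::-1]    # mirror the prediction back
    return 0.5 * (logits + logits_back)
```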

Evaluation on the corrupted data

python evaluate_c.py --model-restore [CHECKPOINT_PATH] --data-dir ./datasets/LIP-C/blurs/ --severity-level 5 --corruption_type 'glass_blur' 2>&1 | tee ./'SCHP_glass_blur.log'
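To sweep all 16 corruption types at all 5 severity levels, the evaluation commands can be generated programmatically. This is a sketch; the group and corruption names mirror the LIP-C directory layout described above:

```python
# Corruption groups as laid out in datasets/LIP-C.
GROUPS = {
    "blurs":    ["defocus_blur", "gaussian_blur", "glass_blur", "motion_blur"],
    "digitals": ["brightness", "contrast", "saturate", "jpeg_compression"],
    "noises":   ["gaussian_noise", "shot_noise", "impulse_noise", "speckle_noise"],
    "weathers": ["snow", "fog", "spatter", "frost"],
}

def evaluation_commands(checkpoint, data_root="./datasets/LIP-C"):
    """Yield one evaluate_c.py command per (corruption, severity) pair."""
    for group, corruptions in GROUPS.items():
        for corruption in corruptions:
            for severity in range(1, 6):
                yield (f"python evaluate_c.py --model-restore {checkpoint} "
                       f"--data-dir {data_root}/{group}/ "
                       f"--severity-level {severity} "
                       f"--corruption_type '{corruption}'")
```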

The pre-trained models can be downloaded from the link pre-trained models (extraction code: im5i).

Robustness benchmark construction

You can use the imagecorruption code to generate the corrupted validation images.

There are 16 types of image corruptions, each with 5 severity levels. They fall into four groups: blur (defocus, gaussian, motion, glass), noise (gaussian, impulse, shot, speckle), digital (brightness, contrast, saturate, JPEG compression), and weather (fog, frost, snow, spatter).
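As a dependency-free illustration, one of these corruptions, Gaussian noise, can be sketched as additive noise whose scale grows with severity. The per-severity constants below are illustrative assumptions, not the benchmark's exact parameters:

```python
import numpy as np

def gaussian_noise_corruption(image, severity=1, rng=None):
    """Apply Gaussian noise to a uint8 image at one of 5 severity levels."""
    # Illustrative noise scales for severities 1..5 (in [0, 1] units).
    scales = [0.04, 0.06, 0.08, 0.09, 0.10]
    sigma = scales[severity - 1]
    rng = rng or np.random.default_rng()
    x = image.astype(np.float64) / 255.0
    x = x + rng.normal(0.0, sigma, size=x.shape)
    return (np.clip(x, 0.0, 1.0) * 255.0).astype(np.uint8)
```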

Citation

Please cite our work if you find this repo useful in your research.

@article{zhang2023heter,
  author={Zhang, Sanyi and Cao, Xiaochun and Wang, Rui and Qi, Guo-Jun and Zhou, Jie},
  journal={IEEE Transactions on Image Processing}, 
  title={Exploring the Robustness of Human Parsers Toward Common Corruptions}, 
  year={2023},
  volume={32},
  number={},
  pages={5394-5407},
  doi={10.1109/TIP.2023.3313493}}

Our code is built on the Self-Correction Human Parsing model; please refer to the code link: SCHP.
