nmndeep/robust-segmentation


Robust Semantic Segmentation: Strong Adversarial Attacks and Fast Training of Robust Models

Francesco Croce, Naman D Singh, Matthias Hein

University of Tübingen

Paper

Abstract: While a large amount of work has focused on designing adversarial attacks against image classifiers, only a few methods exist to attack semantic segmentation models. We show that attacking segmentation models presents task-specific challenges, for which we propose novel solutions. Our final evaluation protocol outperforms existing methods and shows that they can overestimate the robustness of models. Additionally, adversarial training, so far the most successful technique for obtaining robust image classifiers, has not been successfully applied to semantic segmentation. We argue that this is because the task to be learned is more challenging and requires significantly higher computational effort than image classification. As a remedy, we show that, by taking advantage of recent advances in robust ImageNet classifiers, one can train adversarially robust segmentation models at limited computational cost by fine-tuning robust backbones.


Experimental setup and code

Main dependencies: PyTorch-2.0.0, torchvision-0.15.0, timm-0.6.2, AutoAttack

Segmentation Ensemble Attack (SEA) evaluation

Run runner_infer.sh with the model's config (.yaml) file from the configs folder.

This computes the final adversarial robustness for the dataset and model specified in the .yaml file.
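SEA is an ensemble of complementary attacks; conceptually, the final robustness number keeps, for each image, the worst result over all attacks in the ensemble. A minimal pure-Python sketch of that aggregation step (function and variable names are illustrative, not from this repo's code):

```python
def worst_case_scores(scores_per_attack):
    """Per-image worst case over an ensemble of attacks.

    scores_per_attack: one inner list per attack, each entry a
    per-image robustness score (e.g. pixel accuracy or IoU after
    that attack).  Lower means a stronger attack, so the ensemble
    keeps the per-image minimum.
    """
    return [min(per_image) for per_image in zip(*scores_per_attack)]

# toy example: 3 attacks evaluated on 4 images
scores = [
    [0.80, 0.55, 0.90, 0.40],
    [0.70, 0.60, 0.85, 0.45],
    [0.75, 0.50, 0.95, 0.35],
]
print(worst_case_scores(scores))  # [0.7, 0.5, 0.85, 0.35]
```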


Training

A SLURM-type setup is provided in runner.sh; run it with the location of the config file and the number of GPUs as arguments.

For a non-SLURM setup, run train.py directly with the same two arguments.

  • For UperNet with a ConvNext backbone (both Tiny and Small versions) on ADE20K

    • Clean training: config file ade20k_convnext_cvst.yaml; set BACKBONE to CONVNEXT-S_CVST for the Small model.
    • Adversarial training: config file ade20k_convnext_rob_cvst.yaml; set BACKBONE to CONVNEXT-S_CVST for the Small model.
  • For UperNet with a ConvNext backbone (both Tiny and Small versions) on PASCAL-VOC

    • Clean training: config file pascalvoc_convnext_cvst.yaml; set BACKBONE to CONVNEXT-S_CVST for the Small model.
    • Adversarial training: config file pascalvoc_convnext_rob_cvst.yaml; set BACKBONE to CONVNEXT-S_CVST for the Small model.
  • For Segmenter with a ViT-S backbone on ADE20K

    • Adversarial training: config file ade20k_segmenter_clean.yaml; set ADVERSARIAL to FALSE for clean training.
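The configs referenced above are plain YAML. A hypothetical fragment illustrating the two fields mentioned in the list (the exact keys and layout of the real files in configs/ may differ):

```yaml
# illustrative fragment only -- field names follow the text above;
# the actual config files may use a different structure
BACKBONE: CONVNEXT-S_CVST   # or CONVNEXT-T_CVST for the Tiny model
ADVERSARIAL: TRUE           # FALSE for clean training
```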

Robust-Segmentation models

We make our robust models publicly available. mIoU is reported for clean evaluation and under SEA evaluation (Adv.) at two perturbation strengths (ε = 4/255 and 8/255).

| Model name | Dataset | Clean | Adv. (4/255) | Adv. (8/255) | Checkpoint |
| --- | --- | --- | --- | --- | --- |
| UperNet-ConvNext-T_CVST | PASCAL-VOC | 75.2% | 63.8% | 37.0% | Link |
| UperNet-ConvNext-S_CVST | PASCAL-VOC | 76.6% | 66.2% | 38.0% | Link |
| UperNet-ConvNext-T_CVST | ADE20K | 31.7% | 18.6% | 6.7% | Link |
| UperNet-ConvNext-S_CVST | ADE20K | 32.1% | 19.2% | 7.2% | Link |
| Segmenter-ViT-S | ADE20K | 28.7% | 16.1% | 7.1% | Link |

Note: the models are trained including the background class for both VOC and ADE20K.
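mIoU here is the mean, over classes, of the per-class intersection-over-union between prediction and ground truth. A minimal pure-Python sketch of the metric (illustrative only, not the repo's evaluation code):

```python
def miou(pred, target, num_classes):
    """Mean IoU over classes, computed from flat label sequences.

    pred, target: equal-length sequences of integer class labels
    (one entry per pixel).  Classes absent from both prediction
    and ground truth are skipped, as is common practice.
    """
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# toy example: 2 classes, 5 pixels, last pixel mislabeled
print(miou([0, 0, 1, 1, 1], [0, 0, 1, 1, 0], num_classes=2))  # ~0.6667
```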

Robust pre-trained backbone models were taken from the Revisiting-AT GitHub repository.

For UperNet we always use the ConvNext backbone with Convolution Stem (CvSt).


Required citations

If you use our code or models, consider citing us with the following BibTeX entry:

@article{croce2023robust,
 title={Robust Semantic Segmentation: Strong Adversarial Attacks and Fast Training of Robust Models}, 
 author={Francesco Croce and Naman D Singh and Matthias Hein},
 year={2023},
 journal={arXiv:2306.12941}}

Also consider citing SegPGD if you use the SEA attack, as their loss function forms part of the SEA evaluation.
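For context, SegPGD's key idea is a reweighted per-pixel cross-entropy: at attack iteration t of T, correctly classified pixels are weighted by 1 − λ and already-misclassified pixels by λ, with λ growing over iterations (λ = (t − 1) / 2T in the paper), so early iterations focus on flipping still-correct pixels. A pure-Python sketch of that weighting (illustrative; see the SegPGD paper for the exact formulation):

```python
def segpgd_weighted_loss(pixel_losses, pred_labels, true_labels, t, T):
    """SegPGD-style reweighting of per-pixel losses.

    pixel_losses: per-pixel cross-entropy values (flat list)
    pred_labels / true_labels: per-pixel predicted and true classes
    t, T: current attack iteration (1-based) and total iterations
    """
    lam = (t - 1) / (2 * T)  # grows from 0 toward ~0.5
    total = 0.0
    for loss, p, y in zip(pixel_losses, pred_labels, true_labels):
        weight = (1 - lam) if p == y else lam
        total += weight * loss
    return total / len(pixel_losses)

# at t=1, lam=0: only still-correct pixels contribute
print(segpgd_weighted_loss([1.0, 2.0], [0, 1], [0, 0], t=1, T=10))  # 0.5
```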

Acknowledgements

The code in this repo is partially based on the following publicly available codebases.
