Removing Batch Normalization Boosts Adversarial Training

This is the official implementation of the paper "Removing Batch Normalization Boosts Adversarial Training" (ICML 2022).

Previous adversarial training (AT) methods improve model robustness but significantly decrease model accuracy on clean samples. Our Normalizer-Free Adversarial Training (NoFrost) achieves high adversarial robustness with almost no sacrifice in clean accuracy, by removing batch normalization (BN) from AT. The intuition is that BN struggles to model the mixture distribution of clean and adversarial images, as observed in previous works.
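For intuition, the nf_resnet50 backbone used below follows the normalizer-free ResNet design, which typically replaces BN with scaled weight-standardized convolutions so that no batch statistics are needed at all. A rough sketch of such a layer, for illustration only (not the exact implementation in this repository):

```python
# Rough sketch of a scaled weight-standardized convolution, the usual BN
# replacement in normalizer-free ResNets. Illustrative only; the actual
# nf_resnet50 layers in this repository may differ in details.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledWSConv2d(nn.Conv2d):
    """Conv2d whose filters are standardized at every forward pass."""

    def __init__(self, *args, eps=1e-4, **kwargs):
        super().__init__(*args, **kwargs)
        self.gain = nn.Parameter(torch.ones(self.out_channels, 1, 1, 1))
        self.eps = eps

    def forward(self, x):
        w = self.weight
        fan_in = w[0].numel()
        # Zero-mean, unit-variance filters scaled by fan-in: the layer needs
        # no activation statistics, so there is no BN-style train/test gap.
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        var = w.var(dim=(1, 2, 3), keepdim=True)
        w = self.gain * (w - mean) / (var * fan_in + self.eps).sqrt()
        return F.conv2d(x, w, self.bias, self.stride, self.padding,
                        self.dilation, self.groups)
```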

The performance on ImageNet using ResNet50 is summarized below.

| Method | Clean accuracy | PGD robustness |
| --- | --- | --- |
| Normal training (with BN) | 76.06% | 0% |
| Adversarial training (with BN) | 59.28% | 13.57% |
| NoFrost (without normalizer) | 74.06% | 22.45% |

Training

NoFrost training

NCCL_P2P_DISABLE=1 python PGDAT.py --gpu 0,1,2,3,4,5,6,7 --md nf_resnet50 -e 90 -b 256 --eps 8 --steps 10 --Lambda 0.5 --ddp --dist_url tcp://localhost:23456 --srp <where_to_save_the_ckpts> --dd <where_the_ImageNet_dataset_is_stored>

The path indicated by --dd should contain the train and val folders of the original ImageNet dataset.
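Reading the flags: --eps 8 is an L-infinity budget in pixel units (i.e. 8/255 on inputs scaled to [0, 1]), --steps 10 is the number of PGD steps in the inner loop, and --Lambda 0.5 presumably weights the adversarial term of the loss. A simplified PyTorch sketch of one such training step (illustrative only; the exact objective and step size in PGDAT.py may differ):

```python
# Simplified sketch of one adversarial training step on a mixture of clean
# and PGD examples. The roles of eps, steps, and lambda_adv are assumptions
# about how the --eps / --steps / --Lambda flags are used in PGDAT.py.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # L-infinity PGD starting from a random point in the eps-ball around x.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def nofrost_step(model, optimizer, x, y, lambda_adv=0.5):
    # One update on clean and adversarial views of the same batch;
    # lambda_adv mirrors the --Lambda flag (assumed role).
    x_adv = pgd_attack(model, x, y, eps=8/255, steps=10)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) \
         + lambda_adv * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```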

NoFrost* training

NCCL_P2P_DISABLE=1 python CRT.py --gpu 0,1,2,3,4,5,6,7 --eps 8 --steps 10 --Lambda 0.5 --md nf_resnet50 --ddp --dist_url tcp://localhost:23456 --srp <where_to_save_the_ckpts> --drp <where_all_the_datasets_are_stored>

The path indicated by --drp should contain a folder named imagenet which stores the ImageNet dataset (with two subfolders named train and val), as well as folders named imagenet-DeepAug-CAE and imagenet-DeepAug-EDSR which store the images generated by DeepAugment.

Please see deepaug/readme.md on how to generate DeepAugment images. Note that this is only necessary if you want to train NoFrost*; training NoFrost only requires the original ImageNet dataset.
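For reference, a minimal torchvision sketch of how the three folders under --drp could be combined into a single training set (the layout is assumed; CRT.py's actual data pipeline may differ):

```python
# Minimal sketch only: combine ImageNet with the two DeepAugment folders,
# assuming each follows the standard class-subfolder (ImageFolder) layout.
import os
from torch.utils.data import ConcatDataset
from torchvision import datasets, transforms

drp = "/path/to/datasets"  # the value passed to --drp (placeholder)
tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
train_set = ConcatDataset([
    datasets.ImageFolder(os.path.join(drp, "imagenet", "train"), tf),
    datasets.ImageFolder(os.path.join(drp, "imagenet-DeepAug-CAE"), tf),
    datasets.ImageFolder(os.path.join(drp, "imagenet-DeepAug-EDSR"), tf),
])
```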

Pretrained models

Pretrained models are available on Google Drive.

Evaluation

Evaluate clean accuracy

python test.py --mode clean --gpu 0 --tb 1000 --md nf_resnet50 --ckpt_path <where_you_stored_the_ckpt> --drp <where_all_the_datasets_are_stored>

Evaluate adversarial robustness

python test.py --mode pgd --eps 8 --steps 20 --gpu 0 --md nf_resnet50 --ckpt_path <where_you_stored_the_ckpt> --drp <where_all_the_datasets_are_stored>

Set the attack type using --mode and the hyper-parameters using --eps and --steps.
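test.py implements the attack itself; as an independent sanity check, a comparable L-infinity PGD evaluation can be run with the third-party torchattacks package, matching --eps 8 (i.e. 8/255) and --steps 20 (the 2/255 step size here is an assumption):

```python
# Independent robustness check using torchattacks (not used by test.py);
# eps and steps mirror --eps 8 (8/255) and --steps 20, alpha is assumed.
import torch
import torchattacks

def robust_accuracy(model, loader, device="cuda"):
    model.eval().to(device)
    attack = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=20)
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = attack(x, y)  # generate adversarial examples
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.size(0)
    return 100.0 * correct / total
```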

Evaluate robustness against natural distribution shifts

python test.py --mode c --gpu 0 --tb 1000 --md nf_resnet50 --ckpt_path <where_you_stored_the_ckpt> --drp <where_all_the_datasets_are_stored>

Change the --mode argument for different robustness benchmarks. For example, --mode c is for ImageNet-C and --mode r is for ImageNet-R. You will need to download those benchmark datasets on your own and store them in the path indicated by --drp.
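For reference, ImageNet-C is distributed as one folder per corruption type with subfolders for severities 1-5; a hedged sketch of averaging top-1 accuracy over them (test.py may aggregate differently, e.g. report mCE instead of plain accuracy):

```python
# Sketch only: average top-1 accuracy over ImageNet-C corruptions and
# severities, assuming the standard <corruption>/<severity>/<class> layout.
import os
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def imagenet_c_accuracy(model, root, device="cuda", batch_size=256):
    model.eval().to(device)
    tf = transforms.Compose([transforms.CenterCrop(224), transforms.ToTensor()])
    accs = []
    for corruption in sorted(os.listdir(root)):
        for severity in ["1", "2", "3", "4", "5"]:
            ds = datasets.ImageFolder(os.path.join(root, corruption, severity), tf)
            loader = DataLoader(ds, batch_size=batch_size, num_workers=8)
            correct = total = 0
            with torch.no_grad():
                for x, y in loader:
                    x, y = x.to(device), y.to(device)
                    correct += (model(x).argmax(1) == y).sum().item()
                    total += y.size(0)
            accs.append(correct / total)
    return 100.0 * sum(accs) / len(accs)
```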

Acknowledgement

The entire deepaug/ folder is adapted from the original DeepAugment repository, released under the MIT license.

Citation

@inproceedings{wang2022removing,
  title={Removing Batch Normalization Boosts Adversarial Training},
  author={Wang, Haotao and Zhang, Aston and Zheng, Shuai and Shi, Xingjian and Li, Mu and Wang, Zhangyang},
  booktitle={International Conference on Machine Learning},
  pages={23433--23445},
  year={2022}
}

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.
