This repository contains the code for the ICML 2023 paper "Improving adversarial robustness by putting more regularizations on less robust samples" by Dongyoon Yang, Insung Kong, and Yongdai Kim.
To train ARoW on CIFAR-10 with ResNet-18 and SWA, run the command below; it sets the regularization parameter (--lamb) to 7 and the label-smoothing factor (--ls) to 0.2.
python main.py --loss arow --dataset cifar10 --swa --model resnet18 --lamb 7 --ls 0.2
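For reference, below is a minimal sketch of the kind of weighted regularization that the arow loss implements: label-smoothed cross-entropy on clean inputs plus a KL regularizer whose per-sample weight grows for less robust samples. The exact form (including the KL direction and whether the weight is detached) follows the paper; the function and tensor names here are illustrative, not the repository's API.

```python
import torch
import torch.nn.functional as F

def arow_loss_sketch(model, x, x_adv, y, lamb=7.0, ls=0.2):
    """Illustrative ARoW-style objective (not the repo's implementation)."""
    logits_clean = model(x)
    logits_adv = model(x_adv)

    # Label-smoothed cross-entropy on the clean examples (the --ls flag).
    ce = F.cross_entropy(logits_clean, y, label_smoothing=ls)

    # Per-sample weight: the lower the adversarial probability of the true
    # class, the less robust the sample, and the larger its weight.
    prob_adv = F.softmax(logits_adv, dim=1)
    weight = (1.0 - prob_adv.gather(1, y.unsqueeze(1)).squeeze(1)).detach()

    # Per-sample KL divergence between the clean and adversarial
    # predictive distributions, scaled by --lamb.
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1),
                  F.softmax(logits_clean, dim=1),
                  reduction='none').sum(dim=1)

    return ce + lamb * (weight * kl).mean()
```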
Note that for CIFAR-100 we set --perturb_loss to ce for training stability:
python main.py --loss arow --dataset cifar100 --swa --model resnet18 --lamb 7 --ls 0.2 --perturb_loss ce
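For context, the sketch below shows what switching the inner-maximization objective between a KL term and plain cross-entropy might look like in a standard PGD attack. This is a generic PGD sketch under assumed defaults (eps = 8/255, 10 steps), not the repository's attack code.

```python
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=8/255, alpha=2/255, steps=10,
                perturb_loss='kl'):
    """Illustrative PGD inner maximization; perturb_loss selects the
    objective used to craft the adversarial example."""
    # Random start inside the L-inf ball, clipped to the valid image range.
    x_adv = torch.clamp(x + torch.empty_like(x).uniform_(-eps, eps),
                        0.0, 1.0).detach()

    if perturb_loss == 'kl':
        with torch.no_grad():
            p_clean = F.softmax(model(x), dim=1)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        if perturb_loss == 'ce':
            loss = F.cross_entropy(logits, y)
        else:  # 'kl': push adversarial predictions away from clean ones
            loss = F.kl_div(F.log_softmax(logits, dim=1), p_clean,
                            reduction='batchmean')
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascent step, then project back into the eps-ball and [0, 1].
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()
```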
Trained models can be evaluated by running eval.py, which reports standard accuracy as well as robust accuracy against PGD and AutoAttack.
python eval.py --datadir {data_dir} --model_dir {model_dir} --swa --model resnet18 --attack_method autoattack
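Evaluation can also be run programmatically with the autoattack package. Below is a minimal sketch assuming a CIFAR-10 checkpoint and the usual L-inf budget of 8/255; the checkpoint path and the stand-in torchvision architecture are hypothetical, and eval.py defines the actual settings.

```python
import torch
from torchvision import datasets, transforms
from torchvision.models import resnet18
from autoattack import AutoAttack

# Stand-in architecture; in practice, build the repo's ResNet-18 and load
# the checkpoint saved under model_dir (the path below is hypothetical).
model = resnet18(num_classes=10)
model.load_state_dict(torch.load('model_dir/checkpoint.pt'))
model.eval().cuda()

testset = datasets.CIFAR10(root='./data', train=False, download=True,
                           transform=transforms.ToTensor())
x = torch.stack([img for img, _ in testset]).cuda()
y = torch.tensor(testset.targets).cuda()

# Standard AutoAttack evaluation; robust accuracy is printed per attack.
adversary = AutoAttack(model, norm='Linf', eps=8/255, version='standard')
x_adv = adversary.run_standard_evaluation(x, y, bs=128)
```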
If you find this work useful, please cite:

@inproceedings{dongyoon2023improving,
  title     = {Improving adversarial robustness by putting more regularizations on less robust samples},
  author    = {Dongyoon Yang and Insung Kong and Yongdai Kim},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  url       = {https://arxiv.org/abs/2206.03353}
}