RAMP: Boosting Adversarial Robustness Against Multiple $l_p$ Perturbations
Enyi Jiang, Gagandeep Singh
arXiv
We present RAMP, a framework that boosts multiple-norm robustness by alleviating the tradeoffs in robustness among multiple $l_p$ perturbations.
We recommend first creating a conda environment using the provided environment.yml:

```shell
conda env create -f environment.yml
```
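After creating the environment, activate it before running any of the training or evaluation scripts. The environment name `ramp` below is an assumption; the actual name is set by the `name:` field in `environment.yml`:

```shell
conda activate ramp
```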
- Main Result: The files `RAMP.py` and `RAMP_wide_resnet.py` allow us to train ResNet-18 and WideResNet models with standard choices of epsilons. To reproduce the results in the paper, one can run `RAMP_scratch_cifar10.sh` in folder `scripts/cifar10`.
- Varying Epsilon Values: We provide scripts `run_ramp_diff_eps_scratch.sh` (RAMP), `run_max_diff_eps_scratch.sh` (MAX), and `run_eat_diff_eps_scratch.sh` (E-AT) in folder `scripts/cifar10` for running the training-from-scratch experiments with different choices of epsilons.
- To get pretrained versions of ResNet-18 models with different epsilon values, one can run the `pretrain_diff_eps_Lp.sh` scripts in folder `scripts/cifar10`.
- It is also possible to use models from the Model Zoo of RobustBench with `--model_name=RB_{}`, inserting the identifier of the classifier from the Model Zoo (these are downloaded automatically). (Credits to the E-AT paper.)
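The from-scratch runs above can be sketched as follows, assuming the `.sh` scripts are launched with `bash` from inside `scripts/cifar10` (the invocation style is an assumption; check the scripts for any required arguments):

```shell
cd scripts/cifar10
# Main result: train RAMP from scratch on CIFAR-10
bash RAMP_scratch_cifar10.sh
# Varying epsilon values, one script per method
bash run_ramp_diff_eps_scratch.sh   # RAMP
bash run_max_diff_eps_scratch.sh    # MAX
bash run_eat_diff_eps_scratch.sh    # E-AT
```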
- Main Result: To reproduce the results in the paper with different model architectures, one can run `RAMP_finetune_cifar10.sh` in folder `scripts/cifar10` and `RAMP_finetune_imagenet.sh` in folder `scripts/imagenet`.
- Varying Epsilon Values: We provide scripts `run_ramp_diff_eps_finetune.sh` (RAMP), `run_max_diff_eps_finetune.sh` (MAX), and `run_eat_diff_eps_finetune.sh` (E-AT) in folder `scripts/cifar10` for running the robust fine-tuning experiments with different choices of epsilons.
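Similarly, the fine-tuning runs can be sketched as follows (again assuming `bash` invocation from each script's folder):

```shell
cd scripts/cifar10
bash RAMP_finetune_cifar10.sh        # CIFAR-10 main result
bash run_ramp_diff_eps_finetune.sh   # varying epsilons (RAMP)
cd ../imagenet
bash RAMP_finetune_imagenet.sh       # ImageNet main result
```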
With `--final_eval`, our standard evaluation (with APGD-CE and APGD-T, for a total of 10 restarts of 100 steps) is run for all threat models at the end of training. Specifying `--eval_freq=k` runs a fast evaluation on test and training points every `k` epochs.
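For example, the two evaluation flags can be combined on a training run; using `RAMP.py` as the entry point and `k=5` are assumptions for illustration:

```shell
# Fast evaluation every 5 epochs, full APGD evaluation at the end
python RAMP.py --eval_freq=5 --final_eval
```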
To evaluate a trained model, one can run `eval.py` with `--model_name` as above for a pretrained model, or `--model_name=/path/to/checkpoint/` for newly trained or fine-tuned classifiers. The corresponding architecture is loaded automatically if the run has the automatically generated name. More details about the evaluation options can be found in `eval.py`.
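Both evaluation modes follow the same pattern. The RobustBench identifier below (`Carmon2019Unlabeled`) is one real Model Zoo entry chosen for illustration, and the checkpoint path is a placeholder:

```shell
# Evaluate a pretrained model from the RobustBench Model Zoo
python eval.py --model_name=RB_Carmon2019Unlabeled
# Evaluate a locally trained or fine-tuned checkpoint
python eval.py --model_name=/path/to/checkpoint/
```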
Parts of the code in this repo are based on