Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing

This repository is the official code base for the paper Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing.

We publicly share one CIFAR-10 model and two CIFAR-100 models, all designed to defend against $\ell_\infty$ attacks. Each proposed model relies on an accurate base classifier, a robust base classifier, and an optional "mixing network". The two CIFAR-100 models share the same accurate base classifier but use different robust base classifiers and mixing networks. The results are as follows:

| Model | Clean Accuracy | AutoAttack Accuracy ($\ell_\infty$, $\epsilon = 8/255$) |
|---|---|---|
| CIFAR-10 | 95.23 % | 68.06 % |
| CIFAR-100 Model 1 | 85.21 % | 38.72 % |
| CIFAR-100 Model 2 | 80.18 % | 35.15 % |

These results are also verified and listed on RobustBench.
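
The composite model described above can be sketched schematically in PyTorch. This is an illustrative sketch only, not the repository's actual implementation: `accurate_model`, `robust_model`, and `mixing_net` stand for pre-built modules, and the mixing network is assumed to emit one scalar per input.

```python
# Illustrative sketch only -- not the repository's implementation.
import torch
import torch.nn as nn


class AdaptiveSmoothingSketch(nn.Module):
    """Per-input convex combination of two base classifiers' outputs,
    with the mixing weight predicted by a small mixing network."""

    def __init__(self, accurate_model, robust_model, mixing_net):
        super().__init__()
        self.accurate_model = accurate_model  # accurate base classifier
        self.robust_model = robust_model      # robust base classifier
        self.mixing_net = mixing_net          # optional mixing network

    def forward(self, x):
        out_acc = self.accurate_model(x)
        out_rob = self.robust_model(x)
        # Squash the mixing network's scalar output into a weight in (0, 1).
        alpha = torch.sigmoid(self.mixing_net(x))
        return (1 - alpha) * out_acc + alpha * out_rob
```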

Citing our work (BibTeX)

@article{bai2023improving,
  title={Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing},
  author={Bai, Yatong and Anderson, Brendon G and Kim, Aerin and Sojoudi, Somayeh},
  journal={arXiv preprint arXiv:2301.12554},
  year={2023}
}

Running RobustBench to replicate the results

Running the RobustBench benchmark requires only the pytorch, torchvision, numpy, click, and robustbench packages.
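
One way to install them (note that pytorch is distributed on PyPI as torch, and robustbench is typically installed directly from its GitHub repository):

pip install torch torchvision numpy click
pip install git+https://github.com/RobustBench/robustbench.git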

Make a directory <YOUR_MODEL_ROOT_DIR> at a desired path to store the model checkpoints. Then, download the following models:

  • Accurate base classifier: Big Transfer (BiT) ResNet-152 model finetuned on CIFAR-100 -- download.
  • Robust base classifier 1: WideResNet-70-16 model from this repo -- download and rename as cifar100_linf_edm_wrn70-16.pt.
    • This model was trained on additional images generated by an EDM diffusion model.
  • Robust base classifier 2: WideResNet-70-16 model from this repo -- download and rename as cifar100_linf_trades_wrn70-16.pt.
  • Mixing network to be coupled with robust base classifier 1 -- download.
  • Mixing network to be coupled with robust base classifier 2 -- download.

Edited on August 3, 2023:

We have added a CIFAR-10 model to our results.

  • The accurate base classifier is a Big Transfer (BiT) ResNet-152 model finetuned on CIFAR-10 -- download.
  • The robust base classifier is a WideResNet-70-16 model from this repo -- download and rename as cifar10_linf_edm_wrn70-16.pt.
  • The corresponding mixing network -- download.

Now, organize <YOUR_MODEL_ROOT_DIR> following the structure below:

<YOUR_MODEL_ROOT_DIR>
│
├───Base
│   │   cifar100_linf_edm_wrn70-16.pt
│   │   cifar100_linf_trades_wrn70-16.pt
│   │   cifar10_linf_edm_wrn70-16.pt
│   │   cifar100_bit_rn152.tar
│   │   cifar10_bit_rn152.tar
│
└───CompModel
    │   cifar100_edm_best.pt
    │   cifar100_trades_best.pt
    │   cifar10_edm_best.pt
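
Once the checkpoints are downloaded, a short script such as the following sketch can confirm the layout (replace the root path with your actual <YOUR_MODEL_ROOT_DIR>; the file names are the ones listed above):

```python
# Sanity-check the expected checkpoint layout described in this README.
from pathlib import Path

root = Path("<YOUR_MODEL_ROOT_DIR>")  # replace with your actual path
expected = [
    "Base/cifar100_linf_edm_wrn70-16.pt",
    "Base/cifar100_linf_trades_wrn70-16.pt",
    "Base/cifar10_linf_edm_wrn70-16.pt",
    "Base/cifar100_bit_rn152.tar",
    "Base/cifar10_bit_rn152.tar",
    "CompModel/cifar100_edm_best.pt",
    "CompModel/cifar100_trades_best.pt",
    "CompModel/cifar10_edm_best.pt",
]
for rel in expected:
    status = "found" if (root / rel).is_file() else "MISSING"
    print(f"{status}: {root / rel}")
```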

To benchmark existing models with RobustBench, run the following:

python run_robustbench.py --root_dir <YOUR_MODEL_ROOT_DIR> --dataset {cifar10,cifar100} --model_name {edm,trades}
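
run_robustbench.py assembles the model and invokes the benchmark for you. If you instead want to evaluate an already-assembled model object, robustbench exposes a benchmark helper; below is a rough sketch, assuming `model` is the composite classifier in eval mode (the exact keyword arguments depend on your robustbench version):

```python
# Rough sketch: evaluating an assembled model with RobustBench directly.
# `model` is assumed to be the composite classifier built beforehand.
import torch
from robustbench.eval import benchmark

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
clean_acc, robust_acc = benchmark(
    model.to(device).eval(),
    dataset="cifar100",   # or "cifar10"
    threat_model="Linf",
    eps=8 / 255,
    n_examples=1000,      # the full benchmark uses 10000
    device=device,
)
print(f"Clean accuracy: {clean_acc:.4f}, robust accuracy: {robust_acc:.4f}")
```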

Note that while the base classifiers may require additional (collected or synthesized) training data, the provided mixing networks were trained only on the standard CIFAR training data.

Training a new model

To train a new model with the provided code, install the full environment. The following packages are required: pytorch, torchvision, tensorboard, pytorch_warmup, numpy, scipy, matplotlib, jupyter, notebook, ipykernel, ipywidgets, tqdm, click, and PyYAML.
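
A possible installation command (on PyPI, pytorch is distributed as torch and pytorch_warmup as pytorch-warmup):

pip install torch torchvision tensorboard pytorch-warmup numpy scipy matplotlib jupyter notebook ipykernel ipywidgets tqdm click PyYAML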

To train, run the following:

python run.py --training --config configs/xxx.yaml

To evaluate, run the following:

python run.py --eval --config configs/xxx.yaml
