
Training Adversarially Robust Sparse Networks via Bayesian Connectivity Sampling

This is the code repository of the following paper on end-to-end adversarial training of neural networks with sparse connectivity.

"Training Adversarially Robust Sparse Networks via Bayesian Connectivity Sampling"
Ozan Özdenizci, Robert Legenstein
International Conference on Machine Learning (ICML), 2021.

The repository supports sparse training of models with the robust training objectives explored in the paper, and provides saved weights of the adversarially trained sparse networks presented therein.

Setup

You will need TensorFlow 2 to run this code. You can simply start by executing:

pip install -r requirements.txt

to install all dependencies and use the repository.

Usage

You can use run_connectivity_sampling.py to adversarially train sparse networks from scratch. Brief descriptions of the possible arguments:

  • --data: "cifar10", "cifar100", "svhn"
  • --model: "vgg16", "resnet18", "resnet34", "resnet50", "wrn28_2", "wrn28_4", "wrn28_10", "wrn34_10"
  • --objective: "at" (Standard AT), "mat" (Mixed-batch AT), "trades", "mart", "rst" (intended for CIFAR-10)
  • --sparse_train: enable end-to-end sparse training
  • --connectivity: sparse connectivity ratio (used when --sparse_train is enabled)
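The --connectivity argument specifies the fraction of network weights that remain active. As a rough illustration of what this ratio means (a uniformly random mask, not the paper's Bayesian connectivity sampling procedure):

```python
import numpy as np

def sample_connectivity_mask(shape, connectivity, rng):
    """Return a binary mask keeping a `connectivity` fraction of entries.

    Simplified uniform sampling, only to illustrate the --connectivity
    ratio; the paper samples connectivity via a Bayesian posterior.
    """
    n = int(np.prod(shape))
    k = int(round(connectivity * n))
    mask = np.zeros(n, dtype=np.float32)
    idx = rng.choice(n, size=k, replace=False)  # indices of surviving weights
    mask[idx] = 1.0
    return mask.reshape(shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
mask = sample_connectivity_mask(w.shape, connectivity=0.01, rng=rng)
w_sparse = w * mask  # 99% of the weights are zeroed out
```

With --connectivity 0.01 the resulting network is at 99% sparsity, matching the saved models described below.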

Remarks:

  • For the --data "svhn" option, you will need to create the directory datasets/SVHN/ and place the SVHN dataset's train and test .mat files there.
  • Robust self-training (RST) via --objective "rst" is based on the TRADES loss. To use RST for CIFAR-10 as described in this repository, you need to place the pseudo-labeled TinyImages file at datasets/tinyimages/ti_500K_pseudo_labeled.pickle.
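For context, the TRADES objective (on which the RST option builds) combines clean cross-entropy with a KL divergence term between predictions on clean and adversarial inputs. A minimal numpy sketch of the loss, not the repository's TensorFlow implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=-1, keepdims=True)

def trades_loss(logits_clean, logits_adv, y, beta):
    """CE(clean, y) + beta * KL(p_clean || p_adv), averaged over the batch."""
    p, q = softmax(logits_clean), softmax(logits_adv)
    ce = -np.log(p[np.arange(len(y)), y] + 1e-12).mean()
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(-1).mean()
    return ce + beta * kl

logits = np.array([[2.0, 0.0], [0.0, 2.0]])
y = np.array([0, 1])
# With identical clean and adversarial logits, the KL term vanishes and
# only the clean cross-entropy remains, regardless of beta.
loss = trades_loss(logits, logits, y, beta=6.0)
```

The beta coefficient trades off clean accuracy against robustness; in practice the adversarial logits come from inputs perturbed to maximize the KL term.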

End-to-end robust training for sparse networks

The following sample scripts can be used to adversarially train sparse networks from scratch, and also to perform white-box robustness evaluations using PGD attacks via Foolbox.

  • robust_sparse_train_standardAT.sh: Standard adversarial training for a sparse ResNet-50 on CIFAR-10.
  • robust_sparse_train_TRADES.sh: Robust training with TRADES for a sparse VGG-16 on CIFAR-100.
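The PGD attacks run by these scripts iterate projected gradient ascent on the loss within an L-infinity ball around each input. A hedged numpy sketch on a hypothetical toy model with a constant gradient, only to show the random start, signed steps, and projection:

```python
import numpy as np

def pgd_linf(grad_fn, x, eps, step, iters, rng):
    """L-inf PGD: random start, signed gradient ascent steps, projection to the eps-ball."""
    delta = rng.uniform(-eps, eps, size=x.shape)  # random start inside the ball
    for _ in range(iters):
        g = grad_fn(x + delta)
        delta = np.clip(delta + step * np.sign(g), -eps, eps)  # step and project
    return x + delta

# Toy example: the loss is w . x, so its gradient is the constant vector w.
w = np.array([1.0, -2.0, 0.5])
grad_fn = lambda x: w
x = np.zeros(3)
x_adv = pgd_linf(grad_fn, x, eps=8/255, step=2/255, iters=50,
                 rng=np.random.default_rng(0))
# After enough iterations, each coordinate saturates at the eps-ball boundary
# in the direction of its gradient sign.
```

In the repository, this loop is performed by Foolbox on the full network, with multiple random restarts controlled by --pgd_restarts.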

Saved model weights

We share our adversarially trained sparse models at 90% and 99% sparsity for the CIFAR-10, CIFAR-100, and SVHN datasets, as presented in the paper. Different evaluation runs may naturally yield slight deviations from the numbers reported in the paper.

Sparse networks with TRADES robust training objective

Sparse networks with Standard AT for CIFAR-10

These are sparse models trained with standard AT on CIFAR-10 (without additional pseudo-labeled images), corresponding to the models presented in Figure 1 and Table 4 of the paper.

An example on how to evaluate saved model weights

Originally we store the learned model weights in pickle dictionaries; however, to enable benchmark evaluations with Foolbox and AutoAttack, we convert and load these saved weight dictionaries into equivalent Keras models for compatibility.
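A pickle weight dictionary of this kind is just a mapping from names to numpy arrays. A minimal round-trip sketch (the key names and shapes here are illustrative assumptions, not the repository's actual layout):

```python
import io
import pickle
import numpy as np

# Hypothetical weight dictionary; the actual keys in the repository's
# pickle files are an assumption here.
weights = {
    "conv1/kernel": np.random.default_rng(0).standard_normal((3, 3, 3, 16)),
    "conv1/bias": np.zeros(16),
}

# Serialize and deserialize in memory, as one would with the saved files.
buf = io.BytesIO()
pickle.dump(weights, buf)
buf.seek(0)
loaded = pickle.load(buf)

# Each array can then be copied into the matching Keras layer,
# e.g. layer.set_weights([loaded["conv1/kernel"], loaded["conv1/bias"]]).
```

run_foolbox_eval.py performs this conversion automatically when loading saved weights.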

Consider the last pickle file above, which corresponds to the ResNet-50 model weights at 99% sparsity trained via standard AT on CIFAR-10. Place this file so that it is accessible at results/cifar10/resnet50/sparse1_at_best_weights.pickle. You can then use run_foolbox_eval.py to load these network weights into a Keras model and evaluate robustness against PGD50 attacks as follows:

python run_foolbox_eval.py --data "cifar10" --n_classes 10 --model "resnet50" --objective "at" --sparse_train --connectivity 0.01 --pgd_iters 50 --pgd_restarts 10

Reference

If you use this code or models in your research and find it helpful, please cite the following paper:

@inproceedings{ozdenizci2021icml,
  title={Training adversarially robust sparse networks via Bayesian connectivity sampling},
  author={Ozan \"{O}zdenizci and Robert Legenstein},
  booktitle={International Conference on Machine Learning},
  pages={8314--8324},
  year={2021},
  organization={PMLR}
}

Acknowledgments

Authors of this work are affiliated with Graz University of Technology, Institute of Theoretical Computer Science, and Silicon Austria Labs, TU Graz - SAL Dependable Embedded Systems Lab, Graz, Austria. This work has been supported by the "University SAL Labs" initiative of Silicon Austria Labs (SAL) and its Austrian partner universities for applied fundamental research for electronic based systems. This work is also partially supported by the Austrian Science Fund (FWF) within the ERA-NET CHIST-ERA programme (project SMALL, project number I 4670-N).

Parts of this code repository are based on the following works by the machine learning community.