AdvFlow

Hadi M. Dolatabadi, Sarah Erfani, and Christopher Leckie, 2020

License: MIT

This is the official implementation of the NeurIPS 2020 paper AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows. A small part of this work, Greedy AdvFlow, was published in the ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models. A blog post explaining our approach can be found here.

Requirements

To install requirements:

pip install -r requirements.txt

Training Normalizing Flows

To train a flow-based model, first set mode = 'pre_training' and specify all relevant variables in config.py. Once specified, run this command:

python train.py
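
For concreteness, the pre-training block of config.py might look like the sketch below. Apart from mode, every variable name and value here is an illustrative assumption; consult config.py itself for the authoritative list.

```python
# config.py -- illustrative pre-training setup. Only `mode` is documented in
# this README; the remaining names and values are assumptions for illustration.
mode = 'pre_training'    # train the normalizing flow on clean data
dataset = 'cifar10'      # hypothetical: dataset to fit the flow on
lr = 1e-4                # hypothetical: optimizer learning rate
n_epochs = 350           # hypothetical: number of training epochs
```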

Attack Evaluation

To perform the AdvFlow black-box adversarial attack, first set mode = 'attack' in config.py. Also, specify the dataset, the target model architecture, and the path to its weights by setting the dataset, target_arch, and target_weight_path variables in config.py, respectively. Once specified, run:

python attack.py

for CIFAR-10, SVHN, and CelebA. For ImageNet, however, you need to run:

python attack_imagenet.py
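
As a hedged example, a CIFAR-10 attack run could be configured as follows. Only mode, dataset, target_arch, and target_weight_path are named in this README; the values shown are placeholders.

```python
# config.py -- illustrative attack setup; all values are placeholders
mode = 'attack'
dataset = 'cifar10'      # 'svhn'/'celeba' also use attack.py; ImageNet uses attack_imagenet.py
target_arch = 'resnet'   # placeholder: target classifier architecture
target_weight_path = './checkpoints/target.pth'  # placeholder path to target weights
```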

Finally, you can run Greedy AdvFlow with:

python attack_greedy.py
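
For intuition, the sketch below shows the general shape of the mechanism AdvFlow builds on: an NES-style black-box search (inherited from NATTACK) carried out in the latent space of a pretrained normalizing flow. It is a minimal toy sketch, not this repository's implementation; the flow, the loss, and all names are stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_map(z):
    # Stand-in for a pretrained normalizing flow's latent -> perturbation map.
    # A real flow is an invertible network; tanh merely bounds the output.
    return np.tanh(z)

def black_box_loss(x_adv, label):
    # Stand-in for query-only access to the target classifier: returns a score
    # that grows as the (toy) model gets closer to misclassifying x_adv.
    score = x_adv.mean()              # toy "logit": mean intensity
    return score if label == 0 else -score

def advflow_style_attack(x, label, steps=200, pop=30, sigma=0.1,
                         lr=0.02, eps=8 / 255):
    """NES-style search over a Gaussian in the flow's latent space (2-D toy input)."""
    mu = np.zeros_like(x)             # mean of the latent search distribution
    for _ in range(steps):
        noise = rng.standard_normal((pop,) + x.shape)
        z = mu[None] + sigma * noise  # population of latent candidates
        delta = eps * flow_map(z)     # flow output scaled into the eps-ball
        x_adv = np.clip(x[None] + delta, 0.0, 1.0)
        losses = np.array([black_box_loss(xa, label) for xa in x_adv])
        losses = (losses - losses.mean()) / (losses.std() + 1e-8)  # normalize scores
        grad = (losses[:, None, None] * noise).mean(axis=0) / sigma  # NES gradient estimate
        mu = mu + lr * grad           # ascend the adversarial loss
    return np.clip(x + eps * flow_map(mu), 0.0, 1.0)

# Toy usage: perturb a random 8x8 "image" whose true label is 0.
x = rng.random((8, 8))
x_adv = advflow_style_attack(x, label=0)
```

Because the search distribution lives in the flow's base space, candidates pushed through the flow stay close to the data distribution, which is what makes the resulting adversaries hard for distribution-based detectors to flag (see the Results section below).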

Pre-trained Models

Pre-trained flow-based models as well as some target classifiers can be found here.

Results

Fooling Adversarial Example Detectors

The primary assumption of adversarial example detectors is that adversaries come from a different distribution than the data. Here, we attack CIFAR-10 and SVHN classifiers defended by well-known adversarial example detectors, and show that the adversaries generated by our model mislead these detectors more often than those of the related NATTACK method. This suggests that our adversaries follow a distribution close to that of the data.

Table: Area under the receiver operating characteristic curve (AUROC) and accuracy of detecting adversarial examples generated by 𝒩Attack and AdvFlow (un. for un-trained and tr. for pre-trained NF) using the LID, Mahalanobis, and Res-Flow adversarial example detectors.

| Data | Detector | AUROC (%): 𝒩Attack | AUROC (%): AdvFlow (un.) | AUROC (%): AdvFlow (tr.) | Det. Acc. (%): 𝒩Attack | Det. Acc. (%): AdvFlow (un.) | Det. Acc. (%): AdvFlow (tr.) |
|---|---|---|---|---|---|---|---|
| CIFAR-10 | LID | 78.69 | 84.39 | 57.59 | 72.12 | 77.11 | 55.74 |
| CIFAR-10 | Mahalanobis | 97.95 | 99.50 | 66.85 | 95.59 | 97.46 | 62.21 |
| CIFAR-10 | Res-Flow | 97.90 | 99.40 | 67.03 | 94.55 | 97.21 | 62.60 |
| SVHN | LID | 57.70 | 58.92 | 61.11 | 55.60 | 56.43 | 58.21 |
| SVHN | Mahalanobis | 73.17 | 74.67 | 64.72 | 68.20 | 69.46 | 60.88 |
| SVHN | Res-Flow | 69.70 | 74.86 | 64.68 | 64.53 | 68.41 | 61.13 |

Acknowledgement

This repository is mainly built upon FrEIA, the Framework for Easily Invertible Architectures, and NATTACK. We thank the authors of these two repositories.

Citation

If you have found our code or paper beneficial to your research, please consider citing them as:

@inproceedings{dolatabadi2020advflow,
  title={AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows},
  author={Hadi Mohaghegh Dolatabadi and Sarah Erfani and Christopher Leckie},
  booktitle={Advances in Neural Information Processing Systems 33 ({NeurIPS})},
  year={2020}
}
