Procedural Adversarial Perturbations

Model-independent universal black-box attack against computer vision DCNs.

This repository contains sample code and interactive Jupyter notebooks for the papers "Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks" and "Sensitivity of Deep Convolutional Networks to Gabor Noise".

In this work, we show that universal adversarial perturbations can be generated with procedural noise functions without any knowledge of the target model. Procedural noise functions are fast and lightweight methods for generating textures in computer graphics. Our results demonstrate that existing deep convolutional networks for computer vision tasks are vulnerable to these inexpensive patterns.
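
Gabor noise, one of the procedural primitives used here, is a sparse convolution: Gabor kernels (a Gaussian envelope times an oriented cosine) are summed at random positions in the image plane. The following is a minimal numpy sketch of that idea, not the repository's utils_noise.py API; the gabor_noise and perturb helpers, their parameter names, and the L-infinity budget eps are illustrative placeholders.

import numpy as np

def gabor_kernel(size, sigma, freq, theta):
    """2-D Gabor kernel: Gaussian envelope times an oriented cosine wave."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    rot = xx * np.cos(theta) + yy * np.sin(theta)   # coordinate along the wave
    envelope = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * freq * rot)

def gabor_noise(size, sigma=4.0, freq=1 / 8.0, theta=np.pi / 4,
                n_kernels=64, seed=0):
    """Sparse convolution: sum randomly placed, randomly signed Gabor kernels."""
    rng = np.random.default_rng(seed)
    ksize = int(6 * sigma) | 1                      # odd kernel width, ~6 sigma
    kernel = gabor_kernel(ksize, sigma, freq, theta)
    pad = ksize // 2
    canvas = np.zeros((size + 2 * pad, size + 2 * pad))
    for _ in range(n_kernels):
        y, x = rng.integers(0, size, size=2)        # random kernel position
        canvas[y:y + ksize, x:x + ksize] += rng.choice([-1.0, 1.0]) * kernel
    noise = canvas[pad:pad + size, pad:pad + size]
    return noise / (np.abs(noise).max() + 1e-12)    # normalize to [-1, 1]

def perturb(image, eps=16 / 255, **noise_params):
    """Apply the noise to a square HxWxC image in [0, 1] under an L-inf budget."""
    noise = gabor_noise(image.shape[0], **noise_params)
    return np.clip(image + eps * noise[..., None], 0.0, 1.0)

With a random orientation theta per kernel instead of a fixed one, the same construction yields an isotropic variant of the noise.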

We encourage you to explore our Python notebooks and make your own adversarial examples:

  1. intro_bopt: See how Bayesian optimization can find better parameters for the procedural noise functions (a hedged sketch of this search follows the list).

  2. intro_gabor: A brief introduction to Gabor noise.

  3. slider_gabor, slider_perlin: Visualize and interactively play with the noise parameters to see how they affect model predictions (an ipywidgets sketch also follows the list).
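
To give the flavor of the Bayesian optimization in intro_bopt, here is a hedged sketch using GPyOpt; the repository's utils_bopt.py wrappers may differ. model_predict, image, and true_label are placeholders for your own black-box classifier, a test input, and its correct class, and perturb is the hypothetical helper from the sketch above.

import numpy as np
import GPyOpt

def objective(params):
    # One black-box query: confidence in the true class under the perturbation.
    # Lower is better, so the optimizer seeks parameters that degrade it.
    sigma, freq, theta = params[0]
    adv = perturb(image, sigma=sigma, freq=freq, theta=theta)
    return model_predict(adv)[true_label]

bounds = [
    {'name': 'sigma', 'type': 'continuous', 'domain': (1.0, 16.0)},
    {'name': 'freq',  'type': 'continuous', 'domain': (1.0 / 32, 1.0 / 2)},
    {'name': 'theta', 'type': 'continuous', 'domain': (0.0, np.pi)},
]
opt = GPyOpt.methods.BayesianOptimization(f=objective, domain=bounds,
                                          acquisition_type='EI')
opt.run_optimization(max_iter=30)
print('best parameters:', opt.x_opt, 'true-class confidence:', opt.fx_opt)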
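
The slider notebooks pair the noise generators with ipywidgets; a minimal version (not the notebooks' exact code, and reusing the hypothetical gabor_noise helper above) looks like:

import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact

@interact(sigma=(1.0, 16.0), freq=(0.02, 0.5), theta=(0.0, float(np.pi)))
def show(sigma=4.0, freq=0.125, theta=0.8):
    # Re-render the Gabor noise pattern each time a slider moves.
    plt.imshow(gabor_noise(224, sigma=sigma, freq=freq, theta=theta), cmap='gray')
    plt.axis('off')
    plt.show()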

See our paper for more details: "Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks." Kenneth T. Co, Luis Muñoz-González, Sixte de Maupeou, Emil C. Lupu. arXiv 2019.

Python Dependencies

Acknowledgments

Learn more about the Resilient Information Systems Security (RISS) group at Imperial College London. The main author is partially supported by Data Spartan. Data Spartan is not affiliated with the university.

Please cite these papers, where appropriate, if you use code in this repository as part of a published research project.

@article{co2019procedural,
  title={Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks},
  author={Co, Kenneth T and Mu{\~n}oz-Gonz{\'a}lez, Luis and de Maupeou, Sixte and Lupu, Emil C},
  journal={arXiv preprint arXiv:1810.00470},
  year={2019}
}

@inproceedings{co2019sensitivity,
  title={Sensitivity of Deep Convolutional Networks to Gabor Noise},
  author={Co, Kenneth T and Mu{\~n}oz-Gonz{\'a}lez, Luis and Lupu, Emil C},
  booktitle={ICML 2019 Workshop on Identifying and Understanding Deep Learning Phenomena},
  year={2019}
}

This project is licensed under the MIT License; see the LICENSE.md file for details.
