Procedural Noise UAPs

This repository contains sample code and interactive Jupyter notebooks for the paper cited below.

In this work, we show that universal adversarial perturbations can be generated with procedural noise functions without any knowledge of the target model. Procedural noise functions are fast and lightweight methods for generating textures in computer graphics, which makes them well suited to low-cost black-box attacks on deep convolutional networks for computer vision tasks.
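
To make the idea concrete, here is a minimal NumPy sketch of a Gabor-noise-style perturbation: an oriented Gabor kernel scattered at random positions (sparse convolution), scaled to a small L-infinity budget, and added to an arbitrary image. This is an illustration only, not the code in utils_noise.py or the notebooks, and every parameter value (kernel size, frequency, orientation, the 16/255 budget) is an assumption chosen for readability.

# Illustrative sketch (not the repository's code): a Gabor-noise-style
# universal perturbation built by sparse convolution of a Gabor kernel,
# then added to an image under an (assumed) L-infinity budget.
import numpy as np

def gabor_kernel(size, sigma, freq, theta):
    # 2-D Gabor kernel: Gaussian envelope times an oriented cosine wave.
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    wave = np.cos(2.0 * np.pi * freq * (x * np.cos(theta) + y * np.sin(theta)))
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * wave

def gabor_noise(height, width, kernel, n_impulses=200, seed=0):
    # Sparse convolution: scatter the kernel at random positions with random signs.
    rng = np.random.default_rng(seed)
    k = kernel.shape[0]
    canvas = np.zeros((height + k, width + k))
    for _ in range(n_impulses):
        r, c = rng.integers(0, height), rng.integers(0, width)
        canvas[r:r + k, c:c + k] += rng.choice([-1.0, 1.0]) * kernel
    noise = canvas[k // 2:k // 2 + height, k // 2:k // 2 + width]
    return noise / (np.abs(noise).max() + 1e-12)      # normalize to [-1, 1]

# The perturbation is computed once and is image-agnostic ("universal").
kern = gabor_kernel(size=23, sigma=4.0, freq=1.0 / 8.0, theta=np.pi / 4)
perturbation = gabor_noise(224, 224, kern)

eps = 16.0 / 255.0                                    # assumed L-infinity budget
image = np.random.rand(224, 224, 3)                   # stand-in for a real input
adv = np.clip(image + eps * perturbation[..., None], 0.0, 1.0)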

We encourage you to explore our Python notebooks and make your own adversarial examples:

  1. intro_bopt: See how Bayesian optimization can find better parameters for the procedural noise functions (a minimal sketch of this idea follows the list).

  2. intro_gabor: A brief introduction to Gabor noise.

  3. slider_gabor, slider_perlin: Visualize and interactively play with the parameters to see how they affect model predictions (also sketched after the list).
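
The black-box search in intro_bopt can be pictured with the following sketch, which uses scikit-optimize's gp_minimize to tune two noise parameters. The choice of library, the parameter ranges, and the query_model and noise_pattern placeholders are all assumptions for illustration; they do not restate the notebook's actual interface.

# Minimal sketch (assumptions throughout): Bayesian optimization with
# scikit-optimize over two noise parameters, minimizing the probability the
# target model still assigns to the clean label. query_model and noise_pattern
# are placeholders, not the repository's functions.
import numpy as np
from skopt import gp_minimize

def query_model(image):
    # Placeholder for a black-box query returning the model's confidence
    # in the original label (lower means the attack works better).
    return float(np.random.rand())

def noise_pattern(shape, freq, octaves):
    # Placeholder perturbation generator standing in for a procedural noise
    # function parameterized by a frequency and an octave count.
    rng = np.random.default_rng(0)
    base = sum(rng.standard_normal(shape) / (2 ** o) for o in range(int(octaves)))
    return np.sign(np.sin(freq * base))               # values in [-1, 1]

clean = np.zeros((224, 224, 3))                       # stand-in for a real image
eps = 16.0 / 255.0                                    # assumed L-infinity budget

def objective(params):
    freq, octaves = params
    adv = np.clip(clean + eps * noise_pattern(clean.shape, freq, octaves), 0.0, 1.0)
    return query_model(adv)                           # gp_minimize minimizes this

result = gp_minimize(objective,
                     dimensions=[(1.0, 50.0),         # frequency range (assumed)
                                 (1, 4)],             # octave range (assumed)
                     n_calls=20, random_state=0)
print("best parameters:", result.x, "lowest confidence:", result.fun)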

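The slider notebooks build on the same loop: regenerate the perturbation whenever a parameter changes and re-query the model. A minimal ipywidgets version might look like the sketch below, where predict and the toy stripe pattern are placeholders rather than the notebooks' code.

# Minimal sketch of the slider idea with ipywidgets (not the notebooks' code):
# regenerate the noise whenever a slider moves and report a placeholder prediction.
import numpy as np
from ipywidgets import interact, FloatSlider

def predict(image):
    # Placeholder for a real classifier call, e.g. model.predict() in Keras.
    return "label", float(np.random.rand())

def show(freq=8.0, eps=16.0 / 255.0):
    stripes = np.sin(freq * np.linspace(0.0, 2.0 * np.pi, 224))   # toy pattern
    adv = np.clip(np.random.rand(224, 224, 3) + eps * stripes[None, :, None], 0.0, 1.0)
    label, confidence = predict(adv)
    print(f"prediction: {label} ({confidence:.2f}) at freq={freq:.1f}, eps={eps:.3f}")

interact(show,
         freq=FloatSlider(min=1.0, max=50.0, step=1.0, value=8.0),
         eps=FloatSlider(min=0.0, max=0.1, step=0.005, value=16.0 / 255.0))
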
See our paper for more details: "Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks." Kenneth T. Co, Luis Muñoz-González, Emil C. Lupu. CCS 2019.

Python Dependencies

Acknowledgments

Learn more about the Resilient Information Systems Security (RISS) group at Imperial College London. The main author is partially supported by Data Spartan.

Please cite this paper, where appropriate, if you use code in this repository as part of a published research project.

@inproceedings{co2019procedural,
 author = {Co, Kenneth T. and Mu\~{n}oz-Gonz\'{a}lez, Luis and de Maupeou, Sixte and Lupu, Emil C.},
 title = {Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks},
 booktitle = {Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security},
 series = {CCS '19},
 year = {2019},
 isbn = {978-1-4503-6747-9},
 location = {London, United Kingdom},
 pages = {275--289},
 numpages = {15},
 url = {http://doi.acm.org/10.1145/3319535.3345660},
 doi = {10.1145/3319535.3345660},
 acmid = {3345660},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {adversarial machine learning, bayesian optimization, black-box attacks, deep neural networks, procedural noise, universal adversarial perturbations},
}

This project is licensed under the MIT License, see the LICENSE.md file for details.
