# Adversarial Patch

Code from the paper *Adversarial Patch* (Tom B. Brown, Dandelion Mané, Aurko Roy, Martín Abadi, Justin Gilmer).

We present a method to create universal, robust, targeted adversarial image patches in the real world. The patches are universal because they can be used to attack any scene, robust because they work under a wide variety of transformations, and targeted because they can cause a classifier to output any target class. These adversarial patches can be printed, added to any scene, photographed, and presented to image classifiers; even when the patches are small, they cause the classifiers to ignore the other items in the scene and report a chosen target class.

We recommend running this notebook in Colab; it is available at the following URL:

https://colab.research.google.com/drive/1hSq_D5s9FWs2MH6BNyf6cTRBXEGcYO_D