This project aims to reproduce the paper Spatially Transformed Adversarial Examples (Xiao, C., Zhu, J., Li, B., He, W., Liu, M., & Song, D.), which introduces a new method for generating adversarial examples that deep neural networks misclassify, based on spatial transformations of the input rather than additive pixel perturbations.
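In stAdv, each pixel of the adversarial image is bilinearly sampled from a spatially displaced location in the clean image, where the per-pixel displacements form a flow field that is optimized to minimize an adversarial (misclassification) loss plus a flow-smoothness loss weighted by τ. Below is a minimal sketch of the warping step, assuming PyTorch; the actual implementation in the notebook may differ.

```python
# Illustrative sketch of the flow-based warping at the core of stAdv
# (not the notebook's actual code).
import torch
import torch.nn.functional as F

def flow_warp(x, flow):
    """Bilinearly sample an image batch x (N, C, H, W) at positions shifted
    by a per-pixel flow field (N, 2, H, W), as in the stAdv formulation."""
    n, _, h, w = x.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    base = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
    # Convert the pixel-space flow to the same normalized coordinates.
    norm_flow = torch.stack(
        (flow[:, 0] * 2 / max(w - 1, 1), flow[:, 1] * 2 / max(h - 1, 1)), dim=-1
    )
    return F.grid_sample(x, base + norm_flow, mode="bilinear", align_corners=True)
```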
Our blog can be found here, or a PDF version can be seen in Blog_reproduction_stAdv.pdf.
The goal was to reproduce Table 1 and Figure 2 of the original paper, which can be seen below.
To install the required packages, run the following command:
pip install -r requirements.txt
All of our code for implementing stAdv is located in the Jupyter Notebook stAdv.ipynb. To run the notebook, you need to have Jupyter Notebook installed.
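Once Jupyter is installed, the notebook can be launched from the project root, for example with:

jupyter notebook stAdv.ipynb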
The Figures folder contains the figures used in our blog.
The SGD_models folder contains the notebooks written for training models A, B, and C from scratch.
The adv_tests folder contains the adversarial test sets for models A, B, and C. Each set contains the 10,000 adversarial images with random targets used to evaluate the attack success rates; a sketch of how such a success rate can be computed is shown below.
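The (targeted) attack success rate is the fraction of adversarial images that the attacked model classifies as their randomly chosen target class. A minimal sketch, assuming the images and target labels are available as NumPy arrays and the model exposes a function returning class scores; the file names below are hypothetical, not the actual layout of adv_tests.

```python
import numpy as np

def attack_success_rate(predict, adv_images, targets, batch_size=256):
    """Fraction of adversarial images classified as their random target label.

    predict: callable mapping a batch of images to class scores/probabilities.
    adv_images: array of shape (N, ...) holding the adversarial images.
    targets: array of shape (N,) holding the randomly chosen target classes.
    """
    hits = 0
    for i in range(0, len(adv_images), batch_size):
        preds = np.argmax(predict(adv_images[i:i + batch_size]), axis=1)
        hits += int(np.sum(preds == targets[i:i + batch_size]))
    return hits / len(adv_images)

# Hypothetical usage (file names are placeholders):
# adv_images = np.load("adv_tests/model_A_images.npy")   # e.g. (10000, 28, 28, 1)
# targets = np.load("adv_tests/model_A_targets.npy")     # (10000,)
# print(attack_success_rate(model_A.predict, adv_images, targets))
```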