# fgs_dae_defense

The code presented here supplements the material in our IEEE Signal Processing Letters paper, "A Computationally Efficient Method for Defending Adversarial Deep Learning Attacks."

The code in this repository implements a defense against the Fast Gradient Sign Method (FGSM) adversarial attack using denoising autoencoders (DAEs). The defense is demonstrated on both the MNIST digit dataset and the Fashion-MNIST dataset; both datasets are downloaded automatically the first time the code is run.
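For intuition, here is a minimal sketch of the underlying idea: a denoising autoencoder is trained to map perturbed images back to clean ones and is then placed in front of the classifier at inference time. The architecture, noise model, and hyperparameters below are illustrative assumptions, not the models used in the paper (the paper's attacks are generated with CleverHans FGSM rather than the random sign noise used here).

```python
# Minimal DAE-defense sketch (illustrative only, not the paper's models).
import numpy as np
from keras.datasets import mnist
from keras.layers import Dense, Input
from keras.models import Model

# Load MNIST, flatten to vectors, and scale pixels to [0, 1].
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Approximate FGSM-style perturbations with random sign noise of size eps.
eps = 0.25
x_train_noisy = np.clip(
    x_train + eps * np.sign(np.random.randn(*x_train.shape)), 0.0, 1.0)

# A small fully connected denoising autoencoder (hypothetical layer sizes).
inputs = Input(shape=(784,))
encoded = Dense(128, activation="relu")(inputs)
decoded = Dense(784, activation="sigmoid")(encoded)
dae = Model(inputs, decoded)
dae.compile(optimizer="adam", loss="binary_crossentropy")

# Train the DAE to reconstruct clean images from perturbed inputs.
dae.fit(x_train_noisy, x_train, epochs=10, batch_size=128)

# At test time, adversarial examples are denoised before classification:
#     predictions = classifier.predict(dae.predict(x_adv))
```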

To run this code, the following packages must be installed:

- CleverHans - v. 2.1.0
- TensorFlow - v. 2.2.4
- Keras - v. 2.2.4
- NumPy - v. 1.15.2
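A quick way to confirm that the installed versions match the list above is a check along these lines (assuming each package exposes a `__version__` attribute, as these releases do):

```python
# Sanity-check the installed package versions against the list above
# (assumes each package exposes __version__).
import cleverhans
import keras
import numpy
import tensorflow

print("CleverHans:", cleverhans.__version__)
print("TensorFlow:", tensorflow.__version__)
print("Keras:", keras.__version__)
print("NumPy:", numpy.__version__)
```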

The models used to generate the results presented in our IEEE Signal Processing Letters paper are provided in `/digits2/classifiers`, `/digits2/daes`, `/fashion2/classifiers`, and `/fashion2/daes`.

## Running the code

After cloning the repository and installing the packages above, our results can be reproduced by running `defense-demo-sw.py` and `defense-demo-bb.py` in each dataset directory, which produce results for the semi-white-box and black-box threat models, respectively.
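For example, a small driver like the following (hypothetical, not part of the repository; it assumes the demo scripts live in the `digits2` and `fashion2` directories) runs both demos for both datasets:

```python
# Hypothetical helper: run both demo scripts in each dataset directory.
import subprocess
import sys

for directory in ["digits2", "fashion2"]:
    for demo in ["defense-demo-sw.py", "defense-demo-bb.py"]:
        subprocess.run([sys.executable, demo], cwd=directory, check=True)
```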

For reference, we have included the code used to generate each of our models. If you wish to generate your own models, edit the parameters as desired within `attacker-defender-models.py`, `dae_generator.py`, `classifier_generator.py`, and `model_generator.py`, and then run each script in that order.
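The regeneration sequence can likewise be scripted; the sketch below (hypothetical, and assuming it is run from the directory containing the four scripts) simply executes them in the required order:

```python
# Hypothetical driver: regenerate all models in the order given above.
import subprocess
import sys

for script in ["attacker-defender-models.py", "dae_generator.py",
               "classifier_generator.py", "model_generator.py"]:
    subprocess.run([sys.executable, script], check=True)
```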