VAE-Adversarial-Defense

Code to reproduce the results of the arXiv preprint "Adversarial Defense of Image Classification Using a Variational Auto-Encoder".

Requirements

  • Python 3.6
  • TensorFlow and Keras
  • CleverHans
  • scikit-learn
  • SciPy, imageio, matplotlib
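
All of the above are available from PyPI; assuming pip, an environment can be set up along these lines (this README does not pin versions, so a TensorFlow 1.x-era install may be needed for compatibility with CleverHans):

    pip install tensorflow keras cleverhans scikit-learn scipy imageio matplotlib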

MNIST and CIFAR-10

The MNIST dataset should be downloaded by the user and stored under the data directory in .mat format.
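
As a minimal sketch of loading such a file with SciPy (the file name and the variable keys inside the .mat file are assumptions; inspect the keys of your download):

    # Sketch: load MNIST from a .mat file under data/; key names are assumed.
    import scipy.io

    mat = scipy.io.loadmat('data/mnist.mat')
    print(mat.keys())                             # inspect the stored variable names
    images = mat['X'].astype('float32') / 255.0   # e.g. (70000, 784) pixel rows
    labels = mat['y'].ravel()                     # e.g. (70000,) digit labels
    images = images.reshape(-1, 28, 28, 1)        # NHWC layout for a Keras CNN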

The CIFAR-10 dataset can be downloaded by running the provided script.
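
The standard CIFAR-10 python version unpacks into pickled batches; a minimal sketch of reading one of them (paths follow the official archive layout):

    # Sketch: read one pickled CIFAR-10 batch (official python-version layout).
    import pickle
    import numpy as np

    with open('data/cifar-10-batches-py/data_batch_1', 'rb') as f:
        batch = pickle.load(f, encoding='bytes')    # keys are bytes in Python 3

    data = batch[b'data']                           # (10000, 3072) uint8 rows
    labels = np.array(batch[b'labels'])             # (10000,) class indices
    # each row stores the R, G, B planes of a 32x32 image; reshape to NHWC
    images = data.reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)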

To train the classifiers, run

    python train_classifier.py
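
The architectures themselves are defined in train_classifier.py; purely as an illustration of the kind of Keras classifier being trained (the layer sizes here are assumptions, not the repository's exact model):

    # Illustrative MNIST CNN; not the repository's exact architecture.
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

    model = Sequential([
        Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation='relu'),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(128, activation='relu'),
        Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    # model.fit(images, labels, epochs=10, batch_size=128)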

To train the VAEs, run

    python train_vae.py
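
A variational auto-encoder learns a probabilistic encoder q(z|x) and decoder p(x|z), trained with a reconstruction loss plus a KL penalty that keeps the latent code close to a unit Gaussian. As a minimal sketch of such a model in Keras (a dense VAE on flattened MNIST; the repository's model in train_vae.py may differ):

    # Minimal dense VAE sketch; the repository's architecture may differ.
    import keras.backend as K
    from keras.layers import Input, Dense, Lambda
    from keras.models import Model

    original_dim, latent_dim = 784, 2

    x = Input(shape=(original_dim,))
    h = Dense(256, activation='relu')(x)
    z_mean = Dense(latent_dim)(h)
    z_log_var = Dense(latent_dim)(h)

    def sample(args):
        mean, log_var = args
        eps = K.random_normal(shape=K.shape(mean))
        return mean + K.exp(0.5 * log_var) * eps    # reparameterization trick

    z = Lambda(sample)([z_mean, z_log_var])
    decoded = Dense(256, activation='relu')(z)
    reconstruction = Dense(original_dim, activation='sigmoid')(decoded)

    vae = Model(x, reconstruction)
    # per-pixel reconstruction term plus KL divergence to the N(0, I) prior
    recon_loss = original_dim * K.mean(K.binary_crossentropy(x, reconstruction), axis=-1)
    kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    vae.add_loss(K.mean(recon_loss + kl_loss))
    vae.compile(optimizer='adam')
    # vae.fit(x_train, epochs=20, batch_size=128)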

To evaluate the attacks and defenses, run

    python evaluate_mnist.py

and

    python evaluate_cifar.py
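
The defense evaluated here passes each (possibly adversarial) input through the trained VAE and classifies the reconstruction. A rough sketch of that loop using the CleverHans FGSM attack (CleverHans 2.x/3.x API; classifier, vae, x_test, and y_test are assumed to be loaded from the steps above):

    # Sketch: craft FGSM examples, then classify their VAE reconstructions.
    import numpy as np
    import keras.backend as K
    from cleverhans.attacks import FastGradientMethod
    from cleverhans.utils_keras import KerasModelWrapper

    sess = K.get_session()
    fgsm = FastGradientMethod(KerasModelWrapper(classifier), sess=sess)
    x_adv = fgsm.generate_np(x_test, eps=0.1, clip_min=0.0, clip_max=1.0)

    # Defense: reconstruct through the VAE before classifying.
    x_recon = vae.predict(x_adv)
    acc_adv = np.mean(classifier.predict(x_adv).argmax(axis=1) == y_test)
    acc_def = np.mean(classifier.predict(x_recon).argmax(axis=1) == y_test)
    print('undefended: %.3f  defended: %.3f' % (acc_adv, acc_def))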

NIPS 2017 Defense Against Adversarial Attacks Dataset

Download the 1,000-image dataset and the pretrained Inception-V3 model checkpoint from the Kaggle competition.

Store the images in a directory named images and the Inception-V3 checkpoint in a directory named inception-v3.
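
The expected layout next to the scripts is then roughly as follows (the checkpoint file name shown matches the one distributed with the competition toolkit, but verify against your download):

    images/
        <the 1000 competition .png images>
    inception-v3/
        inception_v3.ckpt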

To train the VAE models on the images, run

    python train_vae.py

To perform FGSM and I-FGSM (iterative FGSM) attacks on the images, run

    python attack.py

The attacked images will be stored in directories with names such as fgsm_images_0.005, where 0.005 is the value of the attack hyperparameter epsilon.
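
As an illustration of that convention, the saving step presumably looks something like this sketch (variable names assumed, not the repository's exact code):

    # Sketch of the epsilon-suffixed output directory convention (assumed).
    import os
    import imageio

    eps = 0.005
    out_dir = 'fgsm_images_%s' % eps                # -> fgsm_images_0.005
    os.makedirs(out_dir, exist_ok=True)
    for name, img in zip(filenames, adv_images):    # assumed variables
        imageio.imwrite(os.path.join(out_dir, name), img)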

To evaluate the defense on the attacked images, run

    python evaluate.py

The results will be saved to a CSV file.
