Adversarial Examples Are a Natural Consequence of Test Error in Noise

This directory contains code for generating the graphs from the paper, which relate a model's performance under Gaussian noise to the distance to its nearest error.
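
As a rough illustration of the first quantity being measured (not the repository's actual pipeline), the sketch below estimates a model's error rate under additive Gaussian noise. Here `model_predict`, the noise scale `sigma`, and the data arrays are hypothetical placeholders:

```python
import numpy as np

def error_rate_in_noise(model_predict, images, labels, sigma=0.1, n_samples=10, seed=0):
    """Estimate test error under additive Gaussian noise.

    model_predict: callable mapping a batch of images to predicted labels
                   (hypothetical placeholder for the model under test).
    images, labels: clean test data; images are assumed to lie in [0, 1].
    sigma: standard deviation of the Gaussian noise.
    """
    rng = np.random.default_rng(seed)
    errors = 0
    total = 0
    for _ in range(n_samples):
        noisy = images + rng.normal(0.0, sigma, size=images.shape)
        noisy = np.clip(noisy, 0.0, 1.0)  # keep pixels in the valid range
        preds = model_predict(noisy)
        errors += np.sum(preds != labels)
        total += labels.size
    return errors / total
```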

The CIFAR version of this pipeline is meant to be used with the models trained by Madry et al. for their paper "Towards Deep Learning Models Resistant to Adversarial Attacks". Those models can be downloaded by running the fetch script at https://github.com/MadryLab/cifar10_challenge/blob/master/fetch_model.py.

Example usage:

```sh
python -m adv_corr_robust.plots --batch_size=1 --num_batches=1 --model=cifar \
  --model_file=/path/to/madry_cifar_model/checkpoint-70000 \
  --save_dir=/path/to/save_dir --data_dir=/path/to/cifar_data
```
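
For intuition about the other axis of the paper's graphs, here is a deliberately crude sketch of estimating the distance to the nearest error along a single random direction by bisection. Again, `model_predict` and the data are hypothetical placeholders, and the repository's actual measurement may differ:

```python
import numpy as np

def distance_to_error_along_direction(model_predict, image, label,
                                      max_radius=10.0, steps=20, seed=0):
    """Bisect along one random direction for the smallest perturbation
    radius at which the model's prediction no longer matches the label.
    Returns np.inf if no error is found within max_radius."""
    rng = np.random.default_rng(seed)
    direction = rng.normal(size=image.shape)
    direction /= np.linalg.norm(direction)  # unit-norm direction

    # Check that an error actually occurs at the outer radius.
    if model_predict(image[None] + max_radius * direction[None])[0] == label:
        return np.inf

    lo, hi = 0.0, max_radius
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        pred = model_predict(image[None] + mid * direction[None])[0]
        if pred == label:
            lo = mid   # still correct: move outward
        else:
            hi = mid   # already an error: move inward
    return hi
```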