# Guided-Denoise

The code in this repository demonstrates that *Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser* (Liao et al. 2018) is ineffective in the white-box threat model.

With an L-infinity perturbation of 4/255, we generate targeted adversarial examples with a 100% success rate.

See our note for more context and details: https://arxiv.org/abs/1804.03286
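A standard way to attack this kind of defense in the white-box setting is to treat the denoiser plus classifier as a single differentiable network and run targeted projected gradient descent (PGD) inside the L-infinity ball. The snippet below is a minimal PyTorch-style sketch of that idea, not necessarily the exact attack in this repository: `model`, the step count, and the step size are illustrative assumptions, and only the 4/255 L-infinity bound comes from the claim above.

```python
# Minimal sketch of a targeted L-infinity PGD attack, assuming `model` is the
# end-to-end differentiable network (denoiser + classifier) and inputs lie in
# [0, 1]. Step count and step size are illustrative, not this repo's settings.
import torch
import torch.nn.functional as F

def targeted_pgd(model, x, target, eps=4/255, steps=100, step_size=1/255):
    """Return examples within an eps L-infinity ball around x that the
    model classifies as `target`."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # minimize the cross-entropy with respect to the *target* label
        loss = F.cross_entropy(model(x_adv), target)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - step_size * grad.sign()
            # project back into the eps-ball and the valid pixel range
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```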

## Pretty pictures

Obligatory picture of a sample of adversarial examples against this defense.

![sample of adversarial examples against HGD](hgd.jpg)

## Citation

```
@unpublished{cvpr2018breaks,
  author = {Anish Athalye and Nicholas Carlini},
  title = {On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses},
  year = {2018},
  url = {https://arxiv.org/abs/1804.03286},
}
```

## robustml evaluation

Run with:

```bash
cd nips_deploy
python robustml_attack.py --imagenet-path <path>
```
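For orientation, a robustml attack script is typically built around the package's `Model` and `Attack` interfaces and driven by `robustml.evaluate.evaluate`. The sketch below shows that overall shape under the assumption that the robustml API matches the public package; `HypotheticalDefense` and `IdentityAttack` are placeholders, not the wrappers and attack actually used in `nips_deploy/`.

```python
# Rough sketch of a robustml-style evaluation harness. The robustml API usage
# (Model/Attack base classes, ImageNet provider, evaluate.evaluate) is assumed
# from the public robustml package; the classes below are placeholders.
import argparse
import robustml

class HypotheticalDefense(robustml.model.Model):
    """Placeholder for the HGD model wrapper (denoiser + classifier)."""
    def __init__(self):
        self._dataset = robustml.dataset.ImageNet((299, 299, 3))
        self._threat_model = robustml.threat_model.Linf(epsilon=4.0/255)

    @property
    def dataset(self):
        return self._dataset

    @property
    def threat_model(self):
        return self._threat_model

    def classify(self, x):
        return 0  # the real wrapper runs x through the denoiser and classifier

class IdentityAttack(robustml.attack.Attack):
    """Placeholder: the real attack runs targeted PGD on the full model."""
    def run(self, x, y, target):
        return x

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--imagenet-path', required=True)
    args = parser.parse_args()

    model = HypotheticalDefense()
    provider = robustml.provider.ImageNet(args.imagenet_path, model.dataset.shape)
    success_rate = robustml.evaluate.evaluate(
        model, IdentityAttack(), provider, start=0, end=100)
    print('attack success rate: %.1f%%' % (100 * success_rate))
```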

## Credits

Thanks to Dimitris Tsipras for writing the robustml model wrapper.
