Deflecting Adversarial Attacks with Pixel Deflection

The code in this repository demonstrates that Deflecting Adversarial Attacks with Pixel Deflection (Prakash et al. 2018) is ineffective in the white-box threat model.
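
For intuition, the core of the defense randomly replaces pixels with other pixels drawn from a small local neighborhood before classification. Below is a minimal sketch of that deflection step only; the full defense in Prakash et al. also uses class activation maps to choose where to deflect and applies wavelet denoising afterward, and the window size and deflection count here are illustrative rather than the paper's settings.

```python
# Minimal, simplified sketch of the pixel-deflection transform described by
# Prakash et al. (deflection step only; the full defense also adds activation-
# map weighting and wavelet denoising). Hyperparameters are illustrative.
import numpy as np

def pixel_deflection(img, num_deflections=200, window=10, rng=None):
    """Randomly replace pixels with other pixels from a small local window.

    img: HxWxC array. Returns a deflected copy.
    """
    rng = np.random.default_rng() if rng is None else rng
    img = img.copy()
    H, W = img.shape[:2]
    for _ in range(num_deflections):
        # Pick a random target pixel.
        x, y = rng.integers(0, H), rng.integers(0, W)
        # Pick a random source pixel within a local window, clipped to bounds.
        dx, dy = rng.integers(-window, window + 1, size=2)
        sx = int(np.clip(x + dx, 0, H - 1))
        sy = int(np.clip(y + dy, 0, W - 1))
        img[x, y] = img[sx, sy]
    return img
```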

With an L-infinity perturbation of 4/255, we generate targeted adversarial examples with a 97% success rate and can reduce classifier accuracy to 0%.
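
To illustrate the style of white-box attack involved (this is not the repository's exact code; `model`, the `defend` preprocessing function, the framework, and the hyperparameters below are all assumptions for the sketch), here is a minimal targeted PGD loop that uses BPDA: the defense is applied on the forward pass but treated as the identity on the backward pass.

```python
# Illustrative sketch only: targeted PGD under an L-infinity bound of 4/255,
# using BPDA (backward pass differentiable approximation) to step through the
# non-differentiable pixel-deflection preprocessing. `model`, `defend`, `x`,
# and `target` are assumed to be supplied by the caller.
import torch
import torch.nn.functional as F

def targeted_pgd_bpda(model, defend, x, target, eps=4/255, step=1/255, iters=100):
    """x: input batch in [0, 1]; target: desired labels; defend: preprocessing fn."""
    x_adv = x.clone()
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        # Forward pass goes through the defense; gradients flow as if the
        # defense were the identity (BPDA).
        x_def = x_adv + (defend(x_adv.detach()) - x_adv.detach())
        loss = F.cross_entropy(model(x_def), target)
        grad, = torch.autograd.grad(loss, x_adv)
        # Targeted attack: descend the loss toward the target class.
        x_adv = x_adv - step * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv.detach()
```

Because pixel deflection is randomized, a practical attack would typically also average gradients over several random draws of the defense (EOT); see the note and the code in this repository for the actual attack details.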

See our note for more context and details.

Pretty pictures

Obligatory picture of a sample of adversarial examples against this defense.

Citation

@unpublished{cvpr2018breaks,
  author = {},
  title = {},
  year = {2018},
  url = {https://arxiv.org/abs/TODO},
}