Learn2Perturb: a noise injection method for adversarial robustness

(PyTorch 1.0)


This repository contains an implementation corresponding to our CVPR 2020 paper: "Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness". A brief presentation of our work is available at this YouTube link.

If you find our work useful, please cite it as follows:

@inproceedings{jeddi2020learn2perturb,
  title={Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness},
  author={Jeddi, Ahmadreza and Shafiee, Mohammad Javad and Karg, Michelle and Scharfenberger, Christian and Wong, Alexander},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={1241-1250},
  year={2020}
}

This repository includes PyTorch implementations of:

  • Adversarial attacks (minimal sketches follow this list)
    • FGSM
    • PGD
    • EOT (Expectation Over Transformations [1])
  • Baseline models used in the experiments
  • Learn2Perturb modules (sketched after this list)
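
As a minimal sketch of the gradient-based attacks, the snippet below implements L∞ PGD in PyTorch, with FGSM recovered as the single-step special case. The hyperparameter defaults (eps, alpha, steps) are illustrative assumptions, not the paper's evaluation settings.

```python
# Minimal L-inf PGD sketch; FGSM is the steps=1, alpha=eps,
# random_start=False special case. Defaults are illustrative only.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10, random_start=True):
    x_adv = x.clone().detach()
    if random_start:
        x_adv = (x_adv + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project onto the eps-ball
            x_adv = x_adv.clamp(0, 1)                  # keep valid pixel range
    return x_adv.detach()
```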
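
Because the perturbation-injection modules stay active at inference, a single backward pass yields a noisy gradient; EOT [1] instead averages the gradient over the model's internal randomness. A minimal sketch, assuming a stochastic model that resamples its noise on every forward pass (the sample count is an illustrative assumption):

```python
# Minimal EOT gradient estimate for a stochastic model [1]. Plug the
# result into the PGD step above in place of the single-pass gradient.
import torch
import torch.nn.functional as F

def eot_gradient(model, x_adv, y, samples=30):
    grad = torch.zeros_like(x_adv)
    for _ in range(samples):
        x_in = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_in), y)  # fresh noise each forward pass
        grad += torch.autograd.grad(loss, x_in)[0]
    return grad / samples
```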
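
And a minimal sketch of a perturbation-injection module in the spirit of Learn2Perturb: zero-mean Gaussian noise with a learnable scale added to intermediate feature maps. The per-channel granularity and the initial scale here are assumptions, and the paper's alternating training of the noise parameters and network weights is not shown.

```python
# Minimal perturbation-injection sketch (not the repository's exact module):
# adds z ~ N(0, sigma^2) to feature maps, with sigma learnable. The
# per-channel granularity and init_std are illustrative assumptions.
import torch
import torch.nn as nn

class PerturbInject(nn.Module):
    def __init__(self, channels, init_std=0.01):
        super().__init__()
        self.sigma = nn.Parameter(torch.full((1, channels, 1, 1), init_std))

    def forward(self, x):
        # noise stays on at inference; the defense is a stochastic network
        return x + torch.randn_like(x) * self.sigma.abs()
```

Such modules would typically be interleaved with the convolutional blocks of the baseline models (e.g., after residual blocks); see the paper for their exact placement and for how the noise parameters are regularized during training.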

PyTorch implementations for the other adversarial attacks used in this work: C&W and the few-pixel attack

References

  • [1] Athalye, A., Engstrom, L., Ilyas, A., and Kwok, K. Synthesizing robust adversarial examples. arXiv preprint arXiv:1707.07397, 2017.
