PGD-attack-for-randomized-smoothing

Attacking randomized smoothing aims to find the perturbation that most effectively fools the subsequent noising and voting operations of the smoothed classifier.
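The noising and voting operations can be sketched as follows. This is a minimal Monte Carlo version of smoothed prediction, not this repository's code; the function name `smooth_predict` and the default `sigma`/`n` values are illustrative assumptions.

```python
import torch


def smooth_predict(model, x, sigma=0.25, n=100):
    """Predict with a randomized-smoothing classifier:
    add Gaussian noise to the input n times, classify each noisy copy,
    and return the majority-vote class per example."""
    with torch.no_grad():
        votes = None
        for _ in range(n):
            logits = model(x + torch.randn_like(x) * sigma)  # noising
            preds = logits.argmax(dim=1)
            one_hot = torch.nn.functional.one_hot(preds, logits.shape[1])
            votes = one_hot if votes is None else votes + one_hot  # voting
        return votes.argmax(dim=1)
```

An attack on the smoothed classifier must therefore fool this vote, not just a single forward pass of `model`.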

Both the L2 and Linf implementations are based on the idea of Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers (SmoothAdv).

This code is based on these resources:

  1. smoothing-adversarial
  2. torchattacks

Note: the EOT (Expectation over Transformation) strategy is applied to the PGD attack because inference involves randomness. Why?

Because the attack becomes a two-stage stochastic programming problem, whose standard solution is Sample Average Approximation (SAA): approximate the expected loss over the noise distribution by an average over Monte Carlo noise samples, and run PGD on that average.
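A minimal sketch of an L2 PGD attack with EOT/SAA in this spirit (not this repository's exact implementation; the function name, step sizes, and noise counts are illustrative assumptions, and the input is assumed to be a 4D image batch):

```python
import torch
import torch.nn as nn


def pgd_eot_l2(model, x, y, eps=0.5, alpha=0.25, steps=10, sigma=0.25, m=8):
    """L2 PGD attack with EOT: at each step, estimate the expected loss
    over Gaussian noise by an m-sample average (SAA), then take a
    normalized gradient step and project onto the L2 ball of radius eps."""
    delta = torch.zeros_like(x, requires_grad=True)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        # Sample Average Approximation of E_noise[ loss(f(x + delta + noise), y) ]
        avg_loss = 0.0
        for _ in range(m):
            noise = torch.randn_like(x) * sigma
            avg_loss = avg_loss + loss_fn(model(x + delta + noise), y) / m
        avg_loss.backward()
        with torch.no_grad():
            g = delta.grad
            # normalized L2 gradient ascent step
            g_norm = g.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta += alpha * g / g_norm
            # project delta back onto the L2 ball of radius eps
            d_norm = delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
            delta *= (eps / d_norm).clamp(max=1.0)
            delta.grad.zero_()
    return (x + delta).detach()
```

Averaging over `m` noise draws before each gradient step is what makes the perturbation effective against the noising-and-voting inference rather than against a single deterministic forward pass.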

Requirements

  • torchattacks
