
Adversarial Robustness by Design through Analog Computing and Synthetic Gradients


We study a defense mechanism against adversarial attacks that combines an Optical Processing Unit (OPU) with the Direct Feedback Alignment (DFA) training algorithm. We show how this defense provides robustness against white-box attacks, transfer attacks, and black-box attacks. Finally, we provide an ablation study showing which part of our defense mechanism is responsible for the robustness against each kind of attack.
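To make the training scheme concrete, below is a minimal NumPy sketch of Direct Feedback Alignment on a toy two-layer MLP. The fixed random feedback matrix `B1` stands in for the analog random projection that the OPU performs in our setting; all names, layer sizes, and hyperparameters here are illustrative and are not taken from this repository's code.

```python
# Minimal DFA sketch: the output error is sent to the hidden layer
# through a fixed random matrix instead of being backpropagated.
import numpy as np

rng = np.random.default_rng(0)

# Toy layer sizes: input -> hidden -> output.
n_in, n_hid, n_out = 784, 256, 10
W1 = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_hid, n_in))
W2 = rng.normal(0.0, 1.0 / np.sqrt(n_hid), size=(n_out, n_hid))
# Fixed random feedback matrix: never trained. In our setting this
# projection is carried out in analog by the OPU, so its entries are
# not directly readable by an attacker.
B1 = rng.normal(0.0, 1.0 / np.sqrt(n_out), size=(n_hid, n_out))

def dfa_step(x, y, lr=1e-2):
    """One DFA update on a single example (x: (n_in,), y: one-hot (n_out,))."""
    global W1, W2
    # Forward pass.
    a1 = W1 @ x
    h1 = np.maximum(a1, 0.0)           # ReLU
    logits = W2 @ h1
    p = np.exp(logits - logits.max())
    p /= p.sum()                       # softmax probabilities
    e = p - y                          # softmax cross-entropy error
    # DFA: project the output error straight to the hidden layer through
    # the fixed random matrix B1, instead of backpropagating through W2.T.
    delta1 = (B1 @ e) * (a1 > 0.0)     # modulate by the ReLU derivative
    W2 = W2 - lr * np.outer(e, h1)
    W1 = W1 - lr * np.outer(delta1, x)

# Illustrative call on random data.
x = rng.normal(size=n_in)
y = np.eye(n_out)[3]
dfa_step(x, y)
```

Because the feedback path uses a fixed random projection rather than the transpose of the forward weights, exact end-to-end gradients of the trained model are not what the training signal follows, which is the property the defense above builds on.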

Requirements

  • A requirements.txt file at the root of this repository lists the packages required for all of our experiments; it can be installed as shown below.
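For example, in a fresh Python environment (a standard pip invocation, not a command specific to this repository):

```bash
pip install -r requirements.txt
```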

Reproducing our results

  • Run ./experiments to reproduce our results.
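From the repository root, with the dependencies above installed (assuming the script has execute permission):

```bash
./experiments
```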

Access to Optical Processing Units

To request access to LightOn Cloud and try our photonic co-processor, please visit: https://cloud.lighton.ai/

For researchers, we also offer a LightOn Cloud for Research program; please visit https://cloud.lighton.ai/lighton-research/ for more information.

Citation

If you find this code and our findings useful in your research, please consider citing:
