We study a defense mechanism against adversarial attacks that combines an Optical Processing Unit (OPU) with the Direct Feedback Alignment (DFA) training algorithm. We show how such a defense provides robustness against white-box attacks, transfer attacks, and black-box attacks. Finally, we provide an ablation study showing which part of our defense mechanism is responsible for providing robustness against each kind of attack.
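For readers unfamiliar with Direct Feedback Alignment, the snippet below is a minimal NumPy sketch of a DFA update for a small MLP. It is illustrative only: the layer sizes, the `dfa_step` helper, and the use of plain NumPy random matrices are assumptions, and the sketch does not model where the optical co-processor enters the actual pipeline of this repository.

```python
# Minimal sketch of Direct Feedback Alignment (DFA) on a 2-hidden-layer MLP.
# Layer sizes and helper names are hypothetical; the repository's defense
# additionally involves an optical co-processor, which is not modeled here.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out = 784, 256, 10

# Trainable forward weights
W1 = rng.normal(0, 0.05, (d_in, d_h))
W2 = rng.normal(0, 0.05, (d_h, d_h))
W3 = rng.normal(0, 0.05, (d_h, d_out))

# Fixed random feedback matrices (never trained)
B1 = rng.normal(0, 0.05, (d_out, d_h))
B2 = rng.normal(0, 0.05, (d_out, d_h))

def relu(x):
    return np.maximum(x, 0.0)

def dfa_step(x, y_onehot, lr=1e-3):
    """One DFA update on a mini-batch (x: [B, d_in], y_onehot: [B, d_out])."""
    global W1, W2, W3
    # Forward pass
    a1 = x @ W1;  h1 = relu(a1)
    a2 = h1 @ W2; h2 = relu(a2)
    logits = h2 @ W3
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)

    # Global error at the output (softmax + cross-entropy gradient)
    e = probs - y_onehot                 # [B, d_out]

    # DFA: project the global error through the fixed random matrices
    # instead of backpropagating it through W3 and W2.
    delta2 = (e @ B2) * (a2 > 0)         # [B, d_h]
    delta1 = (e @ B1) * (a1 > 0)         # [B, d_h]

    # Gradient-descent updates
    W3 -= lr * h2.T @ e
    W2 -= lr * h1.T @ delta2
    W1 -= lr * x.T @ delta1
```

The key point is that the error signal reaching each hidden layer is a random projection of the output error, so no symmetric weight transport (and no end-to-end differentiable path) is required for training.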
- A `requirements.txt` file is available at the root of this repository, specifying the required packages for all of our experiments;
- Run `./experiments` to reproduce the results.
To request access to LightOn Cloud and try our photonic co-processor, please visit: https://cloud.lighton.ai/
For researchers, we also have a LightOn Cloud for Research program, please visit https://cloud.lighton.ai/lighton-research/ for more information.
If you find this code and our findings useful in your research, please consider citing: