Experimental Adversarial Attack notebooks on CV models
Notebooks exploring image manipulation to trick neural networks.
Evaluating CNN robustness against various adversarial attacks, including the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD).
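For reference, here is a minimal PyTorch sketch of FGSM (Goodfellow et al., 2015), assuming inputs are scaled to [0, 1] and the model returns raw logits; the function name and the default `epsilon` are illustrative choices, not taken from the notebooks above:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: perturb each pixel by epsilon in the sign
    of the loss gradient with respect to the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in the valid range
```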
Data generation and model training notebooks for the paper "Architectural Resilience to Foreground-and-Background Adversarial Noise".
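The paper's exact data pipeline is not reproduced here, but the basic idea of confining adversarial noise to the foreground or background can be sketched with a binary segmentation mask. The helper below is a hypothetical illustration: `masked_perturbation`, the mask convention, and the use of a precomputed input gradient (e.g. from the FGSM sketch above) are all assumptions:

```python
import torch

def masked_perturbation(x, grad, mask, epsilon=8 / 255):
    """Apply a sign-gradient perturbation only inside a masked region.

    mask is a {0, 1} tensor broadcastable to x: 1 = foreground pixels
    (perturbed), 0 = background pixels (left clean). Invert the mask
    to attack only the background instead.
    """
    noise = epsilon * grad.sign() * mask
    return (x + noise).clamp(0, 1)
```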
Notebooks implementing different adversarial attack approaches in Python and PyTorch.
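One such approach is PGD (Madry et al., 2018), essentially iterated FGSM with a random start and projection back onto the L-infinity ball around the clean input. A minimal sketch, again assuming [0, 1] inputs and illustrative defaults for `alpha` and `steps`:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """L-infinity PGD: random start, then repeated sign-gradient steps
    projected back into the epsilon-ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)  # project
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```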
Code for our recently published attack FDA (Feature Disruptive Attack). Colab notebook: https://colab.research.google.com/drive/1WhkKCrzFq5b7SNrbLUfdLVo5-WK5mLJh
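To convey the flavor of feature-space attacks, here is a simplified sketch that pushes an intermediate layer's activations away from their clean values via a forward hook. This is not the paper's FDA objective, only a generic feature-disruption stand-in; `layer` is any submodule of the model (e.g. `model.layer3` on a torchvision ResNet), and all names and defaults are assumptions:

```python
import torch

def feature_disruption_attack(model, layer, x, epsilon=8 / 255,
                              alpha=2 / 255, steps=10):
    """Maximize the L2 drift of one layer's activations from their
    clean values, under an L-infinity budget on the input."""
    feats = {}
    handle = layer.register_forward_hook(
        lambda mod, inp, out: feats.__setitem__("out", out))
    with torch.no_grad():
        model(x)
        clean = feats["out"].detach()  # reference activations
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        model(x_adv)  # hook refreshes feats["out"]
        loss = (feats["out"] - clean).pow(2).mean()  # ascend on drift
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
            x_adv = x_adv.clamp(0, 1)
    handle.remove()
    return x_adv.detach()
```

Because the objective never uses labels, attacks of this kind can degrade any downstream task that consumes the disrupted features, not just classification.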
Contains notebooks for the PAR tutorial at CVPR 2021.