A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.
A TensorFlow adversarial machine learning attack toolkit that adds perturbations to cause image recognition models to misclassify images
Implementation of Papers on Adversarial Examples
ECE C147: Neural Networks & Deep Learning. Repository for "Developing Robust Networks to Defend Against Adversarial Examples". Implementing adversarial data augmentation on CNNs and RNNs.
SHIELD: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression
In this work, we extend FGSM by proposing multistep adversarial perturbation (MSAP) procedures to study recommenders’ robustness under stronger attacks. Keeping the perturbation magnitude fixed, we show that MSAP is far more harmful than FGSM in corrupting the recommendation performance of BPR-MF.
Homework of Security and Privacy of Machine Learning (SPML Lectured by Shang-Tse Chen at NTU)
Paddle-Adversarial-Toolbox (PAT) is a Python library for Deep Learning Security based on PaddlePaddle.
Machine Learning (2019 Spring)
Implementation of gradient-based adversarial attacks (FGSM, MI-FGSM, PGD)
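FGSM, the one-step attack these repositories build on, perturbs the input in the direction of the sign of the loss gradient. A minimal PyTorch sketch (the function name and default epsilon are illustrative, not taken from any listed repo):

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps=0.03):
    """One-step FGSM: move x by eps in the sign of the loss gradient,
    then clip back to the valid [0, 1] image range."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

Iterative variants such as I-FGSM and MI-FGSM repeat this step with a smaller step size (MI-FGSM additionally accumulating a momentum term on the gradient).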
The first real-world adversarial attack on the MTCNN face detection system to date
This project tests multiple machine learning algorithms for detecting adversarial attacks in multi-agent reinforcement learning settings. Baselines were used to compare the performance of a proposed ensemble model. Then, using FGSM, we re-attacked the ensemble detection model with perturbed observations. Read more in the PDF titled Final…
FGSM attack PyTorch module for semantic segmentation networks, with examples provided for DeepLab V3.
Detection by Attack: Detecting Adversarial Samples by Undercover Attack
Adversarial patch trained with I-FGSM to attack the MTCNN face detection system
WideResNet implementation on MNIST dataset. FGSM and PGD adversarial attacks on standard training, PGD adversarial training, and Feature Scattering adversarial training.
Implementation of adversarial training under the fast gradient sign method (FGSM), projected gradient descent (PGD), and CW attacks using Wide-ResNet-28-10 on CIFAR-10. The sample code remains reusable when the model or dataset changes.
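PGD adversarial training, used by several repositories above, runs an inner attack loop to find worst-case perturbations and then trains on those examples. A minimal sketch, assuming a standard PyTorch model and optimizer (function names and hyperparameters here are illustrative):

```python
import torch
import torch.nn as nn

def pgd(model, x, y, eps, alpha, steps):
    """PGD: iterated FGSM steps, projected back onto the L-inf eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()        # stay in valid image range
    return x_adv

def adversarial_train_step(model, optimizer, x, y, eps=0.03):
    """One training step on PGD adversarial examples (Madry-style training)."""
    model.eval()
    x_adv = pgd(model, x, y, eps=eps, alpha=eps / 4, steps=7)
    model.train()
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The outer loop simply calls `adversarial_train_step` for each minibatch; evaluating robustness then means re-running `pgd` against the trained model.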
Repository contains a pre-trained CNN model in PyTorch reaching 89% accuracy on the Fashion-MNIST dataset. An adversarial attack was implemented against the model; results are below.