Label Smoothing and Adversarial Robustness
Updated Nov 4, 2020 - Jupyter Notebook
ECE C147: Neural Networks & Deep Learning. Repository for "Developing Robust Networks to Defend Against Adversarial Examples". Implementing adversarial data augmentation on CNNs and RNNs.
This repository contains implementations of two adversarial attack methods, FGSM and I-FGSM, and one input-transformation defense mechanism evaluated against both attacks on the ImageNet dataset.
FGSM (Fast Gradient Sign Method)
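For reference, the core FGSM update is x_adv = x + ε · sign(∇_x L(x, y)): perturb each input dimension by ε in the direction that increases the loss. Below is a minimal NumPy sketch on a toy logistic-regression model whose input gradient has a closed form; the names (`fgsm_attack`, `w`, `b`, `eps`) are illustrative and not taken from any of the repositories listed here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One-step FGSM against a toy logistic-regression model (illustrative).

    For binary cross-entropy loss, the gradient of the loss with respect
    to the input x is (sigmoid(w.x + b) - y) * w, so no autograd is needed.
    """
    p = sigmoid(np.dot(w, x) + b)     # model's predicted probability
    grad = (p - y) * w                # closed-form dLoss/dx
    x_adv = x + eps * np.sign(grad)   # step of size eps along the gradient sign
    return np.clip(x_adv, 0.0, 1.0)   # keep the result in the valid pixel range
```

By construction the perturbation is bounded by ε in the L∞ norm, which is why ε directly controls how visible the attack is; the iterative variant (I-FGSM) simply repeats this step with a smaller step size, re-projecting into the ε-ball after each iteration.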
Experimental Adversarial Attack notebooks on CV models
using adversarial attacks to confuse deep-chicken-terminator 🛡️ 🐔
A Comprehensive Study on Cloud-Based Model Interpretability, Accountability, and Privacy in Machine Learning with Resilience to Adversarial Attacks
Adversarially-robust Image Classifier
Implementation of FGSM (Fast Gradient Sign Method) attack on fine-tuned MobileNet architecture trained for flood detection in images.
Fast Gradient Sign Method Adversarial Attack on Digit Recognition Model
FGSM attack PyTorch module for semantic segmentation networks, with examples provided for DeepLab V3.
Machine Learning (2019 Spring)
Adversarial attacks to SRNet
A TensorFlow adversarial machine learning attack toolkit that adds perturbations to cause image recognition models to misclassify an image.
Simple examples in MXNet