
Summary of papers in Adversarial Settings

A comprehensive summary of papers on adversarial machine learning, organized by topic.


Background of Deep Learning Security and Applications of Adversarial Attacks

✅ 000.Machine Learning in Adversarial Settings-PPT

✅ 0603.Can Machine Learning Be Secure

✅ 1100.Adversarial Machine Learning

✅ 1610.Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition

✅ 1707.NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles

✅ 1710.Standard detectors aren't (currently) fooled by physical adversarial stop signs


Existence of adversarial examples

✅ 1500.Fundamental limits on adversarial robustness

✅ 1503.Explaining and Harnessing Adversarial Examples

✅ 1608.A Boundary Tilting Persepective on the Phenomenon of Adversarial Examples

✅ 1608.Robustness of classifiers: from adversarial to random noise

✅ 1705.Analysis of universal adversarial perturbations

✅ 1801.High Dimensional Spaces, Deep Learning and Adversarial Examples


Attack Algorithms

✅ 1400.Evasion attacks against machine learning at test time

✅ 1402.Intriguing properties of neural networks

✅ 1503.Explaining and Harnessing Adversarial Examples

✅ 1504.Deep neural networks are easily fooled: High confidence predictions for unrecognizable images

✅ 1507.Distributional Smoothing with Virtual Adversarial Training

✅ 1507.Manitest-Are classifiers really invariant

✅ 1510.Exploring the space of adversarial images

✅ 1510.The Limitations of Deep Learning in Adversarial Settings

✅ 1601.Adversarial Perturbations Against Deep Neural Networks for Malware Classification

✅ 1602.Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples

✅ 1605.Transferability in Machine Learning from Phenomena to Black-Box Attacks using Adversarial Samples

✅ 1607.DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks

✅ 1608.Stealing Machine Learning Models via Prediction APIs

✅ 1610.DeepDGA-Adversarially Tuned Domain Generation and Detection

✅ 1611.Delving into Transferable Adversarial Examples and Black-box Attacks

✅ 1612.Simple Black-Box Adversarial Perturbations for Deep Networks

✅ 1700.Concrete Problems for Autonomous Vehicle Safety: Advantages of Bayesian Deep Learning

✅ 1701.Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks

✅ 1702.Adversarial Attacks on Neural Network Policies

✅ 1702.Adversarial examples for generative models

✅ 1702.Adversarial examples in the physical world

✅ 1702.Adversarial machine learning at scale

✅ 1702.Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN

✅ 1702.Towards Deep Learning Models Resistant to Adversarial Attacks

✅ 1703.Adversarial Transformation Networks: Learning to Generate Adversarial Examples

✅ 1703.Generative Poisoning Attack Method Against Neural Networks

✅ 1703.Notes on Adversarial Examples

✅ 1703.Tactics of Adversarial Attack on Deep Reinforcement Learning Agents

✅ 1703.Towards Evaluating the Robustness of Neural Networks

✅ 1703.Universal adversarial perturbations

✅ 1705.Analysis of universal adversarial perturbations

✅ 1705.Ensemble Adversarial Training Attacks and Defenses

✅ 1705.Generative Adversarial Trainer Defense to Adversarial Perturbations with GAN

✅ 1705.Stabilizing Adversarial Nets With Prediction Methods

✅ 1707.APE-GAN: Adversarial Perturbation Elimination with GAN

✅ 1707.Evading Machine Learning Malware Detection

✅ 1707.Fast Feature Fool-A data independent approach to universal adversarial perturbations

✅ 1707.Robust Physical-World Attacks on Machine Learning Models

✅ 1707.Robust Physical-World Attacks on Deep Learning Visual Classification

✅ 1707.Synthesizing Robust Adversarial Examples

✅ 1708.Machine Learning as an Adversarial Service: Learning Black-Box Adversarial Examples

✅ 1708.Proof of Work Without All the Work

✅ 1709.Can you fool AI with adversarial examples on a visual Turing test

✅ 1709.EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples

✅ 1709.Ground-Truth Adversarial Examples

✅ 1710.One pixel attack for fooling deep neural networks

✅ 1711.Security Risks in Deep Learning Implementations

✅ 1712.Adversarial Patch

✅ 1712.Robust Deep Reinforcement Learning with Adversarial Attacks

✅ 1712.Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning

✅ 1712.Where Classification Fails, Interpretation Rises

✅ 1801.Learning to Evade Static PE Machine Learning Malware Models via Reinforcement Learning

✅ 1801.LaVAN-Localized and Visible Adversarial Noise

✅ 1803.Improving Transferability of Adversarial Examples with Input Diversity
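
For orientation, here is a minimal sketch of the fast gradient sign method (FGSM) introduced in "1503.Explaining and Harnessing Adversarial Examples" above: perturb the input by epsilon in the direction of the sign of the loss gradient. It assumes a PyTorch classifier; `model`, `x`, `y`, and `epsilon` are illustrative placeholders, not code from this repository.

```python
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.03):
    """Return x' = clip(x + epsilon * sign(grad_x loss(model(x), y)), 0, 1)."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    # Step in the direction that increases the loss, then keep pixels valid.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```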


Defence Strategies

✅ 1603.Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks

✅ 1604.Improving the robustness of deep neural networks via stability training

✅ 1605.Adversarial Training Methods for Semi-Supervised Text Classification

✅ 1607.Defensive Distillation is Not Robust to Adversarial Examples

✅ 1608.A study of the effect of JPG compression on adversarial images

✅ 1610.Adversary Resistant Deep Neural Networks with an Application to Malware Detection

✅ 1703.Biologically inspired protection of deep networks from adversarial attacks

✅ 1704.Enhancing Robustness of Machine Learning Systems via Data Transformations

✅ 1704.Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks

✅ 1705.Detecting Adversarial Examples in Deep Neural Networks

✅ 1705.Extending Defensive Distillation

✅ 1705.Feature Squeezing Mitigates and Detects Carlini Wagner Adversarial Examples

✅ 1705.Generative Adversarial Trainer Defense to Adversarial Perturbations with GAN

✅ 1705.MagNet-a Two-Pronged Defense against Adversarial Examples

✅ 1706.Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong

✅ 1707.AE-GAN adversarial eliminating with GAN

✅ 1707.APE-GAN: Adversarial Perturbation Elimination with GAN

✅ 1709.Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification

✅ 1711.Mitigating adversarial effects through randomization

✅ 1805.Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
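
As a companion to the attack sketch above, adversarial training is the standard baseline defence discussed in several papers listed in this repository (e.g. "1702.Adversarial machine learning at scale" and "1705.Ensemble Adversarial Training Attacks and Defenses"): each batch is trained on a mix of clean and adversarially perturbed inputs. The sketch below assumes a PyTorch model and optimizer; all names are illustrative placeholders.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One optimization step on an equal mix of clean and FGSM-perturbed inputs."""
    # Craft FGSM examples for the current batch (single gradient-sign step).
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    # Train on both the clean and the adversarial view of the batch.
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```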


Related surveys

✅ 1606.Concrete Problems in AI Safety

✅ 1610.SoK Towards the Science of Security and Privacy in Machine Learning

✅ 1611.Towards the Science of Security and Privacy in Machine Learning

✅ 1707.A Survey on Resilient Machine Learning

✅ 1712.Adversarial Examples-Attacks and Defenses for Deep Learning

✅ 1801.Threat of Adversarial Attacks on Deep Learning in Computer Vision-A Survey

✅ 1802.Adversarial Risk and the Dangers of Evaluating Against Weak Attacks


Detection papers

✅ 1612.Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics

✅ 1702.On Detecting Adversarial Perturbations

✅ 1702.On the (Statistical) Detection of Adversarial Examples

✅ 1703.Blocking Transferability of Adversarial Examples in Black-Box Learning Systems

✅ 1703.Detecting Adversarial Samples from Artifacts

✅ 1704.SafetyNet-Detecting and Rejecting Adversarial Examples Robustly

✅ 1705.Adversarial Examples Are Not Easily Detected-Bypassing Ten Detection Methods

✅ 1712.Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser

✅ 1803.Detecting Adversarial Examples via Neural Fingerprinting
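
A recurring idea in this literature is to compare the model's prediction on an input with its prediction on a "squeezed" (simplified) version of that input and flag large discrepancies, as in the Feature Squeezing papers listed under Defence Strategies. A rough sketch of that idea, assuming a PyTorch classifier that returns logits; the threshold is illustrative and would normally be calibrated on held-out data.

```python
import torch
import torch.nn.functional as F

def bit_depth_squeeze(x, bits=4):
    """Reduce inputs in [0, 1] to 2**bits colour levels per channel."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def flag_adversarial(model, x, threshold=1.0):
    """Flag inputs whose prediction shifts strongly after squeezing."""
    p_original = F.softmax(model(x), dim=1)
    p_squeezed = F.softmax(model(bit_depth_squeeze(x)), dim=1)
    # L1 distance between the two prediction vectors, one score per input.
    score = (p_original - p_squeezed).abs().sum(dim=1)
    return score > threshold
```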


Adversarial attacks in the physical world

✅ 1600.Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition

✅ 1702.Adversarial examples in the physical world

✅ 1707.Robust Physical-World Attacks on Machine Learning Models

✅ 1707.NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles

✅ 1707.Synthesizing Robust Adversarial Examples

✅ 1707.Robust Physical-World Attacks on Deep Learning Visual Classification

✅ 1710.Standard detectors aren’t (currently) fooled by physical adversarial stop signs

✅ 1801.Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
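
Physical-world attacks have to survive printing, viewpoint, and lighting changes; "1707.Synthesizing Robust Adversarial Examples" above addresses this with Expectation over Transformation (EOT), optimizing the perturbation under a distribution of random transformations. A minimal gradient-estimation sketch of that idea, assuming PyTorch; `sample_transform` is a hypothetical sampler returning a random differentiable transform (e.g. rotation or rescaling).

```python
import torch.nn.functional as F

def eot_gradient(model, x_adv, y_target, sample_transform, n_samples=8):
    """Estimate the targeted-attack gradient averaged over random transforms."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    total = 0.0
    for _ in range(n_samples):
        t = sample_transform()  # e.g. a random rotation, scaling, or crop
        total = total + F.cross_entropy(model(t(x_adv)), y_target)
    (total / n_samples).backward()
    # A descent step along this gradient pushes predictions toward y_target
    # across the sampled transformations.
    return x_adv.grad.detach()
```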


📖 (Updating) Adversarial Attack Research in Security Areas

✅ 1606.Adversarial Perturbations Against Deep Neural Networks for Malware Classification

✅ 1705.Black-Box Attacks against RNN based Malware Detection Algorithms

✅ 1706.Automated Poisoning Attacks and Defenses in Malware Detection Systems: An Adversarial Machine Learning Approach

✅ 1707.Evading Machine Learning Malware Detection

✅ 1710.Malware Detection by Eating a Whole EXE

✅ 1712.Attack and Defense of Dynamic Analysis-Based, Adversarial Neural Malware Classification Models

✅ 1801.Learning to Evade Static PE Machine Learning Malware Models via Reinforcement Learning

✅ 1803.Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables

✅ 1803.Malytics: A Malware Detection Scheme

✅ 1804.Generic Black-Box End-to-End Attack against RNNs and Other API Calls Based Malware Classifiers

✅ 1805.Adversarial Attacks on Neural Networks for Graph Data

✅ 1810.Exploring Adversarial Examples in Malware Detection

✅ 1811.Adversarial Examples for Malware Detection

✅ 1812.Machine Learning under Attack: Vulnerability Exploitation and Security Measures
