- MetaAdvDet: Towards Robust Detection of Evolving Adversarial Attacks
- Connecting Lyapunov Control Theory to Adversarial Attacks
- Enhancing Adversarial Example Transferability with an Intermediate Level Attack
- advPattern: Physical-World Attacks on Deep Person Re-Identification via Adversarially Transformable Patterns
- On the Robustness of Semantic Segmentation Models to Adversarial Attacks
- The Attack Generator: A Systematic Approach Towards Constructing Adversarial Attacks
- Copy and Paste: A Simple But Effective Initialization Method for Black-Box Adversarial Attacks
- AdvHat: Real-world adversarial attack on ArcFace Face ID system
- Adversarial Attacks on Neural Networks for Graph Data
- Adversarial Attack on Graph Structured Data
- Attacking Graph-based Classification via Manipulating the Graph Structure
- Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving
- Adversarial learning
- On Evaluating Adversarial Robustness
- Customizing an Adversarial Example Generator with Class-Conditional GANs
- Generating Adversarial Examples with Adversarial Networks
- Defending Against Adversarial Attacks by Leveraging an Entire GAN
- Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
- Evasion Attacks against Machine Learning at Test Time
- Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness
- SentiNet: Detecting Physical Attacks Against Deep Learning Systems
- Sitatapatra: Blocking the Transfer of Adversarial Samples
- Adversarial Examples Are Not Bugs, They Are Features
- Wasserstein Adversarial Examples via Projected Sinkhorn Iterations
- Real-Time Adversarial Attacks
- Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
- Robust Graph Neural Network Against Poisoning Attacks via Transfer Learning
- AdvFaces: Adversarial Face Synthesis
- Adversarial Examples on Graph Data: Deep Insights into Attack and Defense
- Data Poisoning against Differentially-Private Learners: Attacks and Defenses
- Data Poisoning Attack against Knowledge Graph Embedding
- Robust Audio Adversarial Example for a Physical Attack
- Adversarial Defense Framework for Graph Neural Network
- The General Black-box Attack Method for Graph Neural Networks
- Adversarial Attack and Defense on Graph Data: A Survey
- Open DNN Box by Power Side-Channel Attack
- Transferable Adversarial Attacks for Image and Video Object Detection
- Exploring Connections Between Active Learning and Model Extraction
- A framework for the extraction of Deep Neural Networks by leveraging public data
- Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
- Explaining and Harnessing Adversarial Examples
- Adversarial Edit Attacks for Tree Data
- Model Extraction and Active Learning
- High-Fidelity Extraction of Neural Network Models
- Classifiers Against Adversarial Examples