A list of recent papers about adversarial learning
Updated May 23, 2024
A novel physical adversarial attack tackling the Digital-to-Physical Visual Inconsistency problem.
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Adversary Emulation Framework
Adversarial attacks on LLMs that influence the outputs of hidden-layer linear probes and steer generations.
Beacon Object File (BOF) launcher - library for executing BOF files in C/C++/Zig applications
A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and description for diversity/explanation/interpretability. Analysis of incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convol…
A hybrid neural network model protected against adversarial attacks using either adversarial training or a randomization defense.
A classical or convolutional neural network model with adversarial defense protection
The Fast Gradient Sign Method (FGSM) is a white-box attack with a misclassification goal: it tricks a neural network into making wrong predictions. We use this technique to anonymize images.
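FGSM perturbs an input by a small step in the sign of the loss gradient with respect to that input. A minimal NumPy sketch on a toy logistic-regression "model" (the weights, inputs, and function names here are illustrative, not from any of the listed projects):

```python
import numpy as np

def fgsm(x, grad, eps):
    """One FGSM step: move x by eps in the sign of the loss gradient,
    then clip back into the valid [0, 1] input range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy linear classifier; weights are illustrative only.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def loss_grad(x, y):
    # Gradient of binary cross-entropy w.r.t. the input for sigmoid(w.x + b).
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return (p - y) * w

x = np.array([0.2, 0.7, 0.4])
x_adv = fgsm(x, loss_grad(x, y=1.0), eps=0.1)
```

With real models, the gradient comes from autodiff (e.g. a backward pass in PyTorch) rather than a closed-form expression, but the update rule is the same single signed step.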
A reading list on the safety, security, and privacy of large models.
The Security Automation Toolkit
Generate adversarial patches against YOLOv5 🚀
This study explores the vulnerabilities of the Pathology Language-Image Pretraining (PLIP) model, a vision-language foundation model for medical AI, under targeted attacks such as PGD.
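PGD, referenced above, is essentially iterated FGSM: repeated small signed-gradient steps, each followed by projection back into an epsilon-ball around the original input. A minimal NumPy sketch on a toy linear model (all weights and names here are illustrative, not taken from the PLIP study):

```python
import numpy as np

def pgd(x0, grad_fn, eps=0.1, alpha=0.02, steps=10):
    """Projected Gradient Descent: iterate small signed-gradient steps,
    projecting back into the L-inf eps-ball around x0 after each step."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))
        x = np.clip(x, x0 - eps, x0 + eps)  # project onto the eps-ball
        x = np.clip(x, 0.0, 1.0)            # stay in the valid input range
    return x

# Toy linear "model"; weights are illustrative only.
w = np.array([0.8, -1.5, 0.3])

def grad_fn(x):
    # Gradient of binary cross-entropy for label y=1 under sigmoid(w.x).
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    return (p - 1.0) * w

x0 = np.array([0.5, 0.5, 0.5])
x_adv = pgd(x0, grad_fn)
```

The projection step is what distinguishes PGD from plain iterated FGSM: however many steps run, the final perturbation can never exceed eps in any coordinate.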
WACV 2024 Papers: Discover cutting-edge research from WACV 2024, the leading computer vision conference. Stay updated on the latest in computer vision and deep learning, with code included. ⭐ support visual intelligence development!
Learning-Based Atk: a black-box adversarial attack method for time series forecasting (TSF).
A Python API that facilitates training, creating, and transferring attacks with quantized DNNs
An attack that induces hallucinations in LLMs.
[ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models
A suite for hunting suspicious targets, exposing domains, and discovering phishing.