Adversary Emulation Framework (Go, updated Jun 3, 2024)
The Security Automation Toolkit
A reading list on large model safety, security, and privacy.
Official implementation of Black-box Universal Adversarial Attacks with Bayesian Optimization.
RAID is the largest and most challenging benchmark for machine-generated text detectors. (ACL 2024)
PAL: Proxy-Guided Black-Box Attack on Large Language Models
A collection of small machine learning projects.
This project evaluates the robustness of image classification models against adversarial attacks using two key metrics: Adversarial Distance and the CLEVER score. The study employs variants of the WideResNet model, including a standard model and a corruption-trained robust model, both trained on the CIFAR-10 dataset.
FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
A list of recent papers about adversarial learning
Generate adversarial patches against YOLOv5 🚀
PyTorch implementation of adversarial attacks [torchattacks].
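Several of the libraries above (torchattacks, ART) implement gradient-based evasion attacks such as FGSM. The following is a minimal, dependency-free sketch of the FGSM idea on a toy logistic classifier; the model, weights, and `fgsm` helper are illustrative assumptions, not the API of any listed library.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability of class 1 under a logistic model (w, b)."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, y, w, b, eps):
    """Return an adversarial copy of x for the logistic model (w, b).

    For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w,
    so the FGSM step is x_i + eps * sign((p - y) * w_i).
    """
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy example (all values hypothetical): a point the model classifies
# correctly (p > 0.5 for label y = 1) is flipped by a bounded perturbation.
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1
x_adv = fgsm(x, y, w, b, eps=0.9)

p_clean = predict(w, b, x)      # > 0.5: correct on the clean input
p_adv = predict(w, b, x_adv)    # < 0.5: misclassified after the attack
```

Library implementations follow the same pattern, but obtain the input gradient by backpropagation through the full network and clamp the result to a valid pixel range.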
DPLL(T)-based Verification tool for DNNs
🎯 Enhanced Adversarial Patch: Extends adversarial attacks on TartanVO model with a new loss function, rotation attack, and momentum optimizer
Beacon Object File (BOF) launcher - library for executing BOF files in C/C++/Zig applications
Official implementation of Asymmetric Bias in Text-to-Image Generation with Adversarial Attacks
Code, model and data for our paper: K. Tsigos, E. Apostolidis, S. Baxevanakis, S. Papadopoulos, V. Mezaris, "Towards Quantitative Evaluation of Explainable AI Methods for Deepfake Detection", Proc. ACM Int. Workshop on Multimedia AI against Disinformation (MAD’24) at the ACM Int. Conf. on Multimedia Retrieval (ICMR’24), Thailand, June 2024.
TransferAttack is a PyTorch framework for boosting adversarial transferability in image classification.
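Transferability means an adversarial example crafted against one (surrogate) model often also fools a different (target) model. The snippet below demonstrates this in pure Python with two hypothetical linear classifiers; it is a conceptual sketch, not code from the TransferAttack framework.

```python
def score(w, x):
    """Linear decision score: class 1 if positive, class 0 otherwise."""
    return sum(wi * xi for wi, xi in zip(w, x))

def sign_step(x, grad, eps):
    """Perturb x by eps in the sign direction of the gradient."""
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Two hypothetical models trained on similar data: the attacker only
# has gradient access to the surrogate, never to the target.
w_surrogate = [2.0, -1.0]
w_target = [1.5, -1.2]

x, y = [1.0, 0.5], 1  # clean input, true label 1

# Craft the attack on the surrogate: to lower the score for class 1,
# step against the surrogate's weight vector (gradient of -score is -w).
x_adv = sign_step(x, [-wi for wi in w_surrogate], eps=0.9)

clean_target = score(w_target, x)    # > 0: target is correct on clean x
adv_target = score(w_target, x_adv)  # < 0: the attack transfers
```

The example transfers because the two weight vectors are correlated; frameworks like TransferAttack add techniques (input transformations, momentum, ensembling) precisely to strengthen this correlation effect against unseen targets.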
The all-in-one tool for comprehensive experimentation with adversarial attacks on image recognition.