Beacon Object File (BOF) launcher - library for executing BOF files in C/C++/Zig applications (Zig; updated Jul 9, 2024)
🎃 PumpBin is an Implant Generation Platform.
Adversary Emulation Framework
QROA: A Black-Box Query-Response Optimization Attack on LLMs
A New Context-Aware Framework for Defending Against Adversarial Attacks in Hyperspectral Image Classification (IEEE TGRS 2023)
Universal Adversarial Perturbations for Vision-Language Pre-trained Models
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
Quadratic Upper Bound Loss for Robust Adversarial Training
The official implementation of the ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now". This work introduces a fast and effective attack method for evaluating the harmful-content generation ability of safety-driven unlearned diffusion models.
The largest and most challenging benchmark for machine-generated text detectors. (ACL 2024)
A suite for hunting suspicious targets, exposing domains, and discovering phishing.
Python toolkit for speech processing
Generate adversarial patches against YOLOv5 🚀
[Machine Learning 2023] Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness
[NeurIPS2021] Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks
A reading list for large models safety, security, and privacy (including Awesome LLM Security, Safety, etc.).
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
DeepDefend is an open-source Python library for adversarial attacks and defenses in deep learning models, enhancing the security and robustness of AI systems.
a collection of small machine learning projects
RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models
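Several of the toolkits above (e.g. ART, TextAttack, DeepDefend) implement gradient-based evasion attacks. To make the core idea concrete, here is a minimal, library-agnostic sketch of a one-step FGSM-style evasion attack in pure NumPy against a hand-rolled logistic-regression model. All function names and the toy model are illustrative assumptions, not the API of any listed project.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(w, b, x, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x
    for a logistic-regression model p = sigmoid(w.x + b)."""
    p = sigmoid(np.dot(w, x) + b)
    return (p - y) * w  # dL/dx

def fgsm(w, b, x, y, eps):
    """One-step FGSM: nudge x in the sign direction of the loss gradient."""
    g = loss_grad_x(w, b, x, y)
    return x + eps * np.sign(g)

# Toy model: classifies by the sign of a weighted sum of the input.
w = np.ones(4)
b = 0.0
x = np.array([0.2, 0.1, 0.3, 0.2])  # clean input, correctly classified as 1
y = 1.0                             # true label

x_adv = fgsm(w, b, x, y, eps=0.5)
p_clean = sigmoid(np.dot(w, x) + b)
p_adv = sigmoid(np.dot(w, x_adv) + b)
print(p_clean > 0.5, p_adv > 0.5)   # clean prediction flips under the perturbation
```

Real attack libraries generalize this pattern to deep networks (autograd for the input gradient) and add iterative variants, norm constraints, and defenses; the listed frameworks expose these as configurable attack objects.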