Fawkes, privacy preserving tool against facial recognition systems. More info at https://sandlab.cs.uchicago.edu/fawkes
-
Updated
Aug 2, 2023 - Python
Fawkes, privacy preserving tool against facial recognition systems. More info at https://sandlab.cs.uchicago.edu/fawkes
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
The Security Toolkit for LLM Interactions
A Toolbox for Adversarial Robustness Research
A curated list of useful resources that cover Offensive AI.
A curated list of adversarial attacks and defenses papers on graph-structured data.
RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track]
T2F: text to face generation using Deep Learning
Unofficial PyTorch implementation of the paper titled "Progressive growing of GANs for improved Quality, Stability, and Variation"
Papers and resources related to the security and privacy of LLMs 🤖
A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.
GraphGallery is a gallery for benchmarking Graph Neural Networks
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
Provable adversarial robustness at ImageNet scale
TransferAttack is a pytorch framework to boost the adversarial transferability for image classification.
A curated list of trustworthy deep learning papers. Daily updating...
Backdoors Framework for Deep Learning and Federated Learning. A light-weight tool to conduct your research on backdoors.
Add a description, image, and links to the adversarial-machine-learning topic page so that developers can more easily learn about it.
To associate your repository with the adversarial-machine-learning topic, visit your repo's landing page and select "manage topics."