Experiments on Data Poisoning Regression Learning
Analyzing Adversarial Bias and the Robustness of Fair Machine Learning
How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021)
A backdoor attack in a federated learning setting using the FATE framework
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning (NeurIPS 2021)
[NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
[NeurIPS 2022] Can Adversarial Training Be Manipulated By Non-Robust Features?
Code for the paper Analysis and Detectability of Offline Data Poisoning Attacks on Linear Systems.
The official implementation of the USENIX Security '23 paper "Meta-Sift" -- ten minutes or less to find a clean subset of 1,000 or more samples in a poisoned dataset.
Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression
A repository for the experimental framework for in-stream data poisoning monitoring.
MIT IEEE URTC 2023. GSET 2023. Repository for "SeBRUS: Mitigating Data Poisoning in Crowdsourced Datasets with Blockchain".
CCS'22 Paper: "Identifying a Training-Set Attack’s Target Using Renormalized Influence Estimation"
[ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.
Measure and Boost Backdoor Robustness
A curated list of academic events on AI Security & Privacy
APBench: A Unified Availability Poisoning Attack and Defenses Benchmark
A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them
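Many of the repositories above study availability or bias attacks in which an adversary corrupts a small fraction of the training set. As a minimal, self-contained sketch of the idea (not taken from any repository listed here; all data and names are illustrative), the example below flips the labels of 10% of the training points for a least-squares regression and shows how the learned slope shifts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean training data: y = 2x + small Gaussian noise.
X = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)

def fit_slope(X, y):
    # Ordinary least squares via np.linalg.lstsq (no intercept term).
    return float(np.linalg.lstsq(X, y, rcond=None)[0][0])

clean_slope = fit_slope(X, y)

# Label-flipping poison: the attacker negates the labels of a small
# fraction of the training points, biasing the fitted slope downward.
n_poison = 20
idx = rng.choice(len(y), size=n_poison, replace=False)
y_poisoned = y.copy()
y_poisoned[idx] = -y_poisoned[idx]

poisoned_slope = fit_slope(X, y_poisoned)

print(f"clean slope:    {clean_slope:.2f}")
print(f"poisoned slope: {poisoned_slope:.2f}")
```

With 10% of labels negated, the fitted slope drops noticeably below the true value of 2, which is the kind of degradation the detection and defense repositories above aim to measure or prevent.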