We develop a theoretical understanding of the factors that make poisoning backdoor attacks effective. This repository contains the code for the simulated and real-world data experiments that illustrate the developed theory.
- `src/experiment.py`: runs the experiments on synthetic two-dimensional Gaussian datasets (see the illustrative sketch below).
- `src/diffusion_backdoor.ipynb`: backdoor attacks on diffusion models trained on MNIST.
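To give a rough sense of the synthetic setting, the sketch below builds a two-dimensional Gaussian classification dataset and poisons a small fraction of one class with a fixed additive trigger. The trigger vector, poisoning ratio, and sample sizes are illustrative choices on our part, not the settings used in `src/experiment.py`.

```python
import numpy as np

# Illustrative parameters (assumptions, not the repo's actual settings).
n_per_class = 500               # clean samples per class
poison_ratio = 0.05             # fraction of class-0 samples to poison
trigger = np.array([3.0, 0.0])  # hypothetical additive trigger pattern
target_label = 1                # label the attacker wants triggered inputs to receive

rng = np.random.default_rng(0)

# Two-dimensional Gaussian classes centered at (-2, 0) and (+2, 0).
X0 = rng.normal(loc=[-2.0, 0.0], scale=1.0, size=(n_per_class, 2))
X1 = rng.normal(loc=[+2.0, 0.0], scale=1.0, size=(n_per_class, 2))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)]).astype(int)

# Poison a fraction of class-0 samples: add the trigger and flip the label.
n_poison = int(poison_ratio * n_per_class)
poison_idx = rng.choice(np.where(y == 0)[0], size=n_poison, replace=False)
X[poison_idx] += trigger
y[poison_idx] = target_label

print(f"Dataset: {len(X)} samples, {n_poison} poisoned with trigger {trigger}")
```

A model trained on such a poisoned set tends to associate the trigger direction with the target label; the actual experiments in `src/experiment.py` study this effect under the conditions analyzed in the paper.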
Both scripts were tested in an environment with PyTorch 2.0.1, CUDA 11.8, and Python 3.10.
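A quick sanity check such as the following (our suggestion, not part of the repo) can confirm that your interpreter, PyTorch build, and CUDA toolkit match the tested versions:

```python
import sys

import torch

# Expected by this repo: Python 3.10, PyTorch 2.0.1, CUDA 11.8.
print("Python :", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA   :", torch.version.cuda, "| available:", torch.cuda.is_available())
```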
Ganghua Wang, Xun Xian, Jayanth Srinivasa, Ashish Kundu, Xuan Bi, Mingyi Hong, and Jie Ding. “Demystifying Poisoning Backdoor Attacks from a Statistical Perspective,” International Conference on Learning Representations (ICLR), 2024.
@article{wang2024demystify,
  title={Demystifying Poisoning Backdoor Attacks from a Statistical Perspective},
  author={Wang, Ganghua and Xian, Xun and Srinivasa, Jayanth and Kundu, Ashish and Bi, Xuan and Hong, Mingyi and Ding, Jie},
  journal={Proc. ICLR},
  year={2024}
}
If you have any questions, please feel free to contact us or submit an issue.
- Ganghua Wang: wang9019@umn.edu
- Xun Xian: xian0044@umn.edu