Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks against Deep Image Classification
This repository will contain the code and pre-trained models for our paper "Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks against Deep Image Classification". We implement all backdoor detection and repair methods as well as all attacks presented in the paper.
(Jun 4) The code will be released soon.
Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks against Deep Image Classification Nils Lukas and Florian Kerschbaum. Preprint.
Please consider citing the following paper if you find our work useful.
@article{lukas2023pick,
  title={Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks against Deep Image Classification},
  author={Lukas, Nils and Kerschbaum, Florian},
  journal={arXiv preprint arXiv:2305.09671},
  year={2023}
}