
Update Jan 10, 2020

Fixed a bug with end-to-end training: `args.end2end` was not being passed to `make_convex_polytope_poisons`. If the end2end setting did not work for you, please try again!


This repository provides code to reproduce the major experiments in the paper Transferable Clean-Label Poisoning Attacks on Deep Neural Nets (ICML 2019).

If you find this code useful for your research, please cite:

```
@inproceedings{zhu2019transferable,
  title={Transferable Clean-Label Poisoning Attacks on Deep Neural Nets},
  author={Zhu, Chen and Huang, W Ronny and Shafahi, Ali and Li, Hengduo and Taylor, Gavin and Studer, Christoph and Goldstein, Tom},
  booktitle={International Conference on Machine Learning},
  year={2019}
}
```


The experiments can be reproduced with PyTorch 1.0.1 and CUDA 9.0 on Ubuntu 16.04.

Before running any experiments, please download our split of the CIFAR10 dataset here, create a directory datasets/, and move the file into datasets/. For example, run the following under the project directory:

```
mkdir datasets && cd datasets && wget
```

We also provide most of the substitute and victim models used in our experiments via Dropbox. You can also train any of the substitute models used in the paper; we tweaked the code from kuangliu to add Dropout operations to the networks and to select different subsets of the training data. One example of running the training:

```
python --gpu 0 --net ResNet50 --train-dp 0.25 --sidx 0 --eidx 4800
```
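As a rough illustration of what the `--train-dp`, `--sidx`, and `--eidx` flags above control, here is a minimal numpy sketch of inverted dropout and training-subset selection. The helper names and the half-open reading of `--sidx`/`--eidx` are assumptions for illustration, not code from this repo:

```python
import numpy as np

def train_dropout(x, p, rng):
    # Inverted dropout (assumed behavior of --train-dp): zero each
    # activation with probability p, rescale survivors by 1/(1-p) so
    # the expected activation is unchanged at train time.
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def take_subset(images, labels, sidx, eidx):
    # Hypothetical reading of --sidx/--eidx: keep examples with
    # indices in [sidx, eidx) from the training set.
    return images[sidx:eidx], labels[sidx:eidx]
```

Training different substitute models on different subsets (via `--sidx`/`--eidx`) and with Dropout encourages diversity among the substitutes, which the paper links to better transferability of the poisons.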

Feel free to contact us if you have any questions or find anything missing.


We will soon add more examples, as well as the poisons used in the paper, but here are some simple examples we have cleaned up.

To attack the transfer learning setting:

```
bash launch/
```

To attack the end-to-end setting:

```
bash launch/
```
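Both settings build on the paper's Convex Polytope attack: the poisons are crafted so that, in the feature space of the substitute networks, the target lies inside the convex hull of the poison features. The inner step of that objective, finding the convex combination of poison features closest to the target feature, can be sketched with projected gradient descent (the simplex projection follows the standard sorting-based algorithm). All names below are illustrative, not code from this repo:

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto the probability simplex
    # {c : c >= 0, sum(c) = 1}, via the standard sort-based method.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def convex_fit(poison_feats, target_feat, lr=0.01, steps=500):
    # poison_feats: (k, d) feature vectors of k poisons;
    # target_feat: (d,) feature vector of the target.
    # Minimize ||poison_feats.T @ c - target_feat||^2 over the simplex.
    k = poison_feats.shape[0]
    c = np.full(k, 1.0 / k)
    for _ in range(steps):
        residual = poison_feats.T @ c - target_feat
        c = project_simplex(c - lr * (poison_feats @ residual))
    return c
```

If the fitted residual is small, the target feature lies (approximately) in the convex hull of the poison features; the full attack alternates between this coefficient fit and updating the poison images themselves under a clean-label perturbation constraint.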

