# ConvexPolytopePosioning

This repository provides code to reproduce the major experiments in the paper *Transferable Clean-Label Poisoning Attacks on Deep Neural Nets* (ICML 2019).

If you find this code useful for your research, please cite:

```
@inproceedings{zhu2019transferable,
  title={Transferable Clean-Label Poisoning Attacks on Deep Neural Nets},
  author={Zhu, Chen and Huang, W Ronny and Shafahi, Ali and Li, Hengduo and Taylor, Gavin and Studer, Christoph and Goldstein, Tom},
  booktitle={International Conference on Machine Learning},
  pages={7614--7623},
  year={2019}
}
```

## Prerequisites

The experiments can be reproduced with PyTorch 1.0.1 and CUDA 9.0 on Ubuntu 16.04.
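If you are starting from a fresh conda environment, one possible way to install matching versions is sketched below. The exact package pins are an assumption based on the PyTorch archive for that release, not something this repository specifies:

```bash
# A possible environment setup (assumption: conda is available; pins follow
# the PyTorch 1.0.1 / CUDA 9.0 combination named above).
conda install pytorch==1.0.1 torchvision==0.2.2 cudatoolkit=9.0 -c pytorch
```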

Before running any experiments, please download our split of the CIFAR10 dataset, create a `datasets/` directory, and move the file into it. For example, run the following from the project root:

```bash
mkdir datasets && cd datasets && wget https://www.dropbox.com/s/raw/451maqtq716ggr4/CIFAR10_TRAIN_Split.pth
```
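Once downloaded, the file can be sanity-checked from Python. The internal structure of `CIFAR10_TRAIN_Split.pth` is not documented here, so this sketch only loads and inspects it rather than assuming its contents:

```python
import torch

# Load the pre-made CIFAR10 split. It is a standard torch-serialized object;
# its exact structure is not documented here, so inspect it before relying on it.
split = torch.load('datasets/CIFAR10_TRAIN_Split.pth')
print(type(split))
if isinstance(split, dict):
    print(list(split.keys()))
```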

We also provide most of the substitute and victim models used in our experiments via Dropbox. You can also train any substitute model used in the paper with `train_cifar10_models.py`, where we have tweaked the code from kuangliu to add Dropout to the networks and to train on different subsets of the training data. One example of running the training:

```bash
python train_cifar10_models.py --gpu 0 --net ResNet50 --train-dp 0.25 --sidx 0 --eidx 4800
```
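Because the attack transfers from an ensemble of substitute models, you would typically repeat this command across several architectures. A hypothetical loop is below; the architecture names other than ResNet50 are assumptions based on the kuangliu model zoo, and we presume `--sidx`/`--eidx` select the index range of the training split:

```bash
# Hypothetical: train a small ensemble of substitute models on the same
# slice of the training split, each with dropout rate 0.25.
for net in ResNet18 ResNet50 DenseNet121; do
    python train_cifar10_models.py --gpu 0 --net "$net" --train-dp 0.25 --sidx 0 --eidx 4800
done
```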

Feel free to contact us if you have any questions or find anything missing.

## Launch

We will add more examples, as well as the poisons we used in the paper, soon; for now, here are some simple examples we have cleaned up.

To attack the transfer learning setting:

```bash
bash launch/attack-transfer.sh
```

To attack the end-to-end setting:

```bash
bash launch/attack-end2end.sh
```
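For intuition about what these scripts optimize: the convex polytope attack crafts poison images whose feature vectors enclose the target's feature vector in a convex polytope, so that a classifier fitting the poisons misclassifies the target. Below is a heavily simplified, single-network sketch of the core loss; the variable names are illustrative, and the actual multi-network implementation lives in `craft_poisons_transfer.py`:

```python
import torch

def convex_polytope_loss(target_feat, poison_feats, coeffs):
    """Normalized distance between the target's feature vector and a convex
    combination of the poisons' features, for one substitute network.

    target_feat:  (d,)   feature of the target image
    poison_feats: (k, d) features of the k poison images
    coeffs:       (k,)   convex coefficients (non-negative, summing to 1)
    """
    residual = target_feat - coeffs @ poison_feats
    return residual.pow(2).sum() / (2 * target_feat.pow(2).sum())
```

In the paper this loss is summed over an ensemble of substitute networks, the coefficients are projected back onto the probability simplex after each update, and the poisons are clipped to a small l-infinity ball around clean base images so they remain clean-label.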