
Backdoor Attacks Against Dataset Distillation


This is the official code for our NDSS 2023 paper Backdoor Attacks Against Dataset Distillation. Currently, we support two distillation techniques, namely Dataset Distillation (DD) and Dataset Condensation with Gradient Matching (DC). In this project, we propose three different backdoor attacks: NAIVEATTACK, DOORPING, and INVISIBLE. NAIVEATTACK inserts a pre-defined trigger into the original training dataset before distillation, while DOORPING is an advanced method that optimizes the trigger during the distillation process. At the reviewers' request, we added a third backdoor method, INVISIBLE, based on Invisible Backdoor Attacks on Deep Neural Networks via Steganography and Regularization.
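
To make the attack setup concrete, below is a minimal sketch of NAIVEATTACK-style poisoning: a fixed patch is stamped onto a small fraction of the training images, which are then relabeled to the attacker's target class. The function name, trigger shape, and poisoning ratio are illustrative assumptions, not the repository's exact implementation.

```python
import torch

def stamp_trigger(images, labels, target_class=0, trigger_size=2, poison_frac=0.01):
    """Illustrative NAIVEATTACK-style poisoning (not the repository's exact code):
    stamp a fixed white patch onto a fraction of the images and relabel them."""
    images, labels = images.clone(), labels.clone()
    n_poison = max(1, int(poison_frac * len(images)))
    idx = torch.randperm(len(images))[:n_poison]
    # Pre-defined trigger: a white square in the bottom-right corner.
    images[idx, :, -trigger_size:, -trigger_size:] = 1.0
    labels[idx] = target_class
    return images, labels
```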

Because of limitations in the DD code, PyTorch 2.0 is not supported.

Requirements

A suitable conda environment named baadd can be created and activated with:

```
conda env create -f environment.yaml
conda activate baadd
```
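
Since the DD code does not run under PyTorch 2.0, it may help to verify the version that conda resolved before starting a long run. A quick sanity check (this snippet is an illustrative assumption, not part of the repository):

```python
import torch

# The DD code does not support PyTorch 2.0+, so fail early on a newer install.
major = int(torch.__version__.split(".")[0])
assert major < 2, f"PyTorch {torch.__version__} detected; the DD code requires a pre-2.0 release."
```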

Run Backdoor Attacks against DD

We support five different datasets: Fashion-MNIST (FMNIST), CIFAR10, CIFAR100, STL10, and SVHN, as well as two model architectures: AlexNet and ConvNet.

Because the DD and DC code take different argument values, the values accepted by the DD code are listed below:

| Dataset Name | Fashion-MNIST | CIFAR10 | CIFAR100 | STL10 | SVHN |
|---|---|---|---|---|---|
| `--dataset` argument | FashionMNIST | Cifar10 | Cifar100 | STL10 | SVHN |

| Model Architecture | AlexNet | ConvNet |
|---|---|---|
| `--arch` argument | AlexCifarNet | ConvNet |

For NAIVEATTACK, run:

```
python DD/main.py --mode distill_basic --dataset Cifar10 --arch AlexCifarNet --distill_lr 0.001 --naive --dataset_root /path/to/data --results_dir /path/to/results
```

For DOORPING, run:

```
python DD/main.py --mode distill_basic --dataset Cifar10 --arch AlexCifarNet --distill_lr 0.001 --doorping --dataset_root /path/to/data --results_dir /path/to/results
```
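
For intuition, the sketch below shows the kind of trigger update DOORPING performs inside the distillation loop: the trigger pixels are optimized so that the current model maps any triggered input to the target class. The function name, trigger placement, and plain gradient step are illustrative assumptions, not the repository's exact code.

```python
import torch
import torch.nn.functional as F

def doorping_step(model, trigger, images, target_class, lr=0.1):
    """Illustrative DOORPING-style update (not the repository's exact code):
    nudge the trigger so triggered images are classified as the target class."""
    trigger = trigger.detach().requires_grad_(True)
    stamped = images.clone()
    h, w = trigger.shape[-2:]
    stamped[:, :, -h:, -w:] = trigger  # overlay trigger in the bottom-right corner
    targets = torch.full((len(images),), target_class,
                         dtype=torch.long, device=images.device)
    loss = F.cross_entropy(model(stamped), targets)
    loss.backward()
    # Gradient step on the trigger itself, keeping pixels in a valid range.
    return (trigger - lr * trigger.grad).clamp(0, 1).detach()
```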

For INVISIBLE, run:

```
python DD/main.py --mode distill_basic --dataset Cifar10 --arch AlexCifarNet --distill_lr 0.001 --invisible --dataset_root /path/to/data --results_dir /path/to/results
```
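
The INVISIBLE attack hides its trigger steganographically, so poisoned images look unchanged to a human. As a rough sketch of the underlying idea, least-significant-bit embedding hides a bit string in the lowest bit of each pixel; the helper below is illustrative only, not the repository's code.

```python
import numpy as np

def embed_lsb(image_uint8, message="trigger"):
    """Illustrative LSB steganography: hide a bit string in the least significant
    bit of each pixel, leaving the image visually indistinguishable."""
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    flat = image_uint8.flatten().copy()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits  # overwrite lowest bit
    return flat.reshape(image_uint8.shape)
```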

Run Backdoor Attacks against DC

The values accepted by the DC code are listed below:

| Dataset Name | Fashion-MNIST | CIFAR10 | CIFAR100 | STL10 | SVHN |
|---|---|---|---|---|---|
| `--dataset` argument | FashionMNIST | CIFAR10 | CIFAR100 | STL10 | SVHN |

| Model Architecture | AlexNet | ConvNet |
|---|---|---|
| `--model` argument | AlexNet | ConvNet |

For NAIVEATTACK, run:

```
python DC/main.py --dataset CIFAR10 --model AlexNet --naive --data_path /path/to/data --save_path /path/to/results
```

For DOORPING, run:

```
python DC/main.py --dataset CIFAR10 --model AlexNet --doorping --data_path /path/to/data --save_path /path/to/results
```

For INVISIBLE, run:

```
python DC/main.py --dataset CIFAR10 --model AlexNet --invisible --data_path /path/to/data --save_path /path/to/results
```

Citation

Please cite this paper in your publications if it helps your research:

```
@inproceedings{LLBSZ23,
  author    = {Yugeng Liu and Zheng Li and Michael Backes and Yun Shen and Yang Zhang},
  title     = {{Backdoor Attacks Against Dataset Distillation}},
  booktitle = {{NDSS}},
  year      = {2023}
}
```
