
This document provides guidance for reproducing the major experimental results of our paper, Towards A Proactive ML Approach for Detecting Backdoor Poison Samples.

Setup

File Directory

├── assets                          # directory for figures
|   ├── overview.png                # overview figure from our paper
|   └── ...                         # visualization figures generated by scripts
|
├── clean_set                       # directory for reserved clean data and validation data
|   |                               # run `python create_clean_set.py -dataset $DATASET` to initialize
|   ├── cifar10
|   |   ├── clean_split             # reserved clean data
|   |   └── test_split              # validation data
|   ├── gtsrb
|   ├── ember
|   └── imagenet
|
├── data                            # original dataset / metainfo
|   ├── cifar10
|   |   ├── clean_label             # adversarially perturbed images for the clean label attack
|   |   |   └── setup.sh            # downloading script
|   |   └── ...                     # standard cifar10 data
|   ├── gtsrb
|   ├── ember
|   └── imagenet
|
├── logs                            # directory for logging files
|
├── misc                            # directory for documents
|   ├── reproduce.md                # this document
|   └── ...
|
├── models                          # directory for some model checkpoints
|   ├── 6_CNN_CIF1R10.h5py          # pretrained model for Frequency defense
|   ├── ISSBA_cifar10.pth           # pretrained model for ISSBA attack
|   └── ...                         # other pretrained models to be downloaded
|
├── other_cleansers                 # directory for baseline backdoor defenses (poison set cleansers)
|   ├── spectral_signature.py
|   └── ...
├── other_defenses_tool_box         # directory for baseline backdoor defenses (not poison set cleansers)
|   ├── neural_cleanse.py
|   └── ...
|
├── poison_tool_box                 # directory for poisoning backdoor attacks 
|   ├── badnet.py
|   └── ...
|
├── poisoned_train_set              # directory for poisoned training sets, etc. (see the loading sketch below this tree)
|   ├── cifar10
|   |   ├── badnet_0.003_poison_seed=0      # e.g. BadNet attack (0.3% poisoning rate)
|   |   |   ├── data                        # poisoned images
|   |   |   ├── labels                      # corresponding labels for the poisoned images
|   |   |   ├── poison_indices              # recording which images in 'data' are poisoned
|   |   |   └── full_base_aug_seed=2333.pt  # backdoored model (after training)
|   |   └── ...                     # other poisoned directory
|   └── gtsrb
|
├── triggers                        # directory for backdoor triggers (and masks)
|   ├── badnet_patch_32.png         # BadNet trigger
|   ├── mask_badnet_patch_32.png    # BadNet trigger mask
|   └── ...
|
├── utils                           # directory for many utils
|   ├── default_args.py             # default settings for the arg options of our scripts
|   ├── gradcam_utils.py            # external GradCAM library (for SentiNet defense)
|   ├── gradcam.py
|   ├── imagenet.py                 # ImageNet data tools
|   ├── resnet.py                   # ResNet architecture
|   ├── wresnet.py                  # WideResNet architecture
|   ├── mobilenetv2.py              # MobileNetV2 architecture
|   ├── vgg.py                      # VGG architecture
|   ├── ember_nn.py                 # EmberNN architecture (for ember dataset)
|   ├── supervisor.py               # providing important APIs to use
|   └── tools.py                    # other tool functions
|
├── .gitignore
├── config.py                       # configuration file
├── create_clean_set.py             # script for initializing reserved clean data and validation data   
├── create_poisoned_set_imagenet.py # script for poisoning ImageNet dataset
├── create_poisoned_set.py          # script for poisoning CIFAR10 and GTSRB datasets
├── ct_cleanser.py                  # script to launch confusion training (CIFAR10 and GTSRB datasets)
├── ct_cleanser_imagenet.py         # script to launch confusion training (ImageNet dataset)
├── ct_cleanser_ember.py            # script to launch confusion training (Ember dataset)
├── confusion_training.py           # util functions for confusion training
├── other_cleanser.py               # script to launch baseline defenses (poison set cleansers)
├── other_defense.py                # script to launch other defenses (not poison set cleansers)
├── test_model.py                   # script to evaluate a model's performance
├── train_on_poisoned_set.py        # script to train a backdoored model on a poisoned training set
├── train_on_cleansed_set.py        # script to (re)train a model on a cleansed training set
├── visualize.py                    # script to visualize a model's latent space w.r.t. clean and poison samples
├── README.md
└── requirement.txt
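
For reference, the artifacts inside a poisoned training set directory (e.g. data, labels, poison_indices above) can be inspected directly. Below is a minimal sketch under the assumption that labels and poison_indices are PyTorch serializations readable with torch.load; the exact storage format is determined by create_poisoned_set.py, so treat the loading calls as illustrative.

# minimal inspection sketch; assumes `labels` and `poison_indices` are torch-serialized,
# which may differ from the artifact's actual storage format
import os
import torch

root = 'poisoned_train_set/cifar10/badnet_0.003_poison_seed=0'      # example directory from the tree above
poison_indices = torch.load(os.path.join(root, 'poison_indices'))   # which training samples carry the trigger
labels = torch.load(os.path.join(root, 'labels'))                   # labels for all training samples

print('%d of %d training samples are poisoned' % (len(poison_indices), len(labels)))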

Hardware

Our artifact runs on common hardware configurations; the only specific requirement is NVIDIA GPU support. Our experiment environments are equipped with Intel CPUs (≥32 cores) and ≥2 NVIDIA A100 GPUs.

Dependency

Our experiments are conducted with PyTorch 1.12.1 and should be compatible with newer PyTorch versions. To reproduce our defense, first manually install PyTorch with CUDA support, then install the other packages via pip install -r requirement.txt.
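
Before launching any long-running experiment, it is worth a quick sanity check that the installed PyTorch build can actually see your GPUs. The snippet below is a generic check, not part of the artifact:

# quick environment check (not part of the artifact)
import torch

print('PyTorch version:', torch.__version__)          # our experiments used 1.12.1
print('CUDA available:', torch.cuda.is_available())   # an NVIDIA GPU is required
print('GPU count:', torch.cuda.device_count())        # multi-GPU commands below pass -devices=0,1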

TODO before You Start

A Gentle Start on CIFAR10

To familiarize readers with the overall pipeline of our artifact, we first walk through an example showing how to launch and defend against the BadNet attack on CIFAR10 (corresponding to the BadNet rows in Table 1 and Table 2 of the paper).

All our scripts take command-line options via argparse (a typical option pattern is sketched below).
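
For illustration, the shared option pattern looks roughly like the hypothetical sketch below (both -dataset cifar10 and -dataset=cifar10 are accepted by argparse); the actual definitions and defaults live in utils/default_args.py and config.py.

# hypothetical sketch of the shared option pattern; the real definitions and
# defaults are in utils/default_args.py and config.py
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-dataset', type=str, default='cifar10')     # cifar10 / gtsrb / imagenet / ember
parser.add_argument('-poison_type', type=str, default='badnet')  # attack to simulate, e.g. badnet, blend
parser.add_argument('-poison_rate', type=float, default=0.003)   # fraction of poisoned training samples
parser.add_argument('-cover_rate', type=float, default=0.0)      # used by WaNet, TaCT and adaptive attacks
args = parser.parse_args()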

Step 1: Create a poisoned training set.

python create_poisoned_set.py -dataset=cifar10 -poison_type=badnet -poison_rate=0.003

Step 2: Train a backdoored model on this poisoned training set.

python train_on_poisoned_set.py -dataset=cifar10 -poison_type=badnet -poison_rate=0.003

This step requires ~0.5 A100 GPU hour. The model checkpoint will be automatically saved to poisoned_train_set/cifar10/badnet_0.003_poison_seed=0/full_base_aug_seed=2333.pt.

After training, you may evaluate the trained model's performance (ACC & ASR) via:

python test_model.py -dataset=cifar10 -poison_type=badnet -poison_rate=0.003
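
Conceptually, ACC is the accuracy on the clean validation split, while ASR is the fraction of trigger-stamped inputs from non-target classes that the backdoored model classifies as the attacker's target class. A stripped-down sketch of such an evaluation (with hypothetical helpers apply_trigger, clean_loader and target_class, not the artifact's actual API) is:

# conceptual sketch of ACC/ASR evaluation; `apply_trigger`, `clean_loader`
# and `target_class` are hypothetical stand-ins, not the artifact's actual API
import torch

def evaluate(model, clean_loader, apply_trigger, target_class, device='cuda'):
    model.eval()
    correct, total, hits, candidates = 0, 0, 0, 0
    with torch.no_grad():
        for x, y in clean_loader:
            x, y = x.to(device), y.to(device)
            # ACC: accuracy on clean validation inputs
            pred = model(x).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
            # ASR: among non-target-class inputs, fraction pushed to the target class by the trigger
            mask = y != target_class
            if mask.any():
                pred_bd = model(apply_trigger(x[mask])).argmax(dim=1)
                hits += (pred_bd == target_class).sum().item()
                candidates += mask.sum().item()
    return correct / total, hits / max(candidates, 1)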

You may also visualize the latent space of the backdoored model (as in Fig 2) w.r.t. clean and poison samples via:

python visualize.py -method=tsne -dataset=cifar10 -poison_type=badnet -poison_rate=0.003
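
This projects the penultimate-layer features of clean and poison samples into 2D. A minimal sketch of such a t-SNE plot, assuming scikit-learn and matplotlib are available and that features (an N x D array of penultimate-layer activations) and is_poison (a boolean mask built from poison_indices) have already been extracted, could look like:

# minimal t-SNE sketch; `features` (N x D penultimate-layer activations) and
# `is_poison` (boolean mask derived from poison_indices) are assumed to be precomputed
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

emb = TSNE(n_components=2, init='pca', random_state=0).fit_transform(features)
plt.scatter(emb[~is_poison, 0], emb[~is_poison, 1], s=2, label='clean')
plt.scatter(emb[is_poison, 0], emb[is_poison, 1], s=2, label='poison')
plt.legend()
plt.savefig('assets/tsne_badnet.png')   # hypothetical output path under the assets directory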

Step 3: Defend against the BadNet attack.

To launch our confusion training defense, run:

# Cleanse the poisoned training set (results in Table 1)
python ct_cleanser.py -dataset=cifar10 -poison_type=badnet -poison_rate=0.003 -devices=0,1 -debug_info

# Retrain a benign model on the cleansed training set (results in Table 2)
python train_on_cleansed_set.py -cleanser=CT -dataset=cifar10 -poison_type=badnet -poison_rate=0.003

The first command (confusion training) requires ~1.5 A100 GPU hours, and the second command (retrain) requires ~0.5 A100 GPU hour.

To launch baseline defenses (poison set cleansers), run:

# Cleanse the poisoned training set (results in Table 1)
python other_cleanser.py -cleanser=$CLEANSER -dataset=cifar10 -poison_type=badnet -poison_rate=0.003 # $CLEANSER = ['SCAn', 'AC', 'SS', 'Strip', 'SPECTRE', 'SentiNet', 'Frequency']

# Retrain a benign model on the cleansed training set (results in Table 2)
python train_on_cleansed_set.py -cleanser=$CLEANSER -dataset=cifar10 -poison_type=badnet -poison_rate=0.003

The first command (other poison set cleansers) generally requires only minutes of GPU time, except the 'SentiNet' defense, which requires >15 A100 GPU hours. The second command (retraining) again requires ~0.5 A100 GPU hour.

To launch the other baseline defenses (not poison set cleansers), run:

# (results in Table 2)
python other_defense.py -defense=$DEFENSE -dataset=cifar10 -poison_type=badnet -poison_rate=0.003 # $DEFENSE = ['ABL', 'NC', 'NAD', 'FP']

where each of these defenses requires <0.5 A100 GPU hours.

Reproduction Commands

Major Results on CIFAR10 and GTSRB (Table 1 & Table 2)

The following snippet includes all commands needed to reproduce our major results on CIFAR10 and GTSRB, corresponding to all entries (11 attacks x 12 defenses on CIFAR10, 9 attacks x 12 defenses on GTSRB) in Table 1 and Table 2. Completing all of these commands requires roughly >100 A100 GPU hours in total. If your resources are limited, we suggest subsampling a few attacks (e.g. badnet, blend, trojan, clean_label) and running experiments only for them.

python create_poisoned_set.py -dataset cifar10 -poison_type none
python create_poisoned_set.py -dataset cifar10 -poison_type badnet -poison_rate 0.003
python create_poisoned_set.py -dataset cifar10 -poison_type blend -poison_rate 0.003
python create_poisoned_set.py -dataset cifar10 -poison_type trojan -poison_rate 0.003
python create_poisoned_set.py -dataset cifar10 -poison_type clean_label -poison_rate 0.003
python create_poisoned_set.py -dataset cifar10 -poison_type SIG -poison_rate 0.02
python create_poisoned_set.py -dataset cifar10 -poison_type dynamic -poison_rate 0.003
python create_poisoned_set.py -dataset cifar10 -poison_type ISSBA -poison_rate 0.02
python create_poisoned_set.py -dataset cifar10 -poison_type WaNet -poison_rate 0.05 -cover_rate 0.1
python create_poisoned_set.py -dataset cifar10 -poison_type TaCT -poison_rate 0.003 -cover_rate 0.003
python create_poisoned_set.py -dataset cifar10 -poison_type adaptive_patch -poison_rate 0.003 -cover_rate 0.006
python create_poisoned_set.py -dataset cifar10 -poison_type adaptive_blend -poison_rate 0.003 -cover_rate 0.003 -alpha 0.15
python create_poisoned_set.py -dataset gtsrb -poison_type none
python create_poisoned_set.py -dataset gtsrb -poison_type badnet -poison_rate 0.01
python create_poisoned_set.py -dataset gtsrb -poison_type blend -poison_rate 0.01
python create_poisoned_set.py -dataset gtsrb -poison_type trojan -poison_rate 0.01
python create_poisoned_set.py -dataset gtsrb -poison_type SIG -poison_rate 0.02
python create_poisoned_set.py -dataset gtsrb -poison_type dynamic -poison_rate 0.003
python create_poisoned_set.py -dataset gtsrb -poison_type WaNet -poison_rate 0.05 -cover_rate 0.1
python create_poisoned_set.py -dataset gtsrb -poison_type TaCT -poison_rate 0.005 -cover_rate 0.005
python create_poisoned_set.py -dataset gtsrb -poison_type adaptive_patch -poison_rate 0.005 -cover_rate 0.01
python create_poisoned_set.py -dataset gtsrb -poison_type adaptive_blend -poison_rate 0.003 -cover_rate 0.003 -alpha 0.15


## the following commands produce the first column of Table 2 (No Defense)
python train_on_poisoned_set.py -dataset cifar10 -poison_type none
python train_on_poisoned_set.py -dataset cifar10 -poison_type badnet -poison_rate 0.003
python train_on_poisoned_set.py -dataset cifar10 -poison_type blend -poison_rate 0.003
python train_on_poisoned_set.py -dataset cifar10 -poison_type trojan -poison_rate 0.003
python train_on_poisoned_set.py -dataset cifar10 -poison_type clean_label -poison_rate 0.003
python train_on_poisoned_set.py -dataset cifar10 -poison_type SIG -poison_rate 0.02
python train_on_poisoned_set.py -dataset cifar10 -poison_type dynamic -poison_rate 0.003
python train_on_poisoned_set.py -dataset cifar10 -poison_type ISSBA -poison_rate 0.02
python train_on_poisoned_set.py -dataset cifar10 -poison_type WaNet -poison_rate 0.05 -cover_rate 0.1
python train_on_poisoned_set.py -dataset cifar10 -poison_type TaCT -poison_rate 0.003 -cover_rate 0.003
python train_on_poisoned_set.py -dataset cifar10 -poison_type adaptive_patch -poison_rate 0.003 -cover_rate 0.006
python train_on_poisoned_set.py -dataset cifar10 -poison_type adaptive_blend -poison_rate 0.003 -cover_rate 0.003 -alpha 0.15 -test_alpha 0.2
python train_on_poisoned_set.py -dataset gtsrb -poison_type none
python train_on_poisoned_set.py -dataset gtsrb -poison_type badnet -poison_rate 0.01
python train_on_poisoned_set.py -dataset gtsrb -poison_type blend -poison_rate 0.01
python train_on_poisoned_set.py -dataset gtsrb -poison_type trojan -poison_rate 0.01
python train_on_poisoned_set.py -dataset gtsrb -poison_type SIG -poison_rate 0.02
python train_on_poisoned_set.py -dataset gtsrb -poison_type dynamic -poison_rate 0.003
python train_on_poisoned_set.py -dataset gtsrb -poison_type WaNet -poison_rate 0.05 -cover_rate 0.1
python train_on_poisoned_set.py -dataset gtsrb -poison_type TaCT -poison_rate 0.005 -cover_rate 0.005
python train_on_poisoned_set.py -dataset gtsrb -poison_type adaptive_patch -poison_rate 0.005 -cover_rate 0.01
python train_on_poisoned_set.py -dataset gtsrb -poison_type adaptive_blend -poison_rate 0.003 -cover_rate 0.003 -alpha 0.15 -test_alpha 0.2


## the following commands produce results of Table 1 (TPR and FPR)
### CT (ours), the last column
python ct_cleanser.py -dataset cifar10 -poison_type none -devices=0,1 -debug_info
python ct_cleanser.py -dataset cifar10 -poison_type badnet -poison_rate 0.003 -devices=0,1 -debug_info
python ct_cleanser.py -dataset cifar10 -poison_type blend -poison_rate 0.003 -devices=0,1 -debug_info
python ct_cleanser.py -dataset cifar10 -poison_type trojan -poison_rate 0.003 -devices=0,1 -debug_info
python ct_cleanser.py -dataset cifar10 -poison_type clean_label -poison_rate 0.003 -devices=0,1 -debug_info
python ct_cleanser.py -dataset cifar10 -poison_type SIG -poison_rate 0.02 -devices=0,1 -debug_info
python ct_cleanser.py -dataset cifar10 -poison_type dynamic -poison_rate 0.003 -devices=0,1 -debug_info
python ct_cleanser.py -dataset cifar10 -poison_type ISSBA -poison_rate 0.02 -devices=0,1 -debug_info
python ct_cleanser.py -dataset cifar10 -poison_type WaNet -poison_rate 0.05 -cover_rate 0.1 -devices=0,1 -debug_info
python ct_cleanser.py -dataset cifar10 -poison_type TaCT -poison_rate 0.003 -cover_rate 0.003 -devices=0,1 -debug_info
python ct_cleanser.py -dataset cifar10 -poison_type adaptive_patch -poison_rate 0.003 -cover_rate 0.006 -devices=0,1 -debug_info
python ct_cleanser.py -dataset cifar10 -poison_type adaptive_blend -poison_rate 0.003 -cover_rate 0.003 -alpha 0.15 -test_alpha 0.2 -devices=0,1 -debug_info
python ct_cleanser.py -dataset gtsrb -poison_type none -devices=0,1 -debug_info
python ct_cleanser.py -dataset gtsrb -poison_type badnet -poison_rate 0.01 -devices=0,1 -debug_info
python ct_cleanser.py -dataset gtsrb -poison_type blend -poison_rate 0.01 -devices=0,1 -debug_info
python ct_cleanser.py -dataset gtsrb -poison_type trojan -poison_rate 0.01 -devices=0,1 -debug_info
python ct_cleanser.py -dataset gtsrb -poison_type SIG -poison_rate 0.02 -devices=0,1 -debug_info
python ct_cleanser.py -dataset gtsrb -poison_type dynamic -poison_rate 0.003 -devices=0,1 -debug_info
python ct_cleanser.py -dataset gtsrb -poison_type WaNet -poison_rate 0.05 -cover_rate 0.1 -devices=0,1 -debug_info
python ct_cleanser.py -dataset gtsrb -poison_type TaCT -poison_rate 0.005 -cover_rate 0.005 -devices=0,1 -debug_info
python ct_cleanser.py -dataset gtsrb -poison_type adaptive_patch -poison_rate 0.005 -cover_rate 0.01 -devices=0,1 -debug_info
python ct_cleanser.py -dataset gtsrb -poison_type adaptive_blend -poison_rate 0.003 -cover_rate 0.003 -alpha 0.15 -test_alpha 0.2 -devices=0,1 -debug_info
### other baseline defense columns
for CLEANSER in SentiNet STRIP SS AC Frequency SCAn SPECTRE
do
    python other_cleanser.py -cleanser $CLEANSER -dataset cifar10 -poison_type none
    python other_cleanser.py -cleanser $CLEANSER -dataset cifar10 -poison_type badnet -poison_rate 0.003
    python other_cleanser.py -cleanser $CLEANSER -dataset cifar10 -poison_type blend -poison_rate 0.003
    python other_cleanser.py -cleanser $CLEANSER -dataset cifar10 -poison_type trojan -poison_rate 0.003
    python other_cleanser.py -cleanser $CLEANSER -dataset cifar10 -poison_type clean_label -poison_rate 0.003
    python other_cleanser.py -cleanser $CLEANSER -dataset cifar10 -poison_type SIG -poison_rate 0.02
    python other_cleanser.py -cleanser $CLEANSER -dataset cifar10 -poison_type dynamic -poison_rate 0.003
    python other_cleanser.py -cleanser $CLEANSER -dataset cifar10 -poison_type ISSBA -poison_rate 0.02
    python other_cleanser.py -cleanser $CLEANSER -dataset cifar10 -poison_type WaNet -poison_rate 0.05 -cover_rate 0.1
    python other_cleanser.py -cleanser $CLEANSER -dataset cifar10 -poison_type TaCT -poison_rate 0.003 -cover_rate 0.003
    python other_cleanser.py -cleanser $CLEANSER -dataset cifar10 -poison_type adaptive_patch -poison_rate 0.003 -cover_rate 0.006
    python other_cleanser.py -cleanser $CLEANSER -dataset cifar10 -poison_type adaptive_blend -poison_rate 0.003 -cover_rate 0.003 -alpha 0.15 -test_alpha 0.2
    python other_cleanser.py -cleanser $CLEANSER -dataset gtsrb -poison_type none
    python other_cleanser.py -cleanser $CLEANSER -dataset gtsrb -poison_type badnet -poison_rate 0.01
    python other_cleanser.py -cleanser $CLEANSER -dataset gtsrb -poison_type blend -poison_rate 0.01
    python other_cleanser.py -cleanser $CLEANSER -dataset gtsrb -poison_type trojan -poison_rate 0.01
    python other_cleanser.py -cleanser $CLEANSER -dataset gtsrb -poison_type SIG -poison_rate 0.02
    python other_cleanser.py -cleanser $CLEANSER -dataset gtsrb -poison_type dynamic -poison_rate 0.003
    python other_cleanser.py -cleanser $CLEANSER -dataset gtsrb -poison_type WaNet -poison_rate 0.05 -cover_rate 0.1
    python other_cleanser.py -cleanser $CLEANSER -dataset gtsrb -poison_type TaCT -poison_rate 0.005 -cover_rate 0.005
    python other_cleanser.py -cleanser $CLEANSER -dataset gtsrb -poison_type adaptive_patch -poison_rate 0.005 -cover_rate 0.01
    python other_cleanser.py -cleanser $CLEANSER -dataset gtsrb -poison_type adaptive_blend -poison_rate 0.003 -cover_rate 0.003 -alpha 0.15 -test_alpha 0.2
done

## the following commands produce results of Table 2 (ACC and ASR)
### CT (ours) and baseline defenses (poison set cleansers), corresponding to the last column and columns 2~8
for CLEANSER in CT SentiNet STRIP SS AC Frequency SCAn SPECTRE
do
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset cifar10 -poison_type none
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset cifar10 -poison_type badnet -poison_rate 0.003
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset cifar10 -poison_type blend -poison_rate 0.003
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset cifar10 -poison_type trojan -poison_rate 0.003
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset cifar10 -poison_type clean_label -poison_rate 0.003
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset cifar10 -poison_type SIG -poison_rate 0.02
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset cifar10 -poison_type dynamic -poison_rate 0.003
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset cifar10 -poison_type ISSBA -poison_rate 0.02
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset cifar10 -poison_type WaNet -poison_rate 0.05 -cover_rate 0.1
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset cifar10 -poison_type TaCT -poison_rate 0.003 -cover_rate 0.003
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset cifar10 -poison_type adaptive_patch -poison_rate 0.003 -cover_rate 0.006
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset cifar10 -poison_type adaptive_blend -poison_rate 0.003 -cover_rate 0.003 -alpha 0.15 -test_alpha 0.2
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset gtsrb -poison_type none
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset gtsrb -poison_type badnet -poison_rate 0.01
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset gtsrb -poison_type blend -poison_rate 0.01
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset gtsrb -poison_type trojan -poison_rate 0.01
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset gtsrb -poison_type SIG -poison_rate 0.02
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset gtsrb -poison_type dynamic -poison_rate 0.003
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset gtsrb -poison_type WaNet -poison_rate 0.05 -cover_rate 0.1
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset gtsrb -poison_type TaCT -poison_rate 0.005 -cover_rate 0.005
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset gtsrb -poison_type adaptive_patch -poison_rate 0.005 -cover_rate 0.01
    python train_on_cleansed_set.py -cleanser $CLEANSER -dataset gtsrb -poison_type adaptive_blend -poison_rate 0.003 -cover_rate 0.003 -alpha 0.15 -test_alpha 0.2
done
### other baseline defenses (not poison set cleansers), corresponding to columns 9~12
for DEFENSE in FP NC ABL NAD
do
    python other_defense.py -defense $DEFENSE -dataset cifar10 -poison_type none
    python other_defense.py -defense $DEFENSE -dataset cifar10 -poison_type badnet -poison_rate 0.003
    python other_defense.py -defense $DEFENSE -dataset cifar10 -poison_type blend -poison_rate 0.003
    python other_defense.py -defense $DEFENSE -dataset cifar10 -poison_type trojan -poison_rate 0.003
    python other_defense.py -defense $DEFENSE -dataset cifar10 -poison_type clean_label -poison_rate 0.003
    python other_defense.py -defense $DEFENSE -dataset cifar10 -poison_type SIG -poison_rate 0.02
    python other_defense.py -defense $DEFENSE -dataset cifar10 -poison_type dynamic -poison_rate 0.003
    python other_defense.py -defense $DEFENSE -dataset cifar10 -poison_type ISSBA -poison_rate 0.02
    python other_defense.py -defense $DEFENSE -dataset cifar10 -poison_type WaNet -poison_rate 0.05 -cover_rate 0.1
    python other_defense.py -defense $DEFENSE -dataset cifar10 -poison_type TaCT -poison_rate 0.003 -cover_rate 0.003
    python other_defense.py -defense $DEFENSE -dataset cifar10 -poison_type adaptive_patch -poison_rate 0.003 -cover_rate 0.006
    python other_defense.py -defense $DEFENSE -dataset cifar10 -poison_type adaptive_blend -poison_rate 0.003 -cover_rate 0.003 -alpha 0.15 -test_alpha 0.2
    python other_defense.py -defense $DEFENSE -dataset gtsrb -poison_type none
    python other_defense.py -defense $DEFENSE -dataset gtsrb -poison_type badnet -poison_rate 0.01
    python other_defense.py -defense $DEFENSE -dataset gtsrb -poison_type blend -poison_rate 0.01
    python other_defense.py -defense $DEFENSE -dataset gtsrb -poison_type trojan -poison_rate 0.01
    python other_defense.py -defense $DEFENSE -dataset gtsrb -poison_type SIG -poison_rate 0.02
    python other_defense.py -defense $DEFENSE -dataset gtsrb -poison_type dynamic -poison_rate 0.003
    python other_defense.py -defense $DEFENSE -dataset gtsrb -poison_type WaNet -poison_rate 0.05 -cover_rate 0.1
    python other_defense.py -defense $DEFENSE -dataset gtsrb -poison_type TaCT -poison_rate 0.005 -cover_rate 0.005
    python other_defense.py -defense $DEFENSE -dataset gtsrb -poison_type adaptive_patch -poison_rate 0.005 -cover_rate 0.01
    python other_defense.py -defense $DEFENSE -dataset gtsrb -poison_type adaptive_blend -poison_rate 0.003 -cover_rate 0.003 -alpha 0.15 -test_alpha 0.2
done

Experiments on Ember and ImageNet

Ember (Table 4)

### reserved clean set for Ember
python create_clean_set.py -dataset ember -clean_budget 5000

### Train Backdoored Models (without defense)
python train_on_poisoned_set.py -dataset ember -ember_options none
python train_on_poisoned_set.py -dataset ember -ember_options constrained
python train_on_poisoned_set.py -dataset ember -ember_options unconstrained

### Cleanse Poisoned Dataset with Confusion Training (CT)
python ct_cleanser_ember.py -ember_options=none -debug_info
python ct_cleanser_ember.py -ember_options=constrained -debug_info
python ct_cleanser_ember.py -ember_options=unconstrained -debug_info

### Train on Cleansed Dataset to get a clean model
python train_on_cleansed_set.py -cleanser=CT -dataset=ember -ember_options=none
python train_on_cleansed_set.py -cleanser=CT -dataset=ember -ember_options=constrained
python train_on_cleansed_set.py -cleanser=CT -dataset=ember -ember_options=unconstrained

ImageNet (Table 5)

### reserved clean set for ImageNet
python create_clean_set.py -dataset imagenet -clean_budget 5000

### creating poisoned datasets
python create_poisoned_set_imagenet.py -poison_type none # clean dataset
python create_poisoned_set_imagenet.py -poison_type badnet -poison_rate 0.01 
python create_poisoned_set_imagenet.py -poison_type blend -poison_rate 0.01
python create_poisoned_set_imagenet.py -poison_type trojan -poison_rate 0.01

### Train Backdoored Models (without defense) 
python train_on_poisoned_set.py -dataset=imagenet -poison_type=none -devices=0,1
python train_on_poisoned_set.py -dataset=imagenet -poison_type=badnet -poison_rate=0.01 -devices=0,1
python train_on_poisoned_set.py -dataset=imagenet -poison_type=blend -poison_rate=0.01 -devices=0,1
python train_on_poisoned_set.py -dataset=imagenet -poison_type=trojan -poison_rate=0.01 -devices=0,1

### Cleanse The Poisoned Dataset with Confusion Training (CT)
python ct_cleanser_imagenet.py -poison_type=none -devices=0,1 -debug_info
python ct_cleanser_imagenet.py -poison_type=badnet -poison_rate=0.01 -devices=0,1 -debug_info
python ct_cleanser_imagenet.py -poison_type=blend -poison_rate=0.01 -devices=0,1 -debug_info
python ct_cleanser_imagenet.py -poison_type=trojan -poison_rate=0.01 -devices=0,1 -debug_info


### Train on Cleansed Dataset
python train_on_cleansed_set.py -cleanser=CT -dataset=imagenet -poison_type=none -devices=0,1
python train_on_cleansed_set.py -cleanser=CT -dataset=imagenet -poison_type=badnet -poison_rate=0.01 -devices=0,1
python train_on_cleansed_set.py -cleanser=CT -dataset=imagenet -poison_type=blend -poison_rate=0.01 -devices=0,1
python train_on_cleansed_set.py -cleanser=CT -dataset=imagenet -poison_type=trojan -poison_rate=0.01 -devices=0,1