Revisiting the Assumption of Latent Separability for Backdoor Defenses (ICLR 2023)

Official repository for Revisiting the Assumption of Latent Separability for Backdoor Defenses (ICLR 2023).

Refer to https://github.com/vtu81/backdoor-toolbox for a more comprehensive backdoor research code repository, which includes our adaptive attacks alongside various other attacks and defenses.

Attacks

Our proposed adaptive attacks:

  • adaptive_blend: Adap-Blend attack with a single blending trigger
  • adaptive_patch: Adap-Patch attack with k different patch triggers
  • adaptive_k_way: Adap-K-Way attack, adaptive version of the k-way attack

Some other baselines include:

  • none: no attack
  • badnet: basic BadNets attack with a patch trigger
  • blend: basic attack with a single blending trigger

See poison_tool_box/ for details.
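For intuition about the blend-style attacks above: a poison sample is typically created by alpha-blending a fixed trigger pattern into a clean image and relabeling it to the target class, while the adaptive attacks additionally plant "cover" samples that carry the trigger but keep their original labels (the -cover_rate option used below). The following is only a minimal sketch of that idea with placeholder alpha, trigger, and target class; the actual trigger construction lives in poison_tool_box/.

import numpy as np

def blend_trigger(image, trigger, alpha=0.2):
    """Alpha-blend a fixed trigger pattern into an image (both float arrays in [0, 1])."""
    return (1.0 - alpha) * image + alpha * trigger

def build_poisoned_set(images, labels, trigger, target_class,
                       poison_rate=0.003, cover_rate=0.003, alpha=0.2, seed=0):
    """Sketch: plant relabeled poison samples plus correctly-labeled cover samples."""
    rng = np.random.default_rng(seed)
    n = len(images)
    picked = rng.choice(n, size=int(n * (poison_rate + cover_rate)), replace=False)
    poison_idx = picked[:int(n * poison_rate)]   # trigger applied, label flipped to the target
    cover_idx = picked[int(n * poison_rate):]    # trigger applied, label left untouched
    images, labels = images.copy(), labels.copy()
    for i in poison_idx:
        images[i] = blend_trigger(images[i], trigger, alpha)
        labels[i] = target_class
    for i in cover_idx:
        images[i] = blend_trigger(images[i], trigger, alpha)
    return images, labels, poison_idx, cover_idx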

Defenses

We also include a number of backdoor defenses, covering both poison-sample cleansers and other types of defenses. See other_cleansers/ and other_defenses/ for details.

Poison Cleansers

Poison-sample cleansers available through the -cleanser option of other_cleanser.py (see Quick Start): SCAn, AC (activation clustering), SS (spectral signature), Strip, and SPECTRE.

Other Defenses

Other backdoor defenses available through the -defense option of other_defense.py (see Quick Start): ABL (anti-backdoor learning), NC (Neural Cleanse), STRIP, and FP (fine-pruning).

Visualization

Visualize the latent space of backdoor models. See visualize.py.
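The script supports the 'pca', 'tsne', and 'oracle' views used in the Quick Start below. As a rough stand-alone sketch of the same kind of plot, assuming you have already extracted penultimate-layer features and know which training samples are poisoned (the function below is illustrative and not part of this repository):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def plot_latent_space(features, is_poison, method="tsne", out_path="latents.png"):
    """Project latent features to 2D and color clean vs. poison samples."""
    if method == "pca":
        emb = PCA(n_components=2).fit_transform(features)
    else:  # "tsne"
        emb = TSNE(n_components=2, init="pca").fit_transform(features)
    is_poison = np.asarray(is_poison, dtype=bool)
    plt.scatter(emb[~is_poison, 0], emb[~is_poison, 1], s=2, label="clean")
    plt.scatter(emb[is_poison, 0], emb[is_poison, 1], s=2, label="poison")
    plt.legend()
    plt.savefig(out_path, dpi=200)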

Quick Start

Take launching and defending against an Adap-Blend attack as an example:

# Create a clean set (for testing and some defenses)
python create_clean_set.py -dataset=cifar10

# Create a poisoned training set
python create_poisoned_set.py -dataset=cifar10 -poison_type=adaptive_blend -poison_rate=0.003 -cover_rate=0.003
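Both -poison_rate and -cover_rate are fractions of the training set, so on CIFAR-10 (50,000 training images) the values above correspond to roughly 150 poison samples and 150 cover samples:

train_size = 50_000                          # CIFAR-10 training set
poison_rate = cover_rate = 0.003
num_poison = int(train_size * poison_rate)   # 150 samples: trigger applied, relabeled to the target class
num_cover = int(train_size * cover_rate)     # 150 samples: trigger applied, original label kept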

# Train on the poisoned training set
python train_on_poisoned_set.py -dataset=cifar10 -poison_type=adaptive_blend -poison_rate=0.003 -cover_rate=0.003
python train_on_poisoned_set.py -dataset=cifar10 -poison_type=adaptive_blend -poison_rate=0.003 -cover_rate=0.003 -no_aug
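Nothing backdoor-specific happens during training: the backdoor is planted entirely by the poisoned data, and the model is trained with ordinary supervised learning (the -no_aug variant simply disables data augmentation). Below is a minimal sketch under assumed hyperparameters; the actual architectures and settings are configured in config.py.

import torch
import torch.nn as nn

def train(model, poisoned_loader, epochs=100, device="cuda"):
    """Plain cross-entropy training on the poisoned training set."""
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs)
    crit = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in poisoned_loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            crit(model(x), y).backward()
            opt.step()
        sched.step()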

# Visualize
## $METHOD = ['pca', 'tsne', 'oracle']
python visualize.py -method=$METHOD -dataset=cifar10 -poison_type=adaptive_blend -poison_rate=0.003 -cover_rate=0.003

# Cleanse poison train set with cleansers
## $CLEANSER = ['SCAn', 'AC', 'SS', 'Strip', 'SPECTRE']
## These cleansers analyze a trained model, so train the poisoned backdoor model first.
python other_cleanser.py -cleanser=$CLEANSER -dataset=cifar10 -poison_type=adaptive_blend -poison_rate=0.003 -cover_rate=0.003
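For intuition, cleansers such as 'SS' (spectral signatures) rely on the latent-separability assumption this paper revisits: poison samples are expected to stand out along the top principal direction of the target class's latent features. A rough sketch of that scoring idea (not the implementation in other_cleansers/):

import numpy as np

def spectral_scores(features):
    """Score samples of one class by correlation with the top singular
    direction of their centered latent (penultimate-layer) features."""
    centered = features - features.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return np.abs(centered @ vt[0])

def flag_suspects(features, removal_ratio=0.05):
    """Flag the highest-scoring fraction of samples as suspected poisons."""
    scores = spectral_scores(features)
    k = int(len(scores) * removal_ratio)
    return np.argsort(scores)[-k:]

Our adaptive attacks are designed to weaken exactly this kind of separation, which is why such cleansers are the main baselines here.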

# Retrain on cleansed set
## $CLEANSER = ['SCAn', 'AC', 'SS', 'Strip', 'SPECTRE']
python train_on_cleansed_set.py -cleanser=$CLEANSER -dataset=cifar10 -poison_type=adaptive_blend -poison_rate=0.003 -cover_rate=0.003
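Conceptually, retraining on the cleansed set just means removing the indices a cleanser flagged and training again; the script above handles this, but a minimal sketch with torch.utils.data.Subset looks like:

from torch.utils.data import DataLoader, Subset

def cleansed_loader(train_set, suspicious_indices, batch_size=128):
    """DataLoader over the training set with the flagged samples removed."""
    suspicious = {int(i) for i in suspicious_indices}
    kept = [i for i in range(len(train_set)) if i not in suspicious]
    return DataLoader(Subset(train_set, kept), batch_size=batch_size, shuffle=True)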

# Other defenses
## $DEFENSE = ['ABL', 'NC', 'STRIP', 'FP']
## Except for 'ABL', you need to train a poisoned backdoor model first.
python other_defense.py -defense=$DEFENSE -dataset=cifar10 -poison_type=adaptive_blend -poison_rate=0.003 -cover_rate=0.003
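As one example of how these test-time defenses work, STRIP superimposes an incoming input with random clean images and flags it as backdoored if the model's predictions remain confident (low entropy) despite the perturbation. A minimal sketch of that idea, not the configuration used by other_defense.py:

import torch
import torch.nn.functional as F

@torch.no_grad()
def strip_entropy(model, x, clean_images, n_overlay=32, alpha=0.5):
    """Average prediction entropy of a single input x blended with random clean images.
    Triggered inputs tend to keep a suspiciously low entropy."""
    idx = torch.randint(len(clean_images), (n_overlay,))
    overlays = alpha * x.unsqueeze(0) + (1 - alpha) * clean_images[idx]
    probs = F.softmax(model(overlays), dim=1)
    return (-(probs * torch.log(probs + 1e-12)).sum(dim=1)).mean().item()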

Notice:

Some other poisoning attacks we compare against in our paper:

# No Poison
python create_poisoned_set.py -dataset=cifar10 -poison_type=none -poison_rate=0
# BadNet
python create_poisoned_set.py -dataset=cifar10 -poison_type=badnet -poison_rate=0.003
# Blend
python create_poisoned_set.py -dataset=cifar10 -poison_type=blend -poison_rate=0.003
# Adaptive Patch
python create_poisoned_set.py -dataset=cifar10 -poison_type=adaptive_patch -poison_rate=0.003 -cover_rate=0.006
# Adaptive K Way
python create_poisoned_set.py -dataset=cifar10 -poison_type=adaptive_k_way -poison_rate=0.003 -cover_rate=0.003
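For reference, a BadNets-style patch trigger simply stamps a small fixed pattern onto a corner of the image before relabeling it; the actual trigger patterns and placements live in poison_tool_box/, so the snippet below is only an illustration:

import numpy as np

def stamp_patch(image, patch):
    """Paste a small trigger patch into the bottom-right corner of an HxWxC image."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[-h:, -w:] = patch
    return out

# e.g. a 3x3 white square on a CIFAR-10 image scaled to [0, 1]:
# poisoned = stamp_patch(clean_image, np.ones((3, 3, 3)))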

You can also:

  • train a vanilla model via
    python train_vanilla.py
  • test a trained model via
    python test_model.py -dataset=cifar10 -poison_type=adaptive_blend -poison_rate=0.003 -cover_rate=0.003
    # other options include: -no_aug, -cleanser=$CLEANSER, -model_path=$MODEL_PATH; see our code for details
    (a minimal sketch for computing clean accuracy and attack success rate by hand follows this list)
  • enforce a fixed running seed via the -seed=$SEED option
  • change the dataset to GTSRB via the -dataset=gtsrb option
  • change model architectures in config.py
  • configure hyperparameters of other defenses in other_defense.py
  • see more configurations in config.py
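The two numbers test_model.py is usually read for are clean accuracy on the untouched test set and the attack success rate (ASR): the fraction of non-target-class test inputs that the model classifies as the target class once the trigger is applied. A minimal sketch under that definition; apply_trigger stands in for whichever poisoning transform was used and is not a function from this repository:

import torch

@torch.no_grad()
def attack_success_rate(model, test_loader, apply_trigger, target_class, device="cuda"):
    """Fraction of non-target test samples classified as the target class
    after the backdoor trigger is applied."""
    model.eval()
    hits, total = 0, 0
    for x, y in test_loader:
        keep = y != target_class              # ASR is measured on non-target-class samples
        if keep.sum() == 0:
            continue
        preds = model(apply_trigger(x[keep]).to(device)).argmax(dim=1)
        hits += (preds == target_class).sum().item()
        total += int(keep.sum())
    return hits / max(total, 1)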
