Revisiting the Assumption of Latent Separability for Backdoor Defenses (ICLR 2023)

Official repository for Revisiting the Assumption of Latent Separability for Backdoor Defenses (ICLR 2023).

For a more comprehensive backdoor research code repository that includes our adaptive attacks together with various other attacks and defenses, refer to https://github.com/vtu81/backdoor-toolbox.

Attacks

Our proposed adaptive attacks: Adaptive-Blend (adaptive_blend), Adaptive-Patch (adaptive_patch), and Adaptive-K-Way (adaptive_k_way).

Some other baselines include BadNet (badnet), Blend (blend), and a no-poison baseline (none).

See poison_tool_box/ for details.
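For intuition, here is a minimal, self-contained sketch of the core idea behind the adaptive blended attack: besides the usual payload samples (trigger blended in, label flipped to the target class), the attacker also plants "cover" samples that carry the trigger but keep their original labels, which suppresses the latent separation many defenses rely on. The function names, blend opacity, and data layout below are illustrative assumptions; the actual attack (which additionally uses asymmetric trigger opacities between training and test time) lives in poison_tool_box/.

import numpy as np

def blend(img, trigger, alpha):
    # Blend a trigger pattern into an image at opacity alpha (pixel values assumed in [0, 1]).
    return (1 - alpha) * img + alpha * trigger

def build_adaptive_blend_set(images, labels, trigger, target_class,
                             poison_rate=0.003, cover_rate=0.003, alpha=0.2):
    # Illustrative sketch only: pick disjoint payload and cover subsets at random.
    n = len(images)
    idx = np.random.permutation(n)
    n_poison, n_cover = int(n * poison_rate), int(n * cover_rate)
    payload_idx = idx[:n_poison]
    cover_idx = idx[n_poison:n_poison + n_cover]

    poisoned_images = images.astype(np.float32).copy()
    poisoned_labels = labels.copy()
    for i in payload_idx:
        # Payload samples: trigger blended in, label flipped to the target class.
        poisoned_images[i] = blend(poisoned_images[i], trigger, alpha)
        poisoned_labels[i] = target_class
    for i in cover_idx:
        # Cover (regularization) samples: trigger blended in, label left unchanged.
        poisoned_images[i] = blend(poisoned_images[i], trigger, alpha)
    return poisoned_images, poisoned_labels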

Defenses

We also include a number of backdoor defenses: poison sample cleansers and other types of defenses. See other_cleansers/ and other_defenses/ for details.

Poison Cleansers

Available cleansers: SCAn, AC (activation clustering), SS (spectral signatures), Strip, and SPECTRE.
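To illustrate what a poison cleanser does, below is a minimal sketch of the spectral-signatures idea behind the SS cleanser: for each class, project the (centered) feature representations onto their top singular direction and flag the samples with the largest squared projections as suspected poisons. The function names, feature layout, and removal fraction are assumptions for illustration; see other_cleansers/ for the actual implementations.

import numpy as np

def spectral_signature_scores(features):
    # features: (n_samples, feature_dim) penultimate-layer representations of one class.
    centered = features - features.mean(axis=0, keepdims=True)
    # Top right-singular vector of the centered feature matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top_direction = vt[0]
    # Outlier score: squared projection onto the top direction.
    return (centered @ top_direction) ** 2

def flag_suspects(features, removal_fraction=0.05):
    # Flag the removal_fraction of samples with the highest outlier scores.
    scores = spectral_signature_scores(features)
    n_remove = max(1, int(len(scores) * removal_fraction))
    return np.argsort(scores)[-n_remove:]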

Other Defenses

Available defenses: ABL (anti-backdoor learning), NC (Neural Cleanse), STRIP, and FP (fine-pruning).
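Likewise, as a rough illustration of a test-time defense, STRIP superimposes the input under inspection with randomly chosen clean images and checks the entropy of the resulting predictions; trigger-carrying inputs tend to keep a low entropy because the trigger keeps dominating the prediction. The sketch below assumes a model_predict callable returning softmax probabilities and a simple 50/50 blend, which are illustrative simplifications of the implementation in other_defenses/.

import numpy as np

def strip_score(model_predict, x, clean_images, n_overlays=16, alpha=0.5):
    # Superimpose x with randomly chosen clean images and average the prediction entropy.
    idx = np.random.choice(len(clean_images), size=n_overlays, replace=False)
    overlays = alpha * x[None] + (1 - alpha) * clean_images[idx]
    probs = model_predict(overlays)  # (n_overlays, n_classes) softmax outputs
    entropies = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return entropies.mean()  # low average entropy -> likely trigger-carrying input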

Visualization

Visualize the latent space of backdoored models. See visualize.py.
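Conceptually, the visualization projects penultimate-layer features of the poisoned training set into 2D (PCA or t-SNE) and colors poison versus clean samples. Below is a minimal sketch of that projection step, assuming a precomputed features array and a boolean poison mask; the actual feature extraction and plotting are in visualize.py.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_latent_space(features, is_poison, out_path="latent_tsne.png"):
    # features: (n_samples, feature_dim) penultimate-layer representations.
    # is_poison: boolean mask marking which samples carry the trigger.
    emb = TSNE(n_components=2, init="pca").fit_transform(features)
    plt.scatter(emb[~is_poison, 0], emb[~is_poison, 1], s=2, label="clean")
    plt.scatter(emb[is_poison, 0], emb[is_poison, 1], s=2, label="poison")
    plt.legend()
    plt.savefig(out_path)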

Quick Start

Take launching and defending against an Adaptive-Blend attack as an example:

# Create a clean set (for testing and some defenses)
python create_clean_set.py -dataset=cifar10

# Create a poisoned training set
python create_poisoned_set.py -dataset=cifar10 -poison_type=adaptive_blend -poison_rate=0.003 -cover_rate=0.003

# Train on the poisoned training set
python train_on_poisoned_set.py -dataset=cifar10 -poison_type=adaptive_blend -poison_rate=0.003 -cover_rate=0.003
python train_on_poisoned_set.py -dataset=cifar10 -poison_type=adaptive_blend -poison_rate=0.003 -cover_rate=0.003 -no_aug

# Visualize
## $METHOD = ['pca', 'tsne', 'oracle']
python visualize.py -method=$METHOD -dataset=cifar10 -poison_type=adaptive_blend -poison_rate=0.003 -cover_rate=0.003

# Cleanse poison train set with cleansers
## $CLEANSER = ['SCAn', 'AC', 'SS', 'Strip', 'SPECTRE']
## These cleansers require a backdoored model trained on the poisoned set first (see above).
python other_cleanser.py -cleanser=$CLEANSER -dataset=cifar10 -poison_type=adaptive_blend -poison_rate=0.003 -cover_rate=0.003

# Retrain on cleansed set
## $CLEANSER = ['SCAn', 'AC', 'SS', 'Strip', 'SPECTRE']
python train_on_cleansed_set.py -cleanser=$CLEANSER -dataset=cifar10 -poison_type=adaptive_blend -poison_rate=0.003 -cover_rate=0.003

# Other defenses
## $DEFENSE = ['ABL', 'NC', 'STRIP', 'FP']
## Except for 'ABL', you need to first train a backdoored model on the poisoned set (see above).
python other_defense.py -defense=$DEFENSE -dataset=cifar10 -poison_type=adaptive_blend -poison_rate=0.003 -cover_rate=0.003

Notice:

Some other poisoning attacks we compare in our paper:

# No Poison
python create_poisoned_set.py -dataset=cifar10 -poison_type=none -poison_rate=0
# BadNet
python create_poisoned_set.py -dataset=cifar10 -poison_type=badnet -poison_rate=0.003
# Blend
python create_poisoned_set.py -dataset=cifar10 -poison_type=blend -poison_rate=0.003
# Adaptive Patch
python create_poisoned_set.py -dataset=cifar10 -poison_type=adaptive_patch -poison_rate=0.003 -cover_rate=0.006
# Adaptive K Way
python create_poisoned_set.py -dataset=cifar10 -poison_type=adaptive_k_way -poison_rate=0.003 -cover_rate=0.003

You can also: