This repository contains the code for our paper "Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression" (ICML 2023).

Overview

Our paper verifies that 12 state-of-the-art Perturbative Availability Poisoning (PAP) methods are vulnerable to Image Shortcut Squeezing (ISS), which is based on simple compression (i.e., grayscale and JPEG compression). For example, on average, ISS restores the CIFAR-10 model accuracy to 81.73%, surpassing the previous best preprocessing-based countermeasure by 37.97% in absolute terms. We hope that future studies will consider such (simple) countermeasures during the development of new poisoning methods.
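As a rough illustration of the two compressions ISS relies on, the sketch below applies grayscale and JPEG-10 squeezing to a single image with Pillow. This is only a minimal re-implementation of the idea, not the code used by main.py, and the file names are placeholders.

```python
# Minimal sketch of the two ISS compressions applied to one image (Pillow-based,
# illustrative only; not the implementation used by main.py in this repository).
import io
from PIL import Image

def grayscale_squeeze(img: Image.Image) -> Image.Image:
    # Collapse RGB to luminance, then replicate back to 3 channels so the
    # image still matches a standard RGB model input.
    return img.convert("L").convert("RGB")

def jpeg_squeeze(img: Image.Image, quality: int = 10) -> Image.Image:
    # Encode and immediately decode at a low JPEG quality (e.g., 10), which
    # discards much of the high-frequency perturbation.
    buffer = io.BytesIO()
    img.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer).convert("RGB")

if __name__ == "__main__":
    poisoned = Image.open("poisoned_example.png").convert("RGB")  # placeholder file name
    jpeg_squeeze(poisoned, quality=10).save("jpeg10_example.png")
    grayscale_squeeze(poisoned).save("grayscale_example.png")
```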

Categorization of existing poisoning methods

We carry out a systematic analysis of compression-based countermeasures for PAP. We identify a strong dependency of the perturbation frequency patterns on the properties of the surrogate model: perturbations generated on slightly-trained surrogates exhibit spatially low-frequency patterns, while perturbations generated on fully-trained surrogates exhibit spatially high-frequency patterns, as shown in the figure below.

<img src="/images/examples.png" alt="examples">
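A minimal way to reproduce this kind of inspection is to look at the magnitude spectrum of the perturbation, i.e., the difference between a poisoned image and its clean counterpart. The sketch below is our own illustrative analysis script (file names are placeholders), not part of this repository.

```python
# Sketch: inspect the spatial-frequency content of a perturbation via its 2D FFT.
# Illustrative only; the file names are placeholders and this script is not part
# of the repository.
import numpy as np
from PIL import Image

clean = np.asarray(Image.open("clean_example.png").convert("L"), dtype=np.float32)
poisoned = np.asarray(Image.open("poisoned_example.png").convert("L"), dtype=np.float32)
perturbation = poisoned - clean

# Shift the zero-frequency component to the center so low frequencies sit in the
# middle of the spectrum and high frequencies toward the edges.
spectrum = np.fft.fftshift(np.fft.fft2(perturbation))
magnitude = np.log1p(np.abs(spectrum))

# Energy concentrated near the center suggests a low-frequency perturbation
# (slightly-trained surrogates); energy spread toward the edges suggests a
# high-frequency perturbation (fully-trained surrogates).
Image.fromarray((255 * magnitude / magnitude.max()).astype(np.uint8)).save("spectrum.png")
```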

Evaluation results of ISS against 12 existing PAP methods (CIFAR-10 test accuracy, %):

| Poisons \ ISS | w/o | Grayscale | JPEG-10 |
|---|---|---|---|
| Clean (no poison) | 94.68 | 92.41 | 85.38 |
| Deep Confuse $(L_{\infty} = 8)$ | 16.30 | 93.07 | 81.84 |
| NTGA $(L_{\infty} = 8)$ | 42.46 | 74.32 | 69.49 |
| EM $(L_{\infty} = 8)$ | 21.05 | 93.01 | 81.50 |
| REM $(L_{\infty} = 8)$ | 25.44 | 92.84 | 81.50 |
| ShortcutGen $(L_{\infty} = 8)$ | 33.05 | 86.42 | 79.49 |
| TensorClog $(L_{\infty} = 8)$ | 88.70 | 79.75 | 85.29 |
| Hypocritical $(L_{\infty} = 8)$ | 71.54 | 61.86 | 85.45 |
| TAP $(L_{\infty} = 8)$ | 8.17 | 9.11 | 83.87 |
| SEP $(L_{\infty} = 8)$ | 3.85 | 3.57 | 84.37 |
| LSP $(L_{2} = 1.0)$ | 19.07 | 82.47 | 83.01 |
| AR $(L_{2} = 1.0)$ | 13.28 | 34.04 | 85.15 |
| OPS $(L_{0} = 1)$ | 36.55 | 42.44 | 82.53 |

How to apply ISS to poisons?

Prepare the poisoned images as .png files in the folder PATH/TO/POISON_FOLDER, following the order of the original CIFAR-10 dataset.
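If your poisoned images live in a tensor or array, the sketch below shows one way to dump them as indexed .png files. The input file, its layout, and the zero-padded naming scheme are assumptions; adapt the naming to whatever the loader in main.py expects.

```python
# Sketch: save an array of poisoned images as .png files in CIFAR-10 order.
# The input file, its assumed shape (50000, 32, 32, 3) in uint8, and the
# zero-padded file-name pattern are assumptions; adapt them to main.py's loader.
import os
import numpy as np
from PIL import Image

poisons = np.load("poisoned_cifar10.npy")  # placeholder file name
out_dir = "PATH/TO/POISON_FOLDER"
os.makedirs(out_dir, exist_ok=True)

for idx, img in enumerate(poisons):
    # The i-th file corresponds to the i-th image of the original CIFAR-10 training set.
    Image.fromarray(img).save(os.path.join(out_dir, f"{idx:05d}.png"))
```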

To train on grayscaled poisons:

python main.py --exp_type $TYPEOFPOISONS --poison_path PATH/TO/POISON_FOLDER --poison_rate 1 --net resnet18 --grayscale True --exp_path PATH/TO/SAVE/RESULTS/

To train on JPEG compressed poisons:

python main.py --exp_type $TYPEOFPOISONS --poison_path PATH/TO/POISON_FOLDER --poison_rate 1 --net resnet18 --jpeg 10 --exp_path PATH/TO/SAVE/RESULTS/
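Conceptually, the --grayscale and --jpeg options insert a compression step before the usual CIFAR-10 training augmentations (the crop-and-flip augmentations below are assumed for illustration). The torchvision-style transforms below only sketch that idea; the actual handling of these flags lives inside main.py.

```python
# Sketch of the preprocessing implied by --grayscale True and --jpeg 10, expressed
# as torchvision-style transforms. Illustrative only; the real handling of the
# flags is inside main.py, and the augmentations are assumed.
import io
from PIL import Image
from torchvision import transforms

class JpegCompression:
    """Compress-decompress a PIL image at a fixed JPEG quality."""
    def __init__(self, quality: int = 10):
        self.quality = quality

    def __call__(self, img: Image.Image) -> Image.Image:
        buffer = io.BytesIO()
        img.save(buffer, format="JPEG", quality=self.quality)
        buffer.seek(0)
        return Image.open(buffer).convert("RGB")

# Grayscale variant: squeeze to one channel, keep 3 channels for the ResNet input.
grayscale_train_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# JPEG-10 variant: compress-decompress at quality 10 before the standard augmentations.
jpeg_train_transform = transforms.Compose([
    JpegCompression(quality=10),
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
```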

A quick-start example:

We provide an example of applying ISS to CIFAR-10 poisons generated by Targeted Adversarial Poisoning (TAP). The poisons were generated using the official TAP GitHub repository, and the poisoned images are included in data/TAP/.

Running bash train.sh will start training on the TAP poisons with JPEG-10; the results can be found in experiments/TAP/jpeg10/.

Classification performance when training solely on TAP poisons can be checked by running:

python main.py --poison_type TAP --exp_path ./experiments/TAP/TAP_poisoned --poison_path ./data/TAP/

Cite our work:

Please cite our paper if you use this implementation in your research.

@misc{liu2023image,
      title={Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression}, 
      author={Zhuoran Liu and Zhengyu Zhao and Martha Larson},
      year={2023},
      eprint={2301.13838},
      archivePrefix={arXiv}
}

Acknowledgement:

The training code is adapted from kuangliu's repository Train CIFAR10 with PyTorch.