# PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning
<div align="center"> <img src="Images/Fig1.png" width="800px" /> </div>
This is the code used in the paper PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning.
If you use the code or find this project helpful, please consider citing our paper.
```
@article{yang2020patchattack,
  title={PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning},
  author={Yang, Chenglin and Kortylewski, Adam and Xie, Cihang and Cao, Yinzhi and Yuille, Alan},
  journal={arXiv preprint arXiv:2004.05682},
  year={2020}
}
```
## Requirements
- python 3.6
- pytorch 1.4.0
- easydict
- opencv
- matplotlib
- scikit-learn
- tqdm
- kornia 0.2.2
- jupyter (for PatchAttack_tutorial.ipynb)
## Usage

### Dictionaries
We provide `TextureDict_ImageNet_0.zip` and `TextureDict_ImageNet_1.zip`. Please download, unzip, and merge the two directories; together they constitute the whole texture dictionary used in our paper. Alternatively, you can generate one yourself. First, provide the paths to the train and val folders of the ImageNet dataset by setting `cfg.ImageNet_train_dir` and `cfg.ImageNet_val_dir` in `parser.py`. Second, you can optionally adjust the parameters in `PatchAttack/PatchAttack_config.py` to generate textures with different settings. Then, you can use the following commands to start the generation:
- Build Texture Dictionary:
```shell
# for classes in a range of labels
python main_build-dict.py --gpu 0 --t-data ImageNet --tdict-dir TextureDict --t-labels-range 0 1000

# for classes with specific labels
python main_build-dict.py --gpu 0 --t-data ImageNet --tdict-dir TextureDict --t-labels 23 300 900
```
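The textures in the dictionary come from neural texture synthesis, whose standard descriptor is the Gram matrix of CNN feature maps. As a minimal sketch of that descriptor only (the actual generation code lives in this repository, with its settings in `PatchAttack/PatchAttack_config.py`; the feature map below is a random stand-in, not a real activation):

```python
import torch

def gram_matrix(features):
    # Gram matrix of a (C, H, W) feature map: channel-wise correlations,
    # the texture descriptor used in neural texture synthesis.
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return (f @ f.t()) / (c * h * w)

feats = torch.randn(8, 16, 16)   # stand-in for a CNN activation map
g = gram_matrix(feats)
print(g.shape)  # torch.Size([8, 8])
```

Texture synthesis then optimizes an image so that its Gram matrices match those of a source image across several network layers.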
Additionally, we provide a dictionary of Adversarial Patches generated by a gradient-based method proposed in paper: `AdvPatchDict_ImageNet.zip`. This dictionary was generated with VGG19; the other settings are specified in `PatchAttack/PatchAttack_config.py`. You can change the settings and use the following commands to generate a different dictionary of white-box adversarial patches:
- Build Adversarial Patch Dictionary:

```shell
# for classes in a range of labels
python main_build-dict.py --gpu 0 --arch VGG --depth 19 --t-data ImageNet --dict AdvPatch --tdict-dir AdvPatchDict --t-labels-range 0 1000

# for classes with specific labels
python main_build-dict.py --gpu 0 --arch VGG --depth 19 --t-data ImageNet --dict AdvPatch --tdict-dir AdvPatchDict --t-labels 23 300 900
```
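The core idea behind gradient-based patch generation can be sketched as follows. This is an illustrative simplification, not the repository's implementation: the function name, the fixed-mask pasting, and the toy linear "model" in the usage example are all assumptions for demonstration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_adv_patch(model, patch, images, targets, mask, steps=50, lr=0.05):
    # Optimize the patch pixels so the model predicts the target class;
    # the binary mask fixes where the patch is pasted onto each image.
    patch = patch.clone().requires_grad_(True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        x = images * (1 - mask) + patch * mask       # paste the patch
        loss = F.cross_entropy(model(x), targets)    # targeted attack loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0, 1)                       # keep pixels valid
    return patch.detach()

# toy usage with a tiny stand-in "model" (a single linear layer)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 5))
images = torch.rand(2, 3, 8, 8)
mask = torch.zeros(1, 3, 8, 8)
mask[:, :, 2:6, 2:6] = 1.0
adv = train_adv_patch(model, torch.rand(1, 3, 8, 8), images,
                      torch.tensor([3, 3]), mask, steps=10)
```

In the actual pipeline, VGG19 takes the place of the toy model and the resulting patches are stored per class to form the dictionary.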
### Attacks
Our implementation includes three black-box patch attacks: the Texture-based Patch Attack (TPA) and MonoChrome Patch Attack (MPA) from our paper, and the Metropolis-Hastings Attack (HPA) originally proposed in paper. Besides these, we also implement a white-box patch attack: the Adversarial Patch Attack (AP), originally proposed in paper. You can add the path to the folder `PatchAttack` in this repository to `PYTHONPATH` on your local system and use `PatchAttack` as a package.
- `PatchAttack_tutorial.ipynb` explains how to perform these attacks. The prerequisite for running this tutorial is to download the text file `ImageNet_clsidx_to_labels` to the root directory of this repository. Please refer to the notebook for details.
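To give a feel for what the black-box attacks do, here is a hypothetical sketch of a single query: paste a texture patch at a proposed location and observe only the model's output. The helper name, its signature, and the toy linear "model" are illustrative assumptions, not the package's actual API; each placement the RL agent proposes costs one query of this kind.

```python
import torch
import torch.nn as nn

def query_with_patch(model, image, texture, top_left, size):
    # Hypothetical helper (not the repo's API): paste a square texture
    # patch at `top_left` and make one black-box query to the model.
    r, c = top_left
    patched = image.clone()
    patched[:, :, r:r + size, c:c + size] = texture[:, :, :size, :size]
    with torch.no_grad():
        probs = torch.softmax(model(patched), dim=1)
    conf, pred = probs.max(dim=1)
    return pred.item(), conf.item()

# toy usage: a random image and texture against a stand-in classifier
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
pred, conf = query_with_patch(model, torch.rand(1, 3, 8, 8),
                              torch.rand(1, 3, 8, 8), (2, 2), 4)
```

The attack succeeds when the predicted class changes (untargeted) or reaches a chosen class (targeted) while the patched area stays small.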
### Defenses
In our paper, we evaluate PatchAttack on two defense models: the Denoise Network [paper - code] and the Shape-biased Network [paper - code].
## Acknowledgements
The Grad-CAM part of this code is based on pytorch-grad-cam. A helper function comes from pytorch-classification.