Spectrum Simulation Attack (ECCV'2022 ORAL)

This repository is the official PyTorch implementation of our paper Frequency Domain Model Augmentation for Adversarial Attack. In this paper, we propose a novel spectrum simulation attack to craft more transferable adversarial examples against both normally trained and defense models. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of our method, e.g., attacking nine state-of-the-art defense models with an average success rate of 95.4%.

Motivation

  1. All existing model augmentation methods investigate the relationships among different models in the spatial domain, which may overlook the essential differences between them.
  2. To better uncover the differences among models, we introduce the spectrum saliency map (see Sec. 3.2) from a frequency domain perspective, since the representation of images in this domain has a fixed pattern, e.g., the low-frequency components of an image correspond to its contour.
  3. As illustrated in Figure 1 (d)-(g), the spectrum saliency maps (see Sec. 3.2) of different models vary significantly from each other, which clearly reveals that each model attends to different frequency components of the same image.

==> Motivated by these observations, we consider tuning the spectrum saliency map to simulate more diverse substitute models, thus generating more transferable adversarial examples.
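To make the idea above concrete, the sketch below shows one random spectrum transformation in the spirit of Sec. 3.2: add Gaussian noise in the spatial domain, move to the frequency domain with a DCT, rescale each frequency component by a random mask, and transform back. This is a minimal NumPy/SciPy illustration, not the repository's API; the function name `spectrum_transform` and the default values of `rho` and `sigma` are assumptions for demonstration.

```python
import numpy as np
from scipy.fft import dctn, idctn


def spectrum_transform(x, rho=0.5, sigma=16 / 255, rng=None):
    """One random spectrum transformation (illustrative sketch, not the official code).

    x: image array of shape (C, H, W) with values in [0, 1].
    rho: controls the range of the random spectral mask, U(1 - rho, 1 + rho).
    sigma: std of the Gaussian noise added in the spatial domain.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, sigma, size=x.shape)        # spatial-domain Gaussian noise
    mask = rng.uniform(1 - rho, 1 + rho, size=x.shape)  # random per-frequency rescaling
    spectrum = dctn(x + noise, axes=(-2, -1), norm="ortho")  # 2-D DCT per channel
    return idctn(spectrum * mask, axes=(-2, -1), norm="ortho")  # back to spatial domain
```

Each call simulates a slightly different "substitute model" view of the input; averaging attack gradients over several such transformed copies is what makes the resulting adversarial examples more transferable.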

Requirements

Implementation

Results


Citation

If you find this work useful for your research, please consider citing our paper:

@inproceedings{Long2022ssa,
  author    = {Yuyang Long and 
               Qilong Zhang and 
               Boheng Zeng and
               Lianli Gao and 
               Xianglong Liu and 
               Jian Zhang and 
               Jingkuan Song},
  title     = {Frequency Domain Model Augmentation for Adversarial Attack},
  booktitle = {European Conference on Computer Vision},
  year      = {2022}
}