Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition (IJCAI2021)

Bangjie Yin, Wenxuan Wang, Taiping Yao, Junfeng Guo, Zelun Kong, Shouhong Ding, Jilin Li and Cong Liu

Tencent Youtu Lab

(Official PyTorch Implementation)

Update - July 13, 2021

Introduction

Deep neural networks, particularly face recognition models, have been shown to be vulnerable to both digital and physical adversarial examples. However, existing adversarial examples against face recognition systems either lack transferability to black-box models or cannot be implemented in practice. In this paper, we propose a unified adversarial face generation method, Adv-Makeup, which realizes imperceptible and transferable attacks under the black-box setting. Adv-Makeup develops a task-driven makeup generation method with a blending module to synthesize imperceptible eye shadow over the orbital region of faces. To achieve transferability, Adv-Makeup implements a fine-grained meta-learning based adversarial attack strategy to learn more vulnerable or sensitive features across various models. Compared to existing techniques, extensive visualization results demonstrate that Adv-Makeup is capable of generating much more imperceptible attacks under both digital and physical scenarios. Meanwhile, extensive quantitative experiments show that Adv-Makeup significantly improves the attack success rate under the black-box setting, even when attacking commercial systems. Our paper has been accepted by IJCAI 2021, a top international artificial intelligence conference.
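For intuition, below is a minimal, simplified sketch of the fine-grained meta-learning idea in PyTorch. All names are hypothetical, and it collapses the paper's inner/outer meta-update into a single joint loss for brevity: at each step one white-box face model is held out as a meta-test task, so the makeup generator is pushed toward perturbations that also fool models it was not directly optimized on.

# Simplified joint-loss sketch of the fine-grained meta-learning attack
# (hypothetical names; not the authors' exact implementation).
import random
import torch

def meta_attack_step(generator, face_models, src_img, target_emb, opt):
    """One training step over an ensemble of white-box face models."""
    meta_test = random.choice(face_models)                 # held-out "meta-test" model
    meta_train = [m for m in face_models if m is not meta_test]

    adv_img = generator(src_img)                           # face with adversarial eye makeup

    # Meta-train: impersonation loss on the remaining models
    # (maximize cosine similarity to the target identity's embedding).
    loss_train = sum(
        1 - torch.cosine_similarity(m(adv_img), target_emb).mean()
        for m in meta_train
    ) / len(meta_train)

    # Meta-test: the same loss on the held-out model, rewarding updates
    # that also transfer to a model not in the meta-train set.
    loss_test = 1 - torch.cosine_similarity(meta_test(adv_img), target_emb).mean()

    loss = loss_train + loss_test
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()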


If you are interested in this work, please cite our paper:

@article{yin2021adv,
  title={Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition},
  author={Yin, Bangjie and Wang, Wenxuan and Yao, Taiping and Guo, Junfeng and Kong, Zelun and Ding, Shouhong and Li, Jilin and Liu, Cong},
  journal={arXiv preprint arXiv:2105.03162},
  year={2021}
}

Digital Visual Results


Physical Visual Results


Quantitative Results Compared with Other Competing Methods


Dependencies and Installation


Pre-trained models

Training

  1. Prepare for training
    • The provided dataset directory './Datasets_Makeup' includes everything needed for training; the entire dataset can be found here
    • The LFW dataset needs to be prepared by yourself, following the settings in the paper and the structure of the directory './Datasets_Makeup'
    • Download the face recognition and VGG models, and put them into the current directory './'
  2. Train the Adv-Makeup model
    • Modify config.py according to your own training setup (see the hypothetical sketch after this list)
    • Run command: python3 train.py
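As a rough illustration, these are the kinds of fields you might adjust in config.py before training. The names below are hypothetical; the actual file in this repository is the authoritative reference.

# Hypothetical config.py values to adapt to your setup (illustrative only).
batch_size = 8                         # adjust to your GPU memory
lr = 2e-4                              # generator learning rate
epochs = 200                           # total training epochs
data_dir = './Datasets_Makeup'         # provided makeup dataset directory
model_dir = './'                       # downloaded face recognition / VGG models
device = 'cuda:0'                      # training device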

Testing

To test the trained Adv-Makeup model and output the attack success rate under different black-box models, run command:

python3 test.py

This will save all the generated adversarial face examples with the specific eye makeup into the dataset directory.
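For reference, an impersonation attack success rate against a black-box model is typically computed as the fraction of adversarial faces whose embeddings pass the model's verification threshold against the target identity. A minimal sketch with assumed helper names:

# Sketch of black-box attack success rate (hypothetical helper names).
import torch

def attack_success_rate(black_box_model, adv_imgs, target_embs, threshold):
    """Fraction of adversarial faces verified as the target identity."""
    with torch.no_grad():
        adv_embs = black_box_model(adv_imgs)                       # (B, D) embeddings
        sims = torch.cosine_similarity(adv_embs, target_embs)      # (B,) similarities
    return (sims > threshold).float().mean().item()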

To generate adversarial face images with better visual quality, apply the post-processing Poisson blending by going to the folder './Poisson_Image_Editing' and running command:

python3 poisson_image_editing_makeup.py

The results will be saved into the dataset directory.
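The repository ships its own Poisson editing script; as a rough illustration of the same idea, OpenCV's seamlessClone performs Poisson blending of a makeup patch into a face image. The file names and patch placement below are hypothetical.

# Illustrative Poisson blending with OpenCV (not the repo's own script).
import cv2
import numpy as np

face = cv2.imread('face.png')
patch = cv2.imread('eye_makeup_patch.png')

# White mask marking which pixels of the patch to blend in.
mask = 255 * np.ones(patch.shape[:2], dtype=np.uint8)
center = (face.shape[1] // 2, face.shape[0] // 3)   # rough orbital-region center

blended = cv2.seamlessClone(patch, face, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite('face_blended.png', blended)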