TAFIM: Targeted Adversarial Attacks Against Facial Image Manipulations<br><sub>Official PyTorch implementation of the ECCV 2022 paper</sub>

Teaser Image

TAFIM: Targeted Adversarial Attacks Against Facial Image Manipulations<br> Shivangi Aneja, Lev Markhasin, Matthias Nießner<br> https://shivangi-aneja.github.io/projects/tafim <br>

Abstract: Face manipulation methods can be misused to affect an individual’s privacy or to spread disinformation. To this end, we introduce a novel data-driven approach that produces image-specific perturbations which are embedded in the original images. The key idea is that these protected images prevent face manipulation by causing the manipulation model to produce a predefined manipulation target (a uniformly colored output image in our case) instead of the actual manipulation. In addition, we propose to leverage a differentiable compression approximation, making the generated perturbations robust to common image compression. To protect against multiple manipulation methods simultaneously, we further propose a novel attention-based fusion of manipulation-specific perturbations. Compared to traditional adversarial attacks that optimize noise patterns for each image individually, our generalized model only needs a single forward pass, thus running orders of magnitude faster and allowing for easy integration in image processing stacks, even on resource-constrained devices like smartphones.
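As a rough illustration of this objective, here is a minimal PyTorch sketch of a single training step. All names (ProtectionNet, manipulation_model, the loss weighting) are illustrative assumptions rather than the repository's actual API; the real implementations live in the trainer_scripts modules.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ProtectionNet(nn.Module):
    # Hypothetical stand-in for the perturbation generator: a single
    # forward pass maps an image to its image-specific perturbation.
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def protection_step(protector, manipulation_model, images, eps=0.05):
    # Bounded, image-specific perturbation embedded in the original image.
    delta = eps * protector(images)
    protected = (images + delta).clamp(0, 1)
    # Predefined manipulation target: a uniformly colored image
    # (assumes the manipulation model preserves input resolution).
    target = torch.full_like(images, 0.5)
    # Push the (frozen) manipulation model toward the uniform target ...
    loss_target = F.mse_loss(manipulation_model(protected), target)
    # ... while keeping the protected image close to the original.
    loss_recon = F.mse_loss(protected, images)
    return loss_target + loss_recon

The manipulation model stays frozen; only the perturbation generator is trained, which is why protecting a new image later needs just one forward pass.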

Getting started

Prerequisites

Installation

Pre-trained Models

Please download these models; they are required for the experiments.

| Path | Description |
| :--- | :---------- |
| pSp Encoder | pSp trained with the FFHQ dataset for StyleGAN inversion. |
| StyleClip | StyleClip trained with the FFHQ dataset for text-based manipulation (Afro, Angry, Beyonce, BobCut, BowlCut, Curly Hair, Mohawk, Purple Hair, Surprised, Taylor Swift, Trump, Zuckerberg). |
| SimSwap | SimSwap trained for face swapping. |
| SAM | SAM model trained for age transformation (used in supp. material). |
| StyleGAN-NADA | StyleGAN-NADA models (used in supp. material). |

Training models

The code is well-documented and should be easy to follow.

# For self-reconstruction/style-mixing task
python -m trainer_scripts.train_protection_model_pSp 

# For face-swapping task
python -m trainer_scripts.train_protection_model_simswap

# For textual editing task
python -m trainer_scripts.train_protection_model_styleclip

# For protection against JPEG compression
python -m trainer_scripts.train_protection_model_pSp_jpeg
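A note on the JPEG variant: compression must sit inside the training loop, but hard quantization has zero gradient almost everywhere. One common differentiable workaround, shown here as a sketch (the repository's actual compression approximation may differ), is straight-through rounding:

import torch

def diff_round(x):
    # Straight-through estimator: exact rounding in the forward pass,
    # identity gradient in the backward pass.
    return x + (torch.round(x) - x).detach()

# Hypothetical use on 8-bit pixel values (real JPEG would instead
# quantize per-block DCT coefficients):
# compressed = diff_round(protected * 255.0) / 255.0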

# For combining perturbations from multiple manipulation methods 
python -m trainer_scripts.train_protection_model_all_attention
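For intuition on the last command: perturbations trained against individual manipulation methods are fused into a single perturbation using attention weights predicted from the input image. A minimal sketch, where AttentionFusion and all layer sizes are illustrative assumptions:

import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    # Hypothetical illustration: fuse K manipulation-specific perturbations
    # into one perturbation via image-conditioned softmax attention.
    def __init__(self, num_methods, channels=3):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_methods, 3, padding=1),
        )

    def forward(self, image, perturbations):
        # image: (B, C, H, W); perturbations: (B, K, C, H, W)
        weights = self.attn(image).softmax(dim=1)        # (B, K, H, W)
        fused = (weights.unsqueeze(2) * perturbations).sum(dim=1)
        return fused                                      # (B, C, H, W)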
Testing models

To evaluate a trained protection model, pass its checkpoint to the corresponding testing script:

# For testing the protection model on the self-reconstruction/style-mixing task
python -m testing_scripts.test_protection_model_pSp -p protection_model.pth
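At test time, protecting an image is a single forward pass through the trained model, which is what makes on-device use practical. A minimal usage sketch, reusing the hypothetical ProtectionNet from the sketch above (checkpoint layout and perturbation scale are assumptions):

import torch
import torchvision.transforms.functional as TF
from PIL import Image

model = ProtectionNet()  # hypothetical class from the sketch above
model.load_state_dict(torch.load("protection_model.pth", map_location="cpu"))
model.eval()

image = TF.to_tensor(Image.open("face.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    protected = (image + 0.05 * model(image)).clamp(0, 1)
TF.to_pil_image(protected.squeeze(0)).save("face_protected.png")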

Citation

If you find our dataset or paper useful for your research, please include the following citation:


@InProceedings{aneja2022tafim,
  author    = "Aneja, Shivangi and Markhasin, Lev and Nie{\ss}ner, Matthias",
  title     = "TAFIM: Targeted Adversarial Attacks Against Facial Image Manipulations",
  booktitle = "Computer Vision -- ECCV 2022",
  year      = "2022",
  publisher = "Springer Nature Switzerland",
  address   = "Cham",
  pages     = "58--75",
  isbn      = "978-3-031-19781-9"
}

Contact Us

If you have questions regarding the dataset or code, please email us at shivangi.aneja@tum.de. We will get back to you as soon as possible.