# RECCE CVPR 2022
:page_facing_up: End-to-End Reconstruction-Classification Learning for Face Forgery Detection
:boy: Junyi Cao, Chao Ma, Taiping Yao, Shen Chen, Shouhong Ding, Xiaokang Yang
Please consider citing our paper if you find it interesting or helpful to your research.
```bibtex
@InProceedings{Cao_2022_CVPR,
    author    = {Cao, Junyi and Ma, Chao and Yao, Taiping and Chen, Shen and Ding, Shouhong and Yang, Xiaokang},
    title     = {End-to-End Reconstruction-Classification Learning for Face Forgery Detection},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {4113-4122}
}
```
## Introduction
This repository is an implementation of *End-to-End Reconstruction-Classification Learning for Face Forgery Detection*, presented at CVPR 2022. In the paper, we propose a novel REConstruction-Classification lEarning framework called RECCE to detect face forgeries. The code is based on PyTorch. Please follow the instructions below to get started.
## Motivation
Briefly, we train a reconstruction network over genuine images only and perform binary classification on the latent features produced by the encoder. Because the data distributions of genuine and forged faces differ, forged faces yield pronounced reconstruction differences, which also indicate the likely forged regions.
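To make the idea concrete, below is a minimal PyTorch sketch of such a reconstruction-classification objective. It is an illustration only, not the model in this repository: `encoder`, `decoder`, and the pooled linear classifier are hypothetical stand-ins, and the actual RECCE framework contains further components.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReconstructionClassifier(nn.Module):
    """Illustrative stand-in; the real RECCE architecture is defined
    by the code in this repository."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module, feat_dim: int):
        super().__init__()
        self.encoder = encoder          # maps image -> latent feature map
        self.decoder = decoder          # maps latent feature map -> image
        self.classifier = nn.Linear(feat_dim, 2)  # real vs. fake

    def forward(self, images):
        latent = self.encoder(images)     # (B, C, H, W) latent features
        recon = self.decoder(latent)      # reconstructed images
        pooled = latent.mean(dim=(2, 3))  # global average pooling
        logits = self.classifier(pooled)
        return recon, logits


def training_loss(model, images, labels):
    """labels: 0 = genuine, 1 = forged."""
    recon, logits = model(images)
    cls_loss = F.cross_entropy(logits, labels)
    # Key point: the reconstruction loss is computed on genuine images
    # only, so the network never learns to reconstruct forgeries and
    # forged inputs yield large reconstruction differences at test time.
    genuine = labels == 0
    if genuine.any():
        recon_loss = F.mse_loss(recon[genuine], images[genuine])
    else:
        recon_loss = torch.zeros((), device=images.device)
    return cls_loss + recon_loss
```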
## Basic Requirements
Please ensure that you have already installed the following packages.
- PyTorch 1.7.1
- Torchvision 0.8.2
- Albumentations 1.0.3
- timm 0.3.4
- tensorboardX 2.1
- SciPy 1.5.2
- PyYAML 5.3.1
## Dataset Preparation
- We include dataset loaders for several commonly used face forgery datasets, i.e., FaceForensics++, Celeb-DF, WildDeepfake, and DFDC. You can visit the dataset websites to download the original data.
- For FaceForensics++, Celeb-DF, and DFDC, the original data are in video format, so you should first extract the facial images from the sequences and store them. We use RetinaFace for this step; a rough sketch is given below.
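As an illustration of this preprocessing step, the sketch below samples frames from a video with OpenCV and saves cropped faces. `detect_faces` is a hypothetical placeholder for whichever RetinaFace implementation you use; any detector returning `(x1, y1, x2, y2)` boxes will do.

```python
import os
import cv2  # opencv-python

def detect_faces(frame):
    """Hypothetical placeholder for a RetinaFace detector.
    Should return a list of (x1, y1, x2, y2) face boxes."""
    raise NotImplementedError

def extract_faces(video_path, out_dir, every_n=10):
    """Sample every `every_n`-th frame and save the cropped faces."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    frame_idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % every_n == 0:
            for x1, y1, x2, y2 in detect_faces(frame):
                face = frame[y1:y2, x1:x2]
                cv2.imwrite(os.path.join(out_dir, f"{saved:06d}.png"), face)
                saved += 1
        frame_idx += 1
    cap.release()
```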
## Config Files
- We have already provided the config templates in `config/`. You can adjust the parameters in the YAML files to specify a training process. More information is presented in `config/README.md`.
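The config files are plain YAML, so if you want to inspect or tweak one programmatically, it loads with PyYAML as usual; the available keys are those defined by the templates under `config/`.

```python
import yaml

# Load a training config; see config/README.md for the meaning of each key.
with open("path/to/config.yaml") as f:
    cfg = yaml.safe_load(f)
print(cfg)
```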
## Training
- We use the `torch.distributed` package to train the models; for more information, please refer to the PyTorch Distributed Overview.
- To train a model, run the following script in your console.
```bash
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 --master_port 12345 train.py --config path/to/config.yaml
```
- `--config`: Specify the path of the config file.
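For orientation, a script launched through `torch.distributed.launch` receives a `--local_rank` argument per process and initializes a process group. The sketch below is the generic PyTorch 1.7-era boilerplate for this, not necessarily the exact code in `train.py`.

```python
import argparse
import torch
import torch.distributed as dist

# Generic setup for a script started by torch.distributed.launch
# (PyTorch 1.7-era API); shown for orientation only.
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)
args, _ = parser.parse_known_args()

torch.cuda.set_device(args.local_rank)  # one GPU per process
dist.init_process_group(backend="nccl", init_method="env://")
```

To train on multiple GPUs, list them in `CUDA_VISIBLE_DEVICES` and raise `--nproc_per_node` accordingly.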
## Testing
- To test a model, run the following script in your console.
```bash
python test.py --config path/to/config.yaml
```
- `--config`: Specify the path of the config file.
## Inference
- We provide the script `inference.py` to help you run inference on custom data.
- To run inference, use the following script in your console.
```bash
python inference.py --bin path/to/model.bin --image_folder path/to/image_folder --device $DEVICE --image_size $IMAGE_SIZE
```
- `--bin`: Specify the path of the model bin generated by the training script of this project.
- `--image_folder`: Specify the directory of custom facial images. The script accepts images ending with `.jpg` or `.png`.
- `--device`: Specify the device to run the experiment, e.g., `cpu`, `cuda:0`.
- `--image_size`: Specify the spatial size of the input images.
- The program will output the fake probability for each input image, like this:
```
path: path/to/image1.jpg | fake probability: 0.1296 | prediction: real
path: path/to/image2.jpg | fake probability: 0.9146 | prediction: fake
```
- Type `python inference.py -h` in your console for more information about the available arguments.
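If you want to inspect a trained checkpoint outside of `inference.py`, a `.bin` saved by a PyTorch training script can usually be opened with `torch.load`. Whether it holds a bare state dict or a larger dictionary depends on how `train.py` saves it, so treat the sketch below as an assumption.

```python
import torch

# Assumption: the .bin is a standard PyTorch checkpoint; its exact
# contents (state dict vs. a larger dict) depend on train.py.
ckpt = torch.load("path/to/model.bin", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:5])  # peek at the first few keys
```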
## Acknowledgement
- We thank Qiqi Gu for helping plot the schematic diagram of the proposed method in the manuscript.