DeepfakeBench: A Comprehensive Benchmark of Deepfake Detection (NeurIPS 2023 D&B)

License: CC BY-NC 4.0 · PyTorch · Python

<b> Authors: <a href='https://yzy-stack.github.io/'>Zhiyuan Yan</a>, <a href='https://yzhang2016.github.io/'>Yong Zhang</a>, Xinhang Yuan, <a href='https://cse.buffalo.edu/~siweilyu/'>Siwei Lyu</a>, <a href='https://sites.google.com/site/baoyuanwu2015/'>Baoyuan Wu* </a> </b>

[paper] [pre-trained weights]

❗️❗️❗️ News:

  1. NEW DATASET: We are excited to introduce our brand-new deepfake dataset, DF40, comprising 40 distinct deepfake techniques, including just-released SoTA methods. DF40 is designed for seamless integration into the DeepfakeBench workflow, allowing you to train or test just as you would with other datasets such as Celeb-DF and FF++. Please refer to the DF40 dataset for details.

  2. OUR LATEST WORK: Our latest research paper has been released on arXiv. We propose a highly generalizable and efficient detection model that can detect both face deepfake images and synthetic images (not limited to faces). We will soon release all the code, implemented with the DeepfakeBench codebase.

  3. We implement two recent SoTA video detectors on our benchmark: AltFreezing (CVPR 2023) and TALL (ICCV 2023). We release their pre-trained weights on FF++ via Google Drive: AltFreezing and TALL. You can also use our codebase to retrain these models from scratch. The training and evaluation processes are the same as for other detectors on DeepfakeBench.

  4. The pre-trained weights of the 3D R50 backbone used for training I3D, FTCN, and AltFreezing are available here.


<div align="center"> </div> <div style="text-align:center;"> <img src="figures/archi.png" style="max-width:60%;"> </div>

Welcome to DeepfakeBench, your one-stop solution for deepfake detection! Here are some key features of our platform:

Unified Platform: DeepfakeBench presents the first comprehensive benchmark for deepfake detection, resolving the issue of lack of standardization and uniformity in this field.

Data Management: DeepfakeBench provides a unified data management system that ensures consistent input across all detection models.

Integrated Framework: DeepfakeBench offers an integrated framework for the implementation of state-of-the-art detection methods.

Standardized Evaluations: DeepfakeBench introduces standardized evaluation metrics and protocols to enhance the transparency and reproducibility of performance evaluations.

Extensive Analysis and Insights: DeepfakeBench facilitates an extensive analysis from various perspectives, providing new insights to inspire the development of new technologies.


😊 DeepfakeBench-v2 Updates:

  1. 35 Detectors are supported: DeepfakeBench currently supports a total of 35 detection methods (27 image detectors + 8 video detectors).

  2. More SoTA detectors are added: We have implemented more SoTA and latest detectors, including: LSDA (CVPR'24), AltFreezing (CVPR'23), TALL (ICCV'23), IID (CVPR'23), SBI (CVPR'22), SLADD (CVPR'22), FTCN (ICCV'21), etc.

  3. Data Preprocessing: DeepfakeBench now provides LMDB support for faster and more efficient I/O.

  4. Multi-GPU Training: DeepfakeBench supports DDP for training on multiple GPUs.

  5. Integrated Framework: DeepfakeBench offers an integrated framework, including training, data loading, and evaluation at both the image and video levels.

  6. More Evaluation Metrics: DeepfakeBench facilitates a more comprehensive evaluation by including the following metrics: frame-level AUC, video-level AUC, ACC (fake and real), EER, PR, and AP.




📚 Features

<a href="#top">[Back to top]</a>

DeepfakeBench has the following features:

⭐️ Detectors (35 detectors):

The table below highlights the newly added detectors compared to the original DeepfakeBench release.

| Detector | File name | Paper |
|---|---|---|
| AltFreezing | altfreezing_detector.py | AltFreezing for More General Video Face Forgery Detection, CVPR 2023 |
| TALL | tall_detector.py | TALL: Thumbnail Layout for Deepfake Video Detection, ICCV 2023 |
| LSDA | lsda_detector.py | Transcending Forgery Specificity with Latent Space Augmentation for Generalizable Deepfake Detection, CVPR 2024 |
| IID | iid_detector.py | Implicit Identity Driven Deepfake Face Swapping Detection, CVPR 2023 |
| SBI | sbi_detector.py | Detecting Deepfakes with Self-Blended Images, CVPR 2022 |
| SLADD | sladd_detector.py | Self-supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection, CVPR 2022 |
| FTCN | ftcn_detector.py | Exploring Temporal Coherence for More General Video Face Forgery Detection, ICCV 2021 |
| PCL-I2G | pcl_xception_detector.py | Learning Self-Consistency for Deepfake Detection, ICCV 2021 |
| Local-relation | lrl_detector.py | Local Relation Learning for Face Forgery Detection, AAAI 2021 |
| UIA-ViT | uia_vit_detector.py | UIA-ViT: Unsupervised Inconsistency-Aware Method Based on Vision Transformer for Face Forgery Detection, ECCV 2022 |
| SIA | sia_detector.py | An Information Theoretic Approach for Attention-Driven Face Forgery Detection, ECCV 2022 |
| Multi-attention | multi_attention_detector.py | Multi-Attentional Deepfake Detection, CVPR 2021 |
| CLIP | clip_detector.py | Learning Transferable Visual Models From Natural Language Supervision, ICML 2021 |
| STIL | stil_detector.py | Spatiotemporal Inconsistency Learning for DeepFake Video Detection, ACM MM 2021 |
| RFM | rfm_detector.py | Representative Forgery Mining for Fake Face Detection, CVPR 2021 |
| TimeTransformer | timetransformer_detector.py | Is Space-Time Attention All You Need for Video Understanding?, ICML 2021 |
| VideoMAE | videomae_detector.py | VideoMAE: Masked Autoencoders Are Data-Efficient Learners for Self-Supervised Video Pre-Training, NeurIPS 2022 |
| X-CLIP | xclip_detector.py | Expanding Language-Image Pretrained Models for General Video Recognition, ECCV 2022 |

⭐️ Datasets (9 datasets): FaceForensics++, FaceShifter, DeepfakeDetection, Deepfake Detection Challenge (Preview), Deepfake Detection Challenge, Celeb-DF-v1, Celeb-DF-v2, DeepForensics-1.0, UADFV

DeepfakeBench will be continuously updated to track the latest advances in deepfake detection. The implementation of more detection methods, as well as their evaluations, is on the way. You are welcome to contribute your detection methods to DeepfakeBench.

⏳ Quick Start

1. Installation

(option 1) You can run the following script to configure the necessary environment:

git clone git@github.com:SCLBD/DeepfakeBench.git
cd DeepfakeBench
conda create -n DeepfakeBench python=3.7.2
conda activate DeepfakeBench
sh install.sh

(option 2) You can also utilize the supplied Dockerfile to set up the entire environment using Docker. This will allow you to execute all the codes in the benchmark without encountering any environment-related problems. Simply run the following commands to enter the Docker environment.

docker build -t deepfakebench .
docker run --gpus all -itd -v /path/to/this/repository:/app/ --shm-size 64G deepfakebench

Note that we used Docker version 19.03.14 in our setup. We highly recommend using this version for consistency, but later versions of Docker may also be compatible. (Docker image names must be lowercase, hence deepfakebench above.)

2. Download Data

<a href="#top">[Back to top]</a>

All datasets used in DeepfakeBench can be downloaded from their own websites or repositories and preprocessed accordingly. For convenience, we also provide the data we use in our research, including:

| Type | Link | Notes |
|---|---|---|
| RGB-format datasets | Password: ogjn | Preprocessed data |
| LMDB-format datasets | Password: g3gj | LMDB database for each dataset |
| JSON configurations | Password: dcwv | Data arrangement |

All the downloaded datasets are already preprocessed to cropped faces (32 frames per video) with their masks and landmarks, which can be directly deployed to evaluate our benchmark.

The provided datasets are:

| Dataset Name | Notes |
|---|---|
| Celeb-DF-v1 | - |
| Celeb-DF-v2 | - |
| FaceForensics++, DeepfakeDetection, FaceShifter | Only c23 |
| UADFV | - |
| Deepfake Detection Challenge (Preview) | - |
| Deepfake Detection Challenge | Only test data |

🛡️ Copyright of the above datasets belongs to their original providers.

Other detailed information about the datasets used in DeepfakeBench is summarized below:

| Dataset | Real Videos | Fake Videos | Total Videos | Rights Cleared | Total Subjects | Synthesis Methods | Perturbations | Original Repository |
|---|---|---|---|---|---|---|---|---|
| FaceForensics++ | 1000 | 4000 | 5000 | NO | N/A | 4 | 2 | Hyper-link |
| FaceShifter | 1000 | 1000 | 2000 | NO | N/A | 1 | - | Hyper-link |
| DeepfakeDetection | 363 | 3000 | 3363 | YES | 28 | 5 | - | Hyper-link |
| Deepfake Detection Challenge (Preview) | 1131 | 4119 | 5250 | YES | 66 | 2 | 3 | Hyper-link |
| Deepfake Detection Challenge | 23654 | 104500 | 128154 | YES | 960 | 8 | 19 | Hyper-link |
| CelebDF-v1 | 408 | 795 | 1203 | NO | N/A | 1 | - | Hyper-link |
| CelebDF-v2 | 590 | 5639 | 6229 | NO | 59 | 1 | - | Hyper-link |
| DeepForensics-1.0 | 50000 | 10000 | 60000 | YES | 100 | 1 | 7 | Hyper-link |
| UADFV | 49 | 49 | 98 | NO | 49 | 1 | - | Hyper-link |

Upon downloading the datasets, please ensure to store them in the ./datasets folder, arranging them in accordance with the directory structure outlined below:

datasets
├── lmdb
|   ├── FaceForensics++_lmdb
|   |   ├── data.mdb
|   |   ├── lock.mdb
├── rgb
|   ├── FaceForensics++
|   │   ├── original_sequences
|   │   │   ├── youtube
|   │   │   │   ├── c23
|   │   │   │   │   ├── videos
|   │   │   │   │   │   └── *.mp4
|   │   │   │   │   └── frames (if you download my processed data)
|   │   │   │   │   │   └── *.png
|   |   |   |   |   └── masks (if you download my processed data)
|   │   │   │   │   │   └── *.png
|   │   │   │   │   └── landmarks (if you download my processed data)
|   │   │   │   │   │   └── *.npy
|   │   │   │   └── c40
|   │   │   │   │   ├── videos
|   │   │   │   │   │   └── *.mp4
|   │   │   │   │   └── frames (if you download my processed data)
|   │   │   │   │   │   └── *.png
|   |   |   |   |   └── masks (if you download my processed data)
|   │   │   │   │   │   └── *.png
|   │   │   │   │   └── landmarks (if you download my processed data)
|   │   │   │   │       └── *.npy
|   │   │   ├── actors
|   │   │   │   ├── c23
|   │   │   │   │   ├── videos
|   │   │   │   │   │   └── *.mp4
|   │   │   │   │   └── frames (if you download my processed data)
|   │   │   │   │   │   └── *.png
|   |   |   |   |   └── masks (if you download my processed data)
|   │   │   │   │   │   └── *.png
|   │   │   │   │   └── landmarks (if you download my processed data)
|   │   │   │   │       └── *.npy
|   │   │   │   └── c40
|   │   │   │   │   ├── videos
|   │   │   │   │   │   └── *.mp4
|   │   │   │   │   └── frames (if you download my processed data)
|   │   │   │   │   │   └── *.png
|   |   |   |   |   └── masks (if you download my processed data)
|   │   │   │   │   │   └── *.png
|   │   │   │   │   └── landmarks (if you download my processed data)
|   │   │   │   │       └── *.npy
|   │   ├── manipulated_sequences
|   │   │   ├── Deepfakes
|   │   │   │   ├── c23
|   │   │   │   │   └── videos
|   │   │   │   │   │   └── *.mp4
|   │   │   │   │   └── frames (if you download my processed data)
|   │   │   │   │   │   └── *.png
|   |   |   |   |   └── masks (if you download my processed data)
|   │   │   │   │   │   └── *.png
|   │   │   │   │   └── landmarks (if you download my processed data)
|   │   │   │   │       └── *.npy
|   │   │   │   └── c40
|   │   │   │   │   ├── videos
|   │   │   │   │   │   └── *.mp4
|   │   │   │   │   └── frames (if you download my processed data)
|   │   │   │   │   │   └── *.png
|   |   |   |   |   └── masks (if you download my processed data)
│   │   │   │   |   │   └── *.png
│   │   │   │   |   └── landmarks (if you download my processed data)
│   │   │   |   │       └── *.npy
│   │   |   ├── Face2Face
│   |   │   │   ├── ...
|   │   │   ├── FaceSwap
|   │   │   │   ├── ...
|   │   │   ├── NeuralTextures
|   │   │   │   ├── ...
|   │   │   ├── FaceShifter
|   │   │   │   ├── ...
|   │   │   └── DeepFakeDetection
|   │   │       ├── ...
Other datasets are similar to the above structure

If you choose to store your datasets in a different folder, you can specify the rgb_dir or lmdb_dir in training/test_config.yaml and training/train_config.yaml.
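As a sketch, the relevant keys might look like the following (the key names come from the description above; the paths are illustrative placeholders, and other keys in the files are omitted):

```yaml
# training/test_config.yaml and training/train_config.yaml (fragment)
rgb_dir: /data/deepfake/rgb    # folder containing FaceForensics++/, Celeb-DF-v2/, ...
lmdb_dir: /data/deepfake/lmdb  # folder containing the *_lmdb databases
```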

The downloaded json configurations should be arranged as:

preprocessing
├── dataset_json
|   ├── FaceForensics++.json

You may also store your configurations in a different folder by specifying the dataset_json_folder in training/test_config.yaml and training/train_config.yaml.

3. Preprocessing (optional)

<a href="#top">[Back to top]</a>

❗️Note: If you want to directly utilize the data, including frames, landmarks, masks, and more, that I have provided above, you can skip the pre-processing step. However, you still need to run the rearrangement script to generate the JSON file for each dataset for the unified data loading in the training and testing process.

DeepfakeBench follows a sequential workflow for face detection, alignment, and cropping. The processed data, including face images, landmarks, and masks, are saved in separate folders for further analysis.

To start preprocessing your dataset, please follow these steps:

  1. Download the shape_predictor_81_face_landmarks.dat file. Then, copy the downloaded shape_predictor_81_face_landmarks.dat file into the ./preprocessing/dlib_tools folder. This file is necessary for Dlib's face detection functionality.

  2. Open the ./preprocessing/config.yaml and locate the line default: DATASET_YOU_SPECIFY. Replace DATASET_YOU_SPECIFY with the name of the dataset you want to preprocess, such as FaceForensics++.

  3. Specify the dataset_root_path in the config.yaml file. Search for the line that mentions dataset_root_path. By default, it looks like this: dataset_root_path: ./datasets. Replace ./datasets with the actual path to the folder where your dataset is arranged.

Once you have completed these steps, you can proceed with running the following line to do the preprocessing:

cd preprocessing

python preprocess.py

You may skip the preprocessing step by downloading the provided data.

4. Rearrangement

To simplify the handling of different datasets, we propose a unified and convenient way to load them. This approach eliminates the need to write separate input/output (I/O) code for each dataset, reducing duplicated effort and easing data management.

After the preprocessing above, you will obtain the processed data (i.e., frames, landmarks, and masks) for each dataset you specify. Similarly, you need to set the parameters in ./preprocessing/config.yaml for each dataset. After that, run the following line:

cd preprocessing

python rearrange.py

After running the above line, you will obtain the JSON files for each dataset in the ./preprocessing/dataset_json folder. The rearranged structure organizes the data in a hierarchical manner, grouping videos based on their labels and data splits (i.e., train, test, validation). Each video is represented as a dictionary entry containing relevant metadata, including file paths, labels, compression levels (if applicable), etc.
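As an illustration, a generated JSON file might look like the sketch below. The field names and paths here are hypothetical; inspect your generated files for the exact schema:

```json
{
  "FaceForensics++": {
    "Deepfakes": {
      "train": {
        "000_003": {
          "label": "fake",
          "compression": "c23",
          "frames": [
            "datasets/rgb/FaceForensics++/manipulated_sequences/Deepfakes/c23/frames/000_003/000.png"
          ]
        }
      }
    }
  }
}
```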

5. Training (optional)

<a href="#top">[Back to top]</a>

To run the training code, you should first download the pretrained weights for the corresponding backbones (These pre-trained weights are from ImageNet). You can download them from Link. After downloading, you need to put all the weights files into the folder ./training/pretrained.

Then, go to the ./training/config/detector/ folder and choose the detector to be trained. For instance, you can adjust the parameters in xception.yaml to specify, e.g., the training and testing datasets, epochs, frame_num, etc.

After setting the parameters, you can run with the following to train the Xception detector:

python training/train.py \
--detector_path ./training/config/detector/xception.yaml

You can also adjust the training and testing datasets using the command line, for example:

python training/train.py \
--detector_path ./training/config/detector/xception.yaml  \
--train_dataset "FaceForensics++" \
--test_dataset "Celeb-DF-v1" "Celeb-DF-v2"

By default, the checkpoints and features will be saved during the training process. If you do not want to save them, run with the following:

python training/train.py \
--detector_path ./training/config/detector/xception.yaml \
--train_dataset "FaceForensics++" \
--test_dataset "Celeb-DF-v1" "Celeb-DF-v2" \
--no-save_ckpt \
--no-save_feat

For multi-GPU training (DDP), please refer to the train.sh file for details.

To train other detectors using the code mentioned above, you can specify the config file accordingly. However, for the Face X-ray detector, an additional step is required before training. To save training time, a pickle file is generated that stores the Top-N nearest images for each given image. To generate this file, run the generate_xray_nearest.py script. Once the pickle file is created, you can train the Face X-ray detector in the same way as above. If you want to check or use the files I have already generated, please refer to the link.
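For intuition, the precomputation that generate_xray_nearest.py performs can be sketched roughly as below. This is a simplified, hypothetical version (the real script's data format, distance metric, and file layout may differ): for each image, rank all other images by landmark distance and cache the Top-N list in a pickle file so training does not recompute distances every epoch.

```python
# Simplified sketch of Top-N nearest-image precomputation (hypothetical data).
import math
import pickle

def top_n_nearest(landmarks, n=2):
    """landmarks: {image_id: [(x, y), ...]} with equal point counts per image."""
    def dist(a, b):
        # Euclidean distance between two landmark sets, point by point.
        return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2
                             for (ax, ay), (bx, by) in zip(a, b)))
    nearest = {}
    for img, lm in landmarks.items():
        others = [(dist(lm, lm2), img2)
                  for img2, lm2 in landmarks.items() if img2 != img]
        nearest[img] = [img2 for _, img2 in sorted(others)[:n]]
    return nearest

# Toy landmark sets for three images.
landmarks = {
    "a.png": [(0, 0), (1, 1)],
    "b.png": [(0, 1), (1, 2)],
    "c.png": [(5, 5), (6, 6)],
}
nearest = top_n_nearest(landmarks, n=1)

# Cache the result; training then only loads this pickle.
with open("nearest.pkl", "wb") as f:
    pickle.dump(nearest, f)
```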

6. Evaluation

If you only want to evaluate the detectors and produce cross-dataset evaluation results, you can use the test.py script. Here is an example:

python3 training/test.py \
--detector_path ./training/config/detector/xception.yaml \
--test_dataset "Celeb-DF-v1" "Celeb-DF-v2" "DFDCP" \
--weights_path ./training/weights/xception_best.pth

Note that we have provided the pre-trained weights for each detector (you can download them from the link). Make sure to put these weights in the ./training/weights folder.

🏆 Results

<a href="#top">[Back to top]</a>

❗️❗️❗️ DeepfakeBench-v2 Updates:

The results below are cited from our paper. We have since conducted more comprehensive evaluations with DeepfakeBench-v2, using more datasets and more detectors; we will update the table soon.

In our benchmark, we use TensorBoard to monitor the progress of training models. It provides a visual representation of the training process, allowing users to examine training results conveniently.

To demonstrate the effectiveness of different detectors, we present partial results from both within-domain and cross-domain evaluations. The evaluation metric used is the frame-level Area Under the Curve (AUC). In this particular scenario, we train the detectors on the FF++ (c23) dataset and assess their performance on other datasets.

For a comprehensive overview of the results, we strongly recommend referring to our paper. These resources provide a detailed analysis of the training outcomes and offer a deeper understanding of the methodology and findings.

| Type | Detector | Backbone | FF++_c23 | FF++_c40 | FF-DF | FF-F2F | FF-FS | FF-NT | Avg. | Top3 | CDFv1 | CDFv2 | DF-1.0 | DFD | DFDC | DFDCP | Fsh | UADFV | Avg. | Top3 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Naive | Meso4 | MesoNet | 0.6077 | 0.5920 | 0.6771 | 0.6170 | 0.5946 | 0.5701 | 0.6097 | 0 | 0.7358 | 0.6091 | 0.9113 | 0.5481 | 0.5560 | 0.5994 | 0.5660 | 0.7150 | 0.6551 | 1 |
| Naive | MesoIncep | MesoNet | 0.7583 | 0.7278 | 0.8542 | 0.8087 | 0.7421 | 0.6517 | 0.7571 | 0 | 0.7366 | 0.6966 | 0.9233 | 0.6069 | 0.6226 | 0.7561 | 0.6438 | 0.9049 | 0.7364 | 3 |
| Naive | CNN-Aug | ResNet | 0.8493 | 0.7846 | 0.9048 | 0.8788 | 0.9026 | 0.7313 | 0.8419 | 0 | 0.7420 | 0.7027 | 0.7993 | 0.6464 | 0.6361 | 0.6170 | 0.5985 | 0.8739 | 0.7020 | 0 |
| Naive | Xception | Xception | 0.9637 | 0.8261 | 0.9799 | 0.9785 | 0.9833 | 0.9385 | 0.9450 | 4 | 0.7794 | 0.7365 | 0.8341 | 0.8163 | 0.7077 | 0.7374 | 0.6249 | 0.9379 | 0.7718 | 2 |
| Naive | EfficientB4 | Efficient | 0.9567 | 0.8150 | 0.9757 | 0.9758 | 0.9797 | 0.9308 | 0.9389 | 0 | 0.7909 | 0.7487 | 0.8330 | 0.8148 | 0.6955 | 0.7283 | 0.6162 | 0.9472 | 0.7718 | 3 |
| Spatial | Capsule | Capsule | 0.8421 | 0.7040 | 0.8669 | 0.8634 | 0.8734 | 0.7804 | 0.8217 | 0 | 0.7909 | 0.7472 | 0.9107 | 0.6841 | 0.6465 | 0.6568 | 0.6465 | 0.9078 | 0.7488 | 2 |
| Spatial | FWA | Xception | 0.8765 | 0.7357 | 0.9210 | 0.9000 | 0.8843 | 0.8120 | 0.8549 | 0 | 0.7897 | 0.6680 | 0.9334 | 0.7403 | 0.6132 | 0.6375 | 0.5551 | 0.8539 | 0.7239 | 1 |
| Spatial | Face X-ray | HRNet | 0.9592 | 0.7925 | 0.9794 | 0.9872 | 0.9871 | 0.9290 | 0.9391 | 3 | 0.7093 | 0.6786 | 0.5531 | 0.7655 | 0.6326 | 0.6942 | 0.6553 | 0.8989 | 0.6985 | 0 |
| Spatial | FFD | Xception | 0.9624 | 0.8237 | 0.9803 | 0.9784 | 0.9853 | 0.9306 | 0.9434 | 1 | 0.7840 | 0.7435 | 0.8609 | 0.8024 | 0.7029 | 0.7426 | 0.6056 | 0.9450 | 0.7733 | 1 |
| Spatial | CORE | Xception | 0.9638 | 0.8194 | 0.9787 | 0.9803 | 0.9823 | 0.9339 | 0.9431 | 2 | 0.7798 | 0.7428 | 0.8475 | 0.8018 | 0.7049 | 0.7341 | 0.6032 | 0.9412 | 0.7694 | 0 |
| Spatial | Recce | Designed | 0.9621 | 0.8190 | 0.9797 | 0.9779 | 0.9785 | 0.9357 | 0.9422 | 1 | 0.7677 | 0.7319 | 0.7985 | 0.8119 | 0.7133 | 0.7419 | 0.6095 | 0.9446 | 0.7649 | 2 |
| Spatial | UCF | Xception | 0.9705 | 0.8399 | 0.9883 | 0.9840 | 0.9896 | 0.9441 | 0.9527 | 6 | 0.7793 | 0.7527 | 0.8241 | 0.8074 | 0.7191 | 0.7594 | 0.6462 | 0.9528 | 0.7801 | 5 |
| Frequency | F3Net | Xception | 0.9635 | 0.8271 | 0.9793 | 0.9796 | 0.9844 | 0.9354 | 0.9449 | 1 | 0.7769 | 0.7352 | 0.8431 | 0.7975 | 0.7021 | 0.7354 | 0.5914 | 0.9347 | 0.7645 | 0 |
| Frequency | SPSL | Xception | 0.9610 | 0.8174 | 0.9781 | 0.9754 | 0.9829 | 0.9299 | 0.9408 | 0 | 0.8150 | 0.7650 | 0.8767 | 0.8122 | 0.7040 | 0.7408 | 0.6437 | 0.9424 | 0.7875 | 3 |
| Frequency | SRM | Xception | 0.9576 | 0.8114 | 0.9733 | 0.9696 | 0.9740 | 0.9295 | 0.9359 | 0 | 0.7926 | 0.7552 | 0.8638 | 0.8120 | 0.6995 | 0.7408 | 0.6014 | 0.9427 | 0.7760 | 2 |

In the table above, "Avg." denotes the average AUC for within-domain and cross-domain evaluation, respectively, and "Top3" counts how many times each method ranks within the top 3 across the corresponding testing datasets. The best-performing method in each column is highlighted.
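To make the frame-level vs. video-level AUC distinction concrete, here is a minimal, self-contained sketch. This is not DeepfakeBench's actual evaluation code: the rank-based AUC helper and the choice to average per-frame fake probabilities into a video score are illustrative assumptions, and the predictions are made up.

```python
# Frame-level vs. video-level AUC on toy predictions.
from collections import defaultdict

def auc(labels, scores):
    """Rank-based AUC (Mann-Whitney U); assumes distinct scores for simplicity."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = {i: r + 1 for r, i in enumerate(order)}
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    u = sum(ranks[i] for i in pos) - len(pos) * (len(pos) + 1) / 2
    return u / (len(pos) * len(neg))

# Hypothetical per-frame predictions: (video_id, label, fake_probability).
frames = [
    ("vid_a", 1, 0.9), ("vid_a", 1, 0.7),
    ("vid_b", 0, 0.2), ("vid_b", 0, 0.4),
    ("vid_c", 1, 0.3), ("vid_c", 1, 0.8),
]

# Frame-level AUC: every frame is an independent sample.
frame_auc = auc([y for _, y, _ in frames], [s for _, _, s in frames])

# Video-level AUC: average the per-frame scores of each video first.
per_video = defaultdict(list)
video_label = {}
for vid, y, s in frames:
    per_video[vid].append(s)
    video_label[vid] = y
vids = sorted(per_video)
video_auc = auc([video_label[v] for v in vids],
                [sum(per_video[v]) / len(per_video[v]) for v in vids])
```

Note how a video with one weak frame (vid_c) can drag frame-level AUC down while its averaged score still ranks correctly at the video level.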

Also, we provide all experimental results at Link (code: qjpd). You can use these results for further analysis with the code in the ./analysis folder.

📝 Citation

<a href="#top">[Back to top]</a>

If you find our benchmark useful to your research, please cite it as follows:

@inproceedings{DeepfakeBench_YAN_NEURIPS2023,
 author = {Yan, Zhiyuan and Zhang, Yong and Yuan, Xinhang and Lyu, Siwei and Wu, Baoyuan},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {A. Oh and T. Neumann and A. Globerson and K. Saenko and M. Hardt and S. Levine},
 pages = {4534--4565},
 publisher = {Curran Associates, Inc.},
 title = {DeepfakeBench: A Comprehensive Benchmark of Deepfake Detection},
 url = {https://proceedings.neurips.cc/paper_files/paper/2023/file/0e735e4b4f07de483cbe250130992726-Paper-Datasets_and_Benchmarks.pdf},
 volume = {36},
 year = {2023}
}

If you are interested, you can read our recent works on deepfake detection; more works on trustworthy AI can be found here.

@inproceedings{UCF_YAN_ICCV2023,
 title={Ucf: Uncovering common features for generalizable deepfake detection},
 author={Yan, Zhiyuan and Zhang, Yong and Fan, Yanbo and Wu, Baoyuan},
 booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
 pages={22412--22423},
 year={2023}
}

@inproceedings{LSDA_YAN_CVPR2024,
  title={Transcending forgery specificity with latent space augmentation for generalizable deepfake detection},
  author={Yan, Zhiyuan and Luo, Yuhao and Lyu, Siwei and Liu, Qingshan and Wu, Baoyuan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}

@inproceedings{cheng2024can,
  title={Can We Leave Deepfake Data Behind in Training Deepfake Detector?},
  author={Cheng, Jikang and Yan, Zhiyuan and Zhang, Ying and Luo, Yuhao and Wang, Zhongyuan and Li, Chen},
  booktitle={Advances in Neural Information Processing Systems},
  year={2024}
}

@article{yan2024effort,
  title={Effort: Efficient Orthogonal Modeling for Generalizable AI-Generated Image Detection},
  author={Yan, Zhiyuan and Wang, Jiangming and Wang, Zhendong and Jin, Peng and Zhang, Ke-Yue and Chen, Shen and Yao, Taiping and Ding, Shouhong and Wu, Baoyuan and Yuan, Li},
  journal={arXiv preprint arXiv:2411.15633},
  year={2024}
}

@article{chen2024textit,
  title={{$X^2$}-DFD: A Framework for eXplainable and eXtendable Deepfake Detection},
  author={Chen, Yize and Yan, Zhiyuan and Lyu, Siwei and Wu, Baoyuan},
  journal={arXiv preprint arXiv:2410.06126},
  year={2024}
}

@article{cheng2024stacking,
  title={Stacking Brick by Brick: Aligned Feature Isolation for Incremental Face Forgery Detection},
  author={Cheng, Jikang and Yan, Zhiyuan and Zhang, Ying and Hao, Li and Ai, Jiaxin and Zou, Qin and Li, Chen and Wang, Zhongyuan},
  journal={arXiv preprint arXiv:2411.11396},
  year={2024}
}

🛡️ License

<a href="#top">[Back to top]</a>

This repository is licensed by The Chinese University of Hong Kong, Shenzhen under the Creative Commons Attribution-NonCommercial 4.0 International Public License (identified as CC BY-NC-4.0 in SPDX). More details about the license can be found in LICENSE.

This project is built by the Secure Computing Lab of Big Data (SCLBD) at The School of Data Science (SDS) of The Chinese University of Hong Kong, Shenzhen, directed by Professor Baoyuan Wu. SCLBD focuses on the research of trustworthy AI, including backdoor learning, adversarial examples, federated learning, fairness, etc.

If you have any suggestions, comments, or wish to contribute code or propose methods, we warmly welcome your input. Please contact us at wubaoyuan@cuhk.edu.cn or yanzhiyuan1114@gmail.com. We look forward to collaborating with you in pushing the boundaries of deepfake detection.