
DF40: Toward Next-Generation Deepfake Detection (Project Page; Paper; Download DF40; Checkpoints)

License: CC BY-NC 4.0 | PyTorch | Python

šŸŽ‰šŸŽ‰šŸŽ‰ Our DF40 has been accepted to the NeurIPS 2024 D&B track!

Welcome to DF40, our work toward next-generation deepfake detection.

In this work, we propose: (1) a diverse deepfake dataset covering 40 distinct generation methods; and (2) a comprehensive benchmark for training, evaluation, and analysis.

"Expanding Your Evaluation with 40 distinct High-Quality Fake Data from the FF++ and CDF domains!!"

DF40 Dataset Highlights: The key features of our proposed DF40 dataset are as follows:

āœ… Forgery Diversity: DF40 comprises 40 distinct deepfake techniques (both representative and SOTA methods are included), facilitating the detection of today's SOTA deepfakes and AIGCs. We provide 10 face-swapping methods, 13 face-reenactment methods, 12 entire face synthesis methods, and 5 face-editing methods.

āœ… Forgery Realism: DF40 includes realistic deepfake data created by highly popular generation software and methods (e.g., HeyGen, MidJourney, DeepFaceLab) to simulate real-world deepfakes. We even include the recently released DiT, SiT, PixArt-$\alpha$, etc.

āœ… Forgery Scale: DF40 offers deepfake data at the million-sample scale, covering both images and videos.

āœ… Data Alignment: DF40 aligns fake methods with data domains. Most methods (31 of 40) are generated under both the FF++ and CDF domains. Using our fake data, you can further expand your evaluation (e.g., training on FF++ and testing on CDF).

The figure below provides a brief introduction to our DF40 dataset.

<div align="center"> </div> <div style="text-align:center;"> <img src="df40_figs/df40_intro.jpg" style="max-width:60%;"> </div>

The following table presents the statistics and details of our DF40 dataset. Please check our paper for more information.

<div align="center"> </div> <div style="text-align:center;"> <img src="df40_figs/table1.jpg" style="max-width:60%;"> </div>

šŸ’„ DF40 Dataset

| Type | ID | Generation Method | Original Data Source | Visual Example |
|------|----|-------------------|----------------------|----------------|
| Face-swapping (FS) | 1 | FSGAN | FF++ and Celeb-DF | fsgan-Example |
| Face-swapping (FS) | 2 | FaceSwap | FF++ and Celeb-DF | faceswap-Example |
| Face-swapping (FS) | 3 | SimSwap | FF++ and Celeb-DF | simswap-Example |
| Face-swapping (FS) | 4 | InSwapper | FF++ and Celeb-DF | inswap-Example |
| Face-swapping (FS) | 5 | BlendFace | FF++ and Celeb-DF | blendface-Example |
| Face-swapping (FS) | 6 | UniFace | FF++ and Celeb-DF | uniface-Example |
| Face-swapping (FS) | 7 | MobileSwap | FF++ and Celeb-DF | mobileswap-Example |
| Face-swapping (FS) | 8 | e4s | FF++ and Celeb-DF | e4s-Example |
| Face-swapping (FS) | 9 | FaceDancer | FF++ and Celeb-DF | facedancer-Example |
| Face-swapping (FS) | 10 | DeepFaceLab | UADFV | deepfacelab-Example |
| Face-reenactment (FR) | 11 | FOMM | FF++ and Celeb-DF | fomm-Example |
| Face-reenactment (FR) | 12 | FS_vid2vid | FF++ and Celeb-DF | face_vid2vid-Example |
| Face-reenactment (FR) | 13 | Wav2Lip | FF++ and Celeb-DF | wav2lip-Example |
| Face-reenactment (FR) | 14 | MRAA | FF++ and Celeb-DF | mraa-Example |
| Face-reenactment (FR) | 15 | OneShot | FF++ and Celeb-DF | oneshot-Example |
| Face-reenactment (FR) | 16 | PIRender | FF++ and Celeb-DF | pirender-Example |
| Face-reenactment (FR) | 17 | TPSM | FF++ and Celeb-DF | tpsm-Example |
| Face-reenactment (FR) | 18 | LIA | FF++ and Celeb-DF | lia-Example |
| Face-reenactment (FR) | 19 | DaGAN | FF++ and Celeb-DF | dagan-Example |
| Face-reenactment (FR) | 20 | SadTalker | FF++ and Celeb-DF | sadtalker-Example |
| Face-reenactment (FR) | 21 | MCNet | FF++ and Celeb-DF | mcnet-Example |
| Face-reenactment (FR) | 22 | HyperReenact | FF++ and Celeb-DF | hyperreenact-Example |
| Face-reenactment (FR) | 23 | HeyGen | FVHQ | heygen-Example |
| Entire Face Synthesis (EFS) | 24 | VQGAN | Finetuning on FF++ and Celeb-DF | vqgan-Example |
| Entire Face Synthesis (EFS) | 25 | StyleGAN2 | Finetuning on FF++ and Celeb-DF | stylegan2-Example |
| Entire Face Synthesis (EFS) | 26 | StyleGAN3 | Finetuning on FF++ and Celeb-DF | stylegan3-Example |
| Entire Face Synthesis (EFS) | 27 | StyleGAN-XL | Finetuning on FF++ and Celeb-DF | styleganxl-Example |
| Entire Face Synthesis (EFS) | 28 | SD-2.1 | Finetuning on FF++ and Celeb-DF | sd2.1-Example |
| Entire Face Synthesis (EFS) | 29 | DDPM | Finetuning on FF++ and Celeb-DF | ddpm-Example |
| Entire Face Synthesis (EFS) | 30 | RDDM | Finetuning on FF++ and Celeb-DF | rddm-Example |
| Entire Face Synthesis (EFS) | 31 | PixArt-$\alpha$ | Finetuning on FF++ and Celeb-DF | pixart-Example |
| Entire Face Synthesis (EFS) | 32 | DiT-XL/2 | Finetuning on FF++ and Celeb-DF | dit-Example |
| Entire Face Synthesis (EFS) | 33 | SiT-XL/2 | Finetuning on FF++ and Celeb-DF | sit-Example |
| Entire Face Synthesis (EFS) | 34 | MidJourney6 | FFHQ | mj-Example |
| Entire Face Synthesis (EFS) | 35 | WhichisReal | FFHQ | vqgan-Example |
| Face Edit (FE) | 36 | CollabDiff | CelebA | collabdiff-Example |
| Face Edit (FE) | 37 | e4e | CelebA | e4e-Example |
| Face Edit (FE) | 38 | StarGAN | CelebA | stargan-Example |
| Face Edit (FE) | 39 | StarGANv2 | CelebA | starganv2-Example |
| Face Edit (FE) | 40 | StyleCLIP | CelebA | styleclip-Example |

ā³ Quick Start

<a href="#top">[Back to top]</a>

1. Installation

Please run the following script to install the required libraries:

```
sh install.sh
```
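
If you prefer to set up the environment by hand instead of running the script, the sketch below shows one plausible setup. It is only an illustration: the environment name and package list are assumptions, not the pinned versions from install.sh.

```bash
# Hypothetical manual setup; the authoritative dependency list is in install.sh.
conda create -n df40 python=3.9 -y
conda activate df40
pip install torch torchvision   # pick the build matching your CUDA version
pip install numpy opencv-python pyyaml tqdm scikit-learn
```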

2. Download checkpoints for inference

All checkpoints/weights of the ten models trained on our DF40 are released on Google Drive and Baidu Disk.


3. Download DF40 data (after pre-processing)

For quick use and convenience, we provide all DF40 data after the pre-processing used in our research. You do NOT need to run the pre-processing again; you can directly use our processed data.

4. Run inference

You can then run inference using the trained weights from our research.

Example-1: If you want to use the Xception model trained on SimSwap (FF) and test it on BlendFace (FF), run the following lines:

```
cd DeepfakeBench_DF40

python training/test.py \
--detector_path training/config/detector/xception.yaml \
--weights_path training/df40_weights/train_on_fs_matrix/simswap_ff.pth \
--test_dataset blendface_ff
```

Example-2: If you want to use the Xception model trained on SimSwap (FF) and test it on SimSwap (CDF), run the following lines:

```
cd DeepfakeBench_DF40

python training/test.py \
--detector_path training/config/detector/xception.yaml \
--weights_path training/df40_weights/train_on_fs_matrix/simswap_ff.pth \
--test_dataset simswap_cdf
```

Example-3: If you want to use the CLIP model trained on all FS methods (FF) and test it on DeepFaceLab, run the following lines:

```
cd DeepfakeBench_DF40

python training/test.py \
--detector_path training/config/detector/clip.yaml \
--weights_path training/df40_weights/train_on_fs/clip.pth \
--test_dataset deepfacelab
```

šŸ’» Reproduction and Development

<a href="#top">[Back to top]</a>

1. Download DF40 dataset

We provide two ways to download our dataset:

2. Preprocessing (optional)

If you only want to use the processed data we provide, you can skip this step. Otherwise, you can use the following code to do the data preprocessing yourself.

To start preprocessing DF40 dataset, please follow these steps:

  1. Open ./preprocessing/config.yaml and locate the line default: DATASET_YOU_SPECIFY. Replace DATASET_YOU_SPECIFY with the name of the dataset you want to preprocess, such as FaceForensics++.

  2. Specify the dataset_root_path in the config.yaml file. Search for the line that mentions dataset_root_path. By default, it looks like this: dataset_root_path: ./datasets. Replace ./datasets with the actual path to the folder where your dataset is arranged.
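
For reference, after both edits the relevant part of config.yaml might look like the hedged sketch below; only the default and dataset_root_path fields are named in the steps above, and any surrounding structure is an assumption.

```yaml
# Sketch of ./preprocessing/config.yaml after editing (layout partly assumed).
default: FaceForensics++                               # was: DATASET_YOU_SPECIFY
dataset_root_path: /data/deepfake_detection_datasets   # was: ./datasets
```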

Once you have completed these steps, you can proceed with running the following line to do the preprocessing:

```
cd preprocessing

python preprocess.py
```

3. Rearrangement (optional)

"Rearrangment" here means that we need to create a JSON file for each dataset for collecting all frames within different folders.

If you only want to use the processed data we provide, you can skip this step and use the JSON files from our research (Google Drive). Otherwise, you can use the following code to do the data rearrangement yourself.

After the preprocessing above, you will obtain the processed data (e.g., frames, landmarks, and masks) for each dataset you specify. Similarly, you need to set the parameters in ./preprocessing/config.yaml for each dataset. After that, run the following line:

```
cd preprocessing

python rearrange.py
```

After running the above line, you will obtain the JSON files for each dataset in the ./preprocessing/dataset_json folder. The rearranged structure organizes the data in a hierarchical manner, grouping videos based on their labels and data splits (i.e., train, test, validation). Each video is represented as a dictionary entry containing relevant metadata, including file paths, labels, compression levels (if applicable), etc.
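
As a rough illustration, one entry of such a JSON file might look like the hedged sketch below; the exact keys are assumptions inferred from the description above (labels, splits, frame paths, compression levels), not the verbatim schema.

```json
{
  "simswap_ff": {
    "fake": {
      "train": {
        "video_0001": {
          "frames": ["frames/video_0001/000.png", "frames/video_0001/001.png"],
          "label": "fake",
          "compression": "c23"
        }
      },
      "test": {},
      "validation": {}
    }
  }
}
```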

4. Training

Our benchmark includes four standard protocols. You can use the following examples of each protocol to train the models:

(a). Protocol-1: Same Data Domain, Different Forgery Types

First, you can run the following lines to train a model (e.g., if you want to train the Xception model on all FS methods):

```
# Multi-GPU training with DDP (8 GPUs):
python3 -m torch.distributed.launch --nproc_per_node=8 training/train.py \
--detector_path ./training/config/detector/xception.yaml \
--train_dataset FSAll_ff \
--test_dataset FSAll_ff \
--ddp

# Single-GPU training:
python3 training/train.py \
--detector_path ./training/config/detector/xception.yaml \
--train_dataset FSAll_ff \
--test_dataset FSAll_ff
```

Note that we perform both training and evaluation on FSAll_ff (using all FS testing methods of the FF domain as the evaluation set) to select the best checkpoint. Once training is finished, you can use the best checkpoint to evaluate other testing datasets (e.g., all EFS and FR testing methods of the FF domain). Specifically:

```
python3 training/test.py \
--detector_path ./training/config/detector/xception.yaml \
--test_dataset "FSAll_ff" "FRAll_ff" "EFSAll_ff" \
--weights_path ./training/df40_weights/train_on_fs/xception.pth
```

Then, you can obtain evaluation results similar to those reported in Tab. 3 of the manuscript.

(b). Protocol-2: Same Forgery Types, Different Data Domains

Similarly, you can run the following lines for Protocol-2.

```
python3 training/test.py \
--detector_path ./training/config/detector/xception.yaml \
--test_dataset "FSAll_cdf" "FRAll_cdf" "EFSAll_cdf" \
--weights_path ./training/df40_weights/train_on_fs/xception.pth
```

Then, you can obtain evaluation results similar to those reported in Tab. 4 of the manuscript.

(c). Protocol-3: Different Forgery Types, Different Data Domains

Similarly, you can run the following lines for Protocol-3.

```
python3 training/test.py \
--detector_path ./training/config/detector/xception.yaml \
--test_dataset "deepfacelab" "heygen" "whichisreal" "MidJourney" "stargan" "starganv2" "styleclip" "e4e" "CollabDiff" \
--weights_path ./training/df40_weights/train_on_fs/xception.pth
```

Then, you can obtain all evaluation results reported in Tab. 5 of the manuscript.

(d). Protocol-4: Train on One Fake Method, Test on All Other Methods (One-vs-All)

Similarly, you should first train one model (e.g., Xception) on one specific fake method (e.g., SimSwap):

```
python3 training/train.py \
--detector_path ./training/config/detector/xception.yaml \
--train_dataset simswap_ff \
--test_dataset simswap_ff
```

Then run the following lines for evaluation:

```
python3 training/test.py \
--detector_path ./training/config/detector/xception.yaml \
--test_dataset ... (type them one-by-one) \
--weights_path ./training/df40_weights/train_on_fs_matrix/simswap_ff.pth
```

You can also directly use the bash file (./training/test_df40.sh) for convenience, so you do not need to type all fake methods one by one in the terminal; a hedged sketch of such a loop is shown below.
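
For reference, a minimal sketch of such a loop is given below. The method list here is a hypothetical subset (only blendface_ff appears verbatim in the examples above); consult ./training/test_df40.sh for the actual list used in the paper.

```bash
# Hedged sketch: loop an Xception model trained on SimSwap (FF) over several
# FS testing sets; the exact method names may differ from test_df40.sh.
for method in blendface_ff uniface_ff mobileswap_ff e4s_ff facedancer_ff; do
  python3 training/test.py \
    --detector_path ./training/config/detector/xception.yaml \
    --test_dataset "$method" \
    --weights_path ./training/df40_weights/train_on_fs_matrix/simswap_ff.pth
done
```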

Then, you can obtain all evaluation results reported in Fig. 4 of the manuscript.

šŸ‘€ More visual examples

<a href="#top">[Back to top]</a>

  1. Example samples created by FS (face-swapping) methods: Please check here.

  2. Example samples created by FR (face-reenactment) methods: Please check here.

  3. Example samples created by EFS (entire face synthesis) methods: Please check here.

  4. Example samples created by FE (face editing) methods: Please check here.

Folder Structure

```
deepfake_detection_datasets
ā”‚
ā”œā”€ā”€ DF40
ā”‚   ā”œā”€ā”€ fsgan
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ faceswap
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ simswap
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ inswap
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ blendface
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ uniface
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ mobileswap
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ e4s
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ facedancer
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ fomm
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ facevid2vid
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ wav2lip
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ MRAA
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ one_shot_free
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ pirender
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ tpsm
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ lia
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ danet
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ sadtalker
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ mcnet
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ heygen
ā”‚       ā”œā”€ā”€ fake
ā”‚       ā””ā”€ā”€ real
ā”‚   ā”œā”€ā”€ VQGAN
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ StyleGAN2
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ StyleGAN3
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ StyleGANXL
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ sd2.1
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ ddim
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ PixArt
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ DiT
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ SiT
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ MidJourney
ā”‚       ā”œā”€ā”€ fake
ā”‚       ā””ā”€ā”€ real
ā”‚   ā”œā”€ā”€ whichfaceisreal
ā”‚       ā”œā”€ā”€ fake
ā”‚       ā””ā”€ā”€ real
ā”‚   ā”œā”€ā”€ stargan
ā”‚       ā”œā”€ā”€ fake
ā”‚       ā””ā”€ā”€ real
ā”‚   ā”œā”€ā”€ starganv2
ā”‚       ā”œā”€ā”€ fake
ā”‚       ā””ā”€ā”€ real
ā”‚   ā”œā”€ā”€ styleclip
ā”‚       ā”œā”€ā”€ fake
ā”‚       ā””ā”€ā”€ real
ā”‚   ā”œā”€ā”€ e4e
ā”‚       ā”œā”€ā”€ fake
ā”‚       ā””ā”€ā”€ real
ā”‚   ā””ā”€ā”€ CollabDiff
ā”‚       ā”œā”€ā”€ fake
ā”‚       ā””ā”€ā”€ real
ā”‚  
ā”œā”€ā”€ DF40_train
ā”‚   ā”œā”€ā”€ fsgan
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ faceswap
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ simswap
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ inswap
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ blendface
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ uniface
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ mobileswap
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ e4s
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ facedancer
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ fomm
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ facevid2vid
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ wav2lip
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ MRAA
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ one_shot_free
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ pirender
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ tpsm
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ lia
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ danet
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ sadtalker
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ mcnet
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ heygen
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ VQGAN
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ StyleGAN2
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ StyleGAN3
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ StyleGANXL
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ sd2.1
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ ddim
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ PixArt
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ DiT
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ SiT
ā”‚       ā”œā”€ā”€ ff
ā”‚       ā””ā”€ā”€ cdf
ā”‚   ā”œā”€ā”€ MidJourney
ā”‚       ā”œā”€ā”€ fake
ā”‚       ā””ā”€ā”€ real
ā”‚   ā”œā”€ā”€ whichfaceisreal
ā”‚       ā”œā”€ā”€ fake
ā”‚       ā””ā”€ā”€ real
ā”‚   ā”œā”€ā”€ stargan
ā”‚       ā”œā”€ā”€ fake
ā”‚       ā””ā”€ā”€ real
ā”‚   ā”œā”€ā”€ starganv2
ā”‚       ā”œā”€ā”€ fake
ā”‚       ā””ā”€ā”€ real
ā”‚   ā”œā”€ā”€ styleclip
ā”‚       ā”œā”€ā”€ fake
ā”‚       ā””ā”€ā”€ real
ā”‚   ā”œā”€ā”€ e4e
ā”‚       ā”œā”€ā”€ fake
ā”‚       ā””ā”€ā”€ real
ā”‚   ā””ā”€ā”€ CollabDiff
ā”‚       ā”œā”€ā”€ fake
ā”‚       ā””ā”€ā”€ real
```
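
To sanity-check a download against the layout above, a small script like the hedged sketch below can help; the subfolder names are taken from the tree, while the root path and the (abbreviated) method lists are assumptions to adjust for your setup.

```python
import os

# Root of the downloaded data; adjust to your setup.
ROOT = "deepfake_detection_datasets/DF40"

# Abbreviated lists; extend them with the remaining methods from the tree above.
FF_CDF = ["fsgan", "faceswap", "simswap", "inswap", "blendface"]
FAKE_REAL = ["MidJourney", "whichfaceisreal", "stargan", "e4e", "CollabDiff"]

def check(methods, subdirs):
    """Print whether each expected method/subfolder pair exists under ROOT."""
    for method in methods:
        for sub in subdirs:
            path = os.path.join(ROOT, method, sub)
            status = "ok" if os.path.isdir(path) else "MISSING"
            print(f"{status:8s}{path}")

check(FF_CDF, ["ff", "cdf"])
check(FAKE_REAL, ["fake", "real"])
```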

Citations

If you use our DF40 dataset, checkpoints/weights, or code in your research, you must cite DF40 as follows:

```
@article{yan2024df40,
  title={DF40: Toward Next-Generation Deepfake Detection},
  author={Yan, Zhiyuan and Yao, Taiping and Chen, Shen and Zhao, Yandan and Fu, Xinghe and Zhu, Junwei and Luo, Donghao and Yuan, Li and Wang, Chengjie and Ding, Shouhong and others},
  journal={arXiv preprint arXiv:2406.13495},
  year={2024}
}
```

Since our codebase is mainly based on DeepfakeBench, you should also cite it as follows:

```
@inproceedings{DeepfakeBench_YAN_NEURIPS2023,
 author = {Yan, Zhiyuan and Zhang, Yong and Yuan, Xinhang and Lyu, Siwei and Wu, Baoyuan},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {A. Oh and T. Neumann and A. Globerson and K. Saenko and M. Hardt and S. Levine},
 pages = {4534--4565},
 publisher = {Curran Associates, Inc.},
 title = {DeepfakeBench: A Comprehensive Benchmark of Deepfake Detection},
 url = {https://proceedings.neurips.cc/paper_files/paper/2023/file/0e735e4b4f07de483cbe250130992726-Paper-Datasets_and_Benchmarks.pdf},
 volume = {36},
 year = {2023}
}
```

License

The use of both the dataset and the code is RESTRICTED to non-commercial purposes under the Creative Commons Attribution-NonCommercial 4.0 International Public License (CC BY-NC 4.0). See https://creativecommons.org/licenses/by-nc/4.0/ for details.