On the use of Stable Diffusion for creating realistic faces: from generation to detection

This repo contains the code used to generate the fake dataset proposed in the paper, as well as the code used for the overall analysis. The other datasets cited and linked are NOT proposed by us; the credits go to their original creators.

Abstract

The mass adoption of diffusion models has shown that artificial intelligence (AI) systems can be used to easily generate realistic images. The spread of these technologies paves the way to previously unimaginable creative uses while also raising the possibility of malicious applications. In this work, we propose a critical analysis of the overall pipeline, i.e., from creating realistic human faces with Stable Diffusion v1.5 to recognizing fake ones. We first propose an analysis of the prompts that allow the generation of extremely realistic faces with a human-in-the-loop approach. Our objective is to identify the text prompts that drive the image generation process to obtain realistic photos that resemble everyday portraits captured with any camera. Next, we study how complex it is to recognize these fake contents for both AI-based models and non-expert humans. We conclude that similar to other deepfake creation techniques, despite some limitations in generalization across different datasets, it is possible to use AI to recognize these contents more accurately than non-expert humans would.

[Paper]

Dataset

Sample images, left to right: Real (FFHQ), Stable Diffusion (ours), GAN, GAN2, GAN3.

The fake dataset that we propose is available in the Drive folder: Stable Diffusion fakes. The dataset was created using the prompts inside the prompts.txt file. Each image's name is structured in this way:

<#prompt>-<#process>

where <#prompt> is the number of the associated prompt in the prompts.txt file, numbered from 0, and <#process> is the number representing the order in which the image was generated, starting from 0 (each of these images is generated using a different seed). For example, the file named 0-0 is the first image generated from the first prompt in the prompts file, while the one named 248-70 is the 71st image generated from the 249th prompt in the file (a small parsing sketch follows the example table below):

| Generated Image | Prompt |
| --- | --- |
| ... | ... |
| (image) | headshot portrait of a nigerian man, real life, realistic background, 50mm, Facebook, Instagram, shot on iPhone, HD, HDR color, 4k, natural lighting, photography |
| ... | ... |
| (image) | headshot portrait of an old woman with braids blonde hair, real life, shot on iPhone, realistic background, HD, HDR color, 4k, natural lighting, photography, Facebook, Instagram, Pexels, Flickr, Unsplash, 50mm, 85mm, #wow, AMAZING, epic details, epic, beautiful face, fantastic, cinematic, dramatic lighting |
| ... | ... |
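
To make the naming concrete, here is a minimal parsing sketch (not part of the repo) that maps a generated file name back to its prompt, assuming prompts.txt holds one prompt per line:

```python
from pathlib import Path

# Load the prompts; line i corresponds to prompt index i (numbered from 0).
with open("prompts.txt") as f:
    prompts = [line.strip() for line in f]

def describe(image_name: str) -> str:
    # File names follow the <#prompt>-<#process> convention, e.g. "248-70.png".
    prompt_idx, process_idx = map(int, Path(image_name).stem.split("-"))
    return (f"image #{process_idx + 1} generated from prompt #{prompt_idx + 1}: "
            f"{prompts[prompt_idx]}")

print(describe("248-70.png"))
```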

New images can be generated using the code in main.py. To generate images from different prompts, update the prompts.txt file.

The other datasets used in this project for detection and classification purposes were taken from external resources. They are:

How to run the code

Fake Images Generation

To generate the images we used Stable Diffusion v1.5 from Hugging Face. The code (main.py) is ready to run, since the model's license no longer needs to be explicitly accepted through the UI.
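
For orientation, here is a minimal sketch of such a generation loop, assuming the diffusers library and the standard runwayml/stable-diffusion-v1-5 checkpoint; the exact settings used for the dataset are those in main.py:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: the standard Hugging Face checkpoint for Stable Diffusion v1.5.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

with open("prompts.txt") as f:
    prompts = [line.strip() for line in f if line.strip()]

IMAGES_PER_PROMPT = 2  # illustrative; the dataset uses many images per prompt

for prompt_idx, prompt in enumerate(prompts):
    for process_idx in range(IMAGES_PER_PROMPT):
        # A different seed per image, matching the naming convention above.
        generator = torch.Generator("cuda").manual_seed(process_idx)
        image = pipe(prompt, generator=generator).images[0]
        image.save(f"{prompt_idx}-{process_idx}.png")
```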

Binary Classification

Fake Dataset

The fake dataset must be downloaded from the link stated above. It is already split, but each zip must be extracted and put into the folders:

each one inside a subfolder named fake.

Real Dataset

The real dataset must be split by running the split_real_dataset.py file before running the classifier code. (The FFHQ dataset must first be downloaded from the link stated above and moved into the datasets folder, naming the subfolder containing all the images archive.)
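
For reference, a hypothetical sketch of what such a split could look like; the actual split ratios and output folder names are those defined in split_real_dataset.py:

```python
import random
import shutil
from pathlib import Path

# Assumption: FFHQ images sit in datasets/archive; the ratios and output
# folder names below are illustrative, not taken from split_real_dataset.py.
src = Path("datasets/archive")
images = sorted(src.glob("*.png"))
random.seed(0)
random.shuffle(images)

n = len(images)
splits = {
    "train": images[: int(0.8 * n)],
    "val": images[int(0.8 * n) : int(0.9 * n)],
    "test": images[int(0.9 * n) :],
}

for split, files in splits.items():
    out = Path("datasets") / split / "real"
    out.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.copy(f, out / f.name)
```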

Data preprocessing

In order to balance the two datasets, we removed the children from the real one (because we avoided generating children) using the code available here. The weights folder is empty; it has to be filled with the model downloadable from the websites cited in the remove_children.py file. The algorithm used is not ours, so the credits go to the original authors.
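
The filtering step can be pictured as below; estimate_age is a hypothetical stand-in for the third-party model loaded in remove_children.py, and the age threshold is an assumption:

```python
from pathlib import Path

AGE_THRESHOLD = 18  # assumption: "children" means predicted age below 18

def estimate_age(image_path: Path) -> float:
    # Hypothetical stand-in: remove_children.py runs a third-party age
    # estimator here, using the weights downloaded into the weights folder.
    raise NotImplementedError

for img in Path("datasets/train/real").glob("*.png"):
    if estimate_age(img) < AGE_THRESHOLD:
        img.unlink()  # drop portraits of children to keep the datasets balanced
```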

Multi-class classification

To run the 5_classes_classifier, one has to download the StyleGAN datasets from the links stated above, then move them into the following folders:

and run the split_5classes_dataset.py file.

Cross-validation

To run the cross_validation.py file, one needs the models trained in binary_classifier.py. The needed checkpoints are available in the Drive folder. The file will try to load these models from the path ./<model_name>/<transformation_type>_<compression_type>_best.pth, so make sure to put them in the right folders.
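
For illustration, a minimal sketch of how a checkpoint at that path could be loaded; the model and type names below are hypothetical example values, not the ones used in the paper:

```python
import torch

model_name = "resnet50"           # hypothetical example value
transformation_type = "baseline"  # hypothetical example value
compression_type = "raw"          # hypothetical example value

# cross_validation.py is described as expecting checkpoints under this layout.
ckpt_path = f"./{model_name}/{transformation_type}_{compression_type}_best.pth"
state_dict = torch.load(ckpt_path, map_location="cpu")
```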

Authors

Lorenzo Papa, Lorenzo Faiella, Luca Corvitto, Luca Maiano, Irene Amerini.

Cite us

L. Papa, L. Faiella, L. Corvitto, L. Maiano and I. Amerini, "On the use of Stable Diffusion for creating realistic faces: from generation to detection," 2023 11th International Workshop on Biometrics and Forensics (IWBF), Barcelona, Spain, 2023, pp. 1-6, doi: 10.1109/IWBF57495.2023.10156981.