<div align="center">

Arc2Face: A Foundation Model for ID-Consistent Human Faces

Foivos Paraperas Papantoniou<sup>1</sup> &emsp; Alexandros Lattas<sup>1</sup> &emsp; Stylianos Moschoglou<sup>1</sup>

Jiankang Deng<sup>1</sup> &emsp; Bernhard Kainz<sup>1,2</sup> &emsp; Stefanos Zafeiriou<sup>1</sup>

<sup>1</sup>Imperial College London, UK <br> <sup>2</sup>FAU Erlangen-Nürnberg, Germany

<a href='https://arc2face.github.io/'><img src='https://img.shields.io/badge/Project-Page-blue'></a> <a href='https://arxiv.org/abs/2403.11641'><img src='https://img.shields.io/badge/Paper-arXiv-red'></a> <a href='https://huggingface.co/spaces/FoivosPar/Arc2Face'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Demo-green'></a> <a href='https://huggingface.co/FoivosPar/Arc2Face'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-orange'></a> <a href='https://huggingface.co/datasets/FoivosPar/Arc2Face'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Data-8A2BE2'></a>

</div>

This is the official implementation of Arc2Face, an ID-conditioned face model:

✅ that generates high-quality images of any subject, given only its ArcFace embedding, within a few seconds<br>
✅ that is trained on the large-scale WebFace42M dataset and offers superior ID similarity compared to existing models<br>
✅ that is built on top of Stable Diffusion and can be extended to different input modalities, e.g. with ControlNet<br>

<img src='assets/teaser.gif'>

Installation

conda create -n arc2face python=3.10
conda activate arc2face

# Install requirements
pip install -r requirements.txt

Download Models

  1. The models can be downloaded manually from HuggingFace or using python:
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arc2face/config.json", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arc2face/diffusion_pytorch_model.safetensors", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="encoder/config.json", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="encoder/pytorch_model.bin", local_dir="./models")
  2. For face detection and ID-embedding extraction, manually download the antelopev2 package (direct link) and place the checkpoints under models/antelopev2 (see the shell sketch after this list).

  3. We use an ArcFace recognition model trained on WebFace42M. Download arcface.onnx from HuggingFace and put it in models/antelopev2, either manually or using python:

hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arcface.onnx", local_dir="./models/antelopev2")
  4. Then delete glintr100.onnx (the default backbone from insightface).
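
For reference, steps 2-4 can also be carried out from the shell. The snippet below is only a minimal sketch: the archive name antelopev2.zip and its internal layout are assumptions, so adjust the paths to whatever you actually downloaded (the CLI call also requires a reasonably recent huggingface_hub):

```bash
# Place the antelopev2 checkpoints (archive name/layout assumed - adjust as needed)
mkdir -p models
unzip antelopev2.zip -d models/   # should end up as models/antelopev2/*.onnx

# Fetch the WebFace42M ArcFace backbone and remove the default insightface one
huggingface-cli download FoivosPar/Arc2Face arcface.onnx --local-dir models/antelopev2
rm models/antelopev2/glintr100.onnx
```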

The models folder structure should finally be:

  models
  ├── antelopev2
  ├── arc2face
  └── encoder

Usage

Load pipeline using diffusers:

from diffusers import (
    StableDiffusionPipeline,
    UNet2DConditionModel,
    DPMSolverMultistepScheduler,
)

from arc2face import CLIPTextModelWrapper, project_face_embs

import torch
from insightface.app import FaceAnalysis
from PIL import Image
import numpy as np

# Arc2Face is built upon SD1.5
# The repo below can be used instead of the now deprecated 'runwayml/stable-diffusion-v1-5'
base_model = 'stable-diffusion-v1-5/stable-diffusion-v1-5'

encoder = CLIPTextModelWrapper.from_pretrained(
    'models', subfolder="encoder", torch_dtype=torch.float16
)

unet = UNet2DConditionModel.from_pretrained(
    'models', subfolder="arc2face", torch_dtype=torch.float16
)

pipeline = StableDiffusionPipeline.from_pretrained(
        base_model,
        text_encoder=encoder,
        unet=unet,
        torch_dtype=torch.float16,
        safety_checker=None
    )

You can use any SD-compatible scheduler and number of steps, just like with Stable Diffusion. By default, we use DPMSolverMultistepScheduler with 25 steps, which produces very good results in just a few seconds.

pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
pipeline = pipeline.to('cuda')
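
As an example of this, any other SD-compatible scheduler can be configured from the existing one in the same way (the Euler ancestral scheduler below is just an illustrative choice, not a recommendation):

```python
from diffusers import EulerAncestralDiscreteScheduler

# Optional: replace the default DPM-Solver++ scheduler with another SD-compatible one
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config)
```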

Pick an image and extract the ID-embedding:

app = FaceAnalysis(name='antelopev2', root='./', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))

img = np.array(Image.open('assets/examples/joacquin.png'))[:,:,::-1]  # RGB -> BGR, as expected by insightface

faces = app.get(img)
faces = sorted(faces, key=lambda x:(x['bbox'][2]-x['bbox'][0])*(x['bbox'][3]-x['bbox'][1]))[-1]  # select largest face (if more than one detected)
id_emb = torch.tensor(faces['embedding'], dtype=torch.float16)[None].cuda()
id_emb = id_emb/torch.norm(id_emb, dim=1, keepdim=True)   # normalize embedding
id_emb = project_face_embs(pipeline, id_emb)    # pass through the encoder
<div align="center"> <img src='assets/examples/joacquin.png' style='width:25%;'> </div>

Generate images:

num_images = 4
images = pipeline(prompt_embeds=id_emb, num_inference_steps=25, guidance_scale=3.0, num_images_per_prompt=num_images).images
<div align="center"> <img src='assets/samples.jpg'> </div>
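
The pipeline returns standard PIL images, so they can be saved directly (the filenames below are arbitrary):

```python
# Write the generated samples to disk
for i, im in enumerate(images):
    im.save(f'arc2face_sample_{i}.png')
```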

LCM-LoRA acceleration

LCM-LoRA allows you to reduce the sampling steps to as few as 2-4 for super-fast inference. Just plug in the pre-trained distillation adapter for SD v1.5 and switch to LCMScheduler:

from diffusers import LCMScheduler

pipeline.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config)

Then, you can sample with as few as 2 steps (and effectively disable classifier-free guidance by setting guidance_scale to 1.0, as LCM is very sensitive to it and even small values lead to oversaturation):

images = pipeline(prompt_embeds=id_emb, num_inference_steps=2, guidance_scale=1.0, num_images_per_prompt=num_images).images

Note that this technique accelerates sampling in exchange for a slight drop in quality.

Start a local gradio demo

You can start a local demo for inference by running:

python gradio_demo/app.py

Arc2Face + ControlNet (pose)

<div align="center"> <img src='assets/controlnet.jpg'> </div>

We provide a ControlNet model trained on top of Arc2Face for pose control. We use EMOCA for 3D pose extraction. To run our demo, follow the steps below (a minimal programmatic usage sketch is also included at the end of this section):

1) Download Model

Download the ControlNet checkpoint manually from HuggingFace or using python:

from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="controlnet/config.json", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="controlnet/diffusion_pytorch_model.safetensors", local_dir="./models")

2) Pull EMOCA

git submodule update --init external/emoca

3) Installation

This is the trickiest part. You will need PyTorch3D to run EMOCA. As its installation may cause conflicts, we suggest following the process below:

  1. Create a new environment and start by installing PyTorch3D with GPU support first (follow the official instructions; a conda-based sketch is also given after this list).
  2. Add Arc2Face + EMOCA requirements with:
pip install -r requirements_controlnet.txt
  3. Install the EMOCA code:
pip install -e external/emoca
  4. Finally, you need to download the EMOCA/FLAME assets. Run the following and follow the instructions in the terminal:
cd external/emoca/gdl_apps/EMOCA/demos 
bash download_assets.sh
cd ../../../../..
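
As an illustration of step 1, one possible conda-based route is sketched below. The Python/CUDA versions and channels follow the public PyTorch3D installation notes but are assumptions here, so defer to the official instructions if they conflict with your setup:

```bash
# Hypothetical environment sketch - versions/channels are assumptions, check the PyTorch3D install guide
conda create -n arc2face_controlnet python=3.9
conda activate arc2face_controlnet
conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install pytorch3d -c pytorch3d
```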

4) Start a local gradio demo

You can start a local ControlNet demo by running:

python gradio_demo/app_controlnet.py
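
If you prefer to use the pose ControlNet programmatically instead of through the demo, the sketch below shows one way to wire it into diffusers. It assumes the models are downloaded as above, that id_emb has been obtained exactly as in the Usage section, and that a pose-conditioning image (e.g. an EMOCA-style rendering) is already available as a PIL image pose_img; these names and the exact conditioning format are assumptions, so treat gradio_demo/app_controlnet.py as the authoritative reference for the pre-processing.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UNet2DConditionModel
from arc2face import CLIPTextModelWrapper

# Arc2Face components plus the pose ControlNet (paths as in the download steps above)
encoder = CLIPTextModelWrapper.from_pretrained('models', subfolder="encoder", torch_dtype=torch.float16)
unet = UNet2DConditionModel.from_pretrained('models', subfolder="arc2face", torch_dtype=torch.float16)
controlnet = ControlNetModel.from_pretrained('models', subfolder="controlnet", torch_dtype=torch.float16)

pipeline = StableDiffusionControlNetPipeline.from_pretrained(
    'stable-diffusion-v1-5/stable-diffusion-v1-5',
    text_encoder=encoder,
    unet=unet,
    controlnet=controlnet,
    torch_dtype=torch.float16,
    safety_checker=None,
).to('cuda')

# id_emb: projected ID embedding from the Usage section; pose_img: assumed pose condition (PIL image)
images = pipeline(
    prompt_embeds=id_emb,
    image=pose_img,
    num_inference_steps=25,
    guidance_scale=3.0,
    num_images_per_prompt=4,
).images
```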

Test Data

The test images used for comparisons in the paper (Synth-500, AgeDB) are available here. Please use them only for evaluation purposes and make sure to cite the corresponding sources when using them.

Community Resources

Replicate Demo

ComfyUI

Pinokio

Acknowledgements

Citation

If you find Arc2Face useful for your research, please consider citing us:

@inproceedings{paraperas2024arc2face,
      title={Arc2Face: A Foundation Model for ID-Consistent Human Faces}, 
      author={Paraperas Papantoniou, Foivos and Lattas, Alexandros and Moschoglou, Stylianos and Deng, Jiankang and Kainz, Bernhard and Zafeiriou, Stefanos},
      booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
      year={2024}
}