Exp-GAN: 3D-Aware Facial Image Generation with Expression Control

This repository is the official implementation of the ACCV 2022 paper Exp-GAN: 3D-Aware Facial Image Generation with Expression Control.

Yeonkyeong Lee, Taeho Choi, Hyunsung Go, Hyunjoon Lee, Sunghyun Cho, and Junho Kim.

Installation

Make sure your environment meets the requirements for building pytorch3d, then run:

pip install -r requirements.txt
git clone https://github.com/facebookresearch/pytorch3d.git
cd pytorch3d
git checkout v0.7.0
pip install -e .
cd -
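
As a quick sanity check that the editable install works (the expected version string assumes the v0.7.0 checkout above):

# Verify that pytorch3d imports and reports the pinned version.
import pytorch3d

print(pytorch3d.__version__)  # expected: 0.7.0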

Dataset and model

Download the aligned FFHQ dataset images from the official repository, and place them under data/FFHQ/img.

Annotations of DECA parameters (head pose, shape, and expression) for the FFHQ dataset can be downloaded below; place the files under data/FFHQ/annots.

DECA is used to generate the facial texture. Download the required assets by running:

cd data
sh download_deca.sh
cd -

After these downloads, the data folder should look like this:

data/
├── DECA/
│   ├── data/
│   └── indices_ear_noeye.pkl
├── demo/
│   └── meta_smooth.json
└── FFHQ/
    ├── annots/
    │   ├── ffhq_deca_ear_ortho_flipped.json
    │   └── ffhq_deca_ear_ortho.pkl
    └── img/
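
To verify that everything landed in the expected place, a minimal sketch is below; the internal structure of the annotation files is an assumption, so adapt the inspection to what you actually find:

import json
import os
import pickle

# Aligned FFHQ provides 70,000 images; a shortfall here means the image
# download or placement step went wrong.
print(len(os.listdir("data/FFHQ/img")))

# Load the DECA parameter annotations (head pose, shape, expression).
# The exact layout of these files is an assumption.
with open("data/FFHQ/annots/ffhq_deca_ear_ortho.pkl", "rb") as f:
    params = pickle.load(f)
with open("data/FFHQ/annots/ffhq_deca_ear_ortho_flipped.json") as f:
    flipped = json.load(f)
print(type(params), type(flipped))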

Please refer to experiments/config/config.yaml to see how the data is used.
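
To inspect that configuration programmatically, a small sketch (assuming PyYAML is available in your environment):

import yaml

# Load the training/evaluation config; the key names printed here depend
# on the repository's config schema.
with open("experiments/config/config.yaml") as f:
    cfg = yaml.safe_load(f)
print(sorted(cfg))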

Training

Run the following script to train our model:

sh ./experiments/ffhq/train.sh

Evaluation

A pretrained model can be downloaded here. Place the file at pretrained_model/model_checkpoint.ckpt.
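
To confirm the checkpoint loads before running evaluation, a minimal sketch (the top-level keys of the checkpoint dict are an assumption):

import torch

# Load on CPU just to inspect; the training device is irrelevant here.
ckpt = torch.load("pretrained_model/model_checkpoint.ckpt", map_location="cpu")
print(list(ckpt))  # the available keys depend on how the checkpoint was saved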

Run the following script to generate images for the FID evaluation:

python eval.py --cfg <cfg> --ckpt <ckpt> --savedir <savedir>
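
For example, a plausible invocation using the config and checkpoint paths from this README (results/fake is an arbitrary output directory):

python eval.py --cfg experiments/config/config.yaml --ckpt pretrained_model/model_checkpoint.ckpt --savedir results/fake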

Then run the following to measure the FID between generated and real images:

python fid.py --root_real <root_real> --root_fake <root_fake> --batch_size 50

where <root_real> contains downsampled FFHQ images and <root_fake> contains images generated by eval.py.
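
Continuing the example above, with data/FFHQ/img_256 as a hypothetical directory of downsampled real images:

python fid.py --root_real data/FFHQ/img_256 --root_fake results/fake --batch_size 50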

Demo

Please check demo.ipynb to see how to generate some examples by using a pretrained model.

Pose interpolation

https://user-images.githubusercontent.com/29425882/196860527-eff17dde-0c6f-4a54-82fa-73b169dfb667.mp4

pose_interp

Expression interpolation

https://user-images.githubusercontent.com/29425882/196860094-f403301f-27fa-41be-b6b1-d358f3826b71.mp4

expression_interp

Pose and expression interpolation

https://user-images.githubusercontent.com/29425882/196860427-2a911e91-5be5-4c98-aa06-9455b4c807ce.mp4

Low- and high-resolution results before and after StyleGAN upsampling

https://user-images.githubusercontent.com/29425882/196860562-1e5bd71f-3c76-45ce-9040-f355396dc3d4.mp4

Shape interpolation

shape_interp

Latent vector (w space) interpolation

identity_interp

Contact

This project is maintained by:

License

Copyright (c) 2022 POSTECH, Kookmin University, Kakao Brain Corp. All Rights Reserved. Licensed under the Apache License, Version 2.0 (see LICENSE for details).