Pose Adapted Shape Learning for Large-Pose Face Reenactment

[Overview figure: PASL.png]

Abstract: We propose Pose Adapted Shape Learning (PASL) for large-pose face reenactment. The PASL framework consists of three modules, namely the Pose-Adapted face Encoder (PAE), the Cycle-consistent Shape Generator (CSG), and the Attention-Embedded Generator (AEG). Different from previous approaches that use a single face encoder for identity preservation, we propose multiple Pose-Adapted face Encoders (PAEs) to better preserve facial identity across large poses. Given a source face and a reference face, the CSG generates a recomposed shape that fuses the source identity and reference action in the shape space and meets the cycle consistency requirement. Taking the shape code and the source as inputs, the AEG learns the attention within the shape code and between the shape code and source style to enhance the generation of the desired target face. As existing benchmark datasets are inappropriate for evaluating large-pose face reenactment, we propose a scheme to compose large-pose face pairs and introduce the MPIE-LP (Large Pose) and VoxCeleb2-LP datasets as new large-pose benchmarks. We compare our approach with state-of-the-art methods on MPIE-LP and VoxCeleb2-LP for large-pose performance and on VoxCeleb1 for the common scope of pose variation.
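
For orientation, the sketch below mirrors the data flow described above in PyTorch. All module internals, dimensions, and names (the CSG and AEG classes, SHAPE_DIM, STYLE_DIM) are illustrative placeholders, not the released implementation.

import torch
import torch.nn as nn

SHAPE_DIM, STYLE_DIM = 128, 256  # illustrative sizes

class CSG(nn.Module):
    """Cycle-consistent Shape Generator (sketch): fuses the source
    identity and the reference action in the shape space."""
    def __init__(self):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(2 * SHAPE_DIM, SHAPE_DIM), nn.ReLU(),
            nn.Linear(SHAPE_DIM, SHAPE_DIM))

    def forward(self, src_shape, ref_shape):
        # Recomposed shape code: source identity + reference action.
        return self.fuse(torch.cat([src_shape, ref_shape], dim=-1))

class AEG(nn.Module):
    """Attention-Embedded Generator (sketch): attends within the shape
    code and between the shape code and the source style."""
    def __init__(self):
        super().__init__()
        self.attn = nn.MultiheadAttention(SHAPE_DIM, num_heads=4, batch_first=True)
        self.style_proj = nn.Linear(STYLE_DIM, SHAPE_DIM)
        self.decode = nn.Linear(SHAPE_DIM, 3 * 64 * 64)  # toy image decoder

    def forward(self, shape_code, src_style):
        q = shape_code.unsqueeze(1)                       # (B, 1, D)
        tokens = torch.cat([q, self.style_proj(src_style).unsqueeze(1)], dim=1)
        ctx, _ = self.attn(q, tokens, tokens)             # shape/style attention
        return self.decode(ctx.squeeze(1)).view(-1, 3, 64, 64)

# Toy forward pass with random shape and style codes.
csg, aeg = CSG(), AEG()
src_shape, ref_shape = torch.randn(2, SHAPE_DIM), torch.randn(2, SHAPE_DIM)
src_style = torch.randn(2, STYLE_DIM)
print(aeg(csg(src_shape, ref_shape), src_style).shape)  # torch.Size([2, 3, 64, 64])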

https://github.com/xxxxxx321/PASL/assets/151173571/88824fb8-fcaa-4035-a510-6e5cb1b9abd3

Getting Started

git clone https://github.com/AvLab-CV/PASL.git
cd PASL

Installation

  1. Install the requirements:
     conda env create -f environment.yml

  2. Please refer to PyTorch3D to install pytorch3d (a quick import check is sketched below).
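
Once both steps finish, a minimal import check can confirm the environment is usable (CUDA is optional, so a False there is not necessarily an error):

import torch
import pytorch3d

# Verify the core dependencies load and report their versions.
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("pytorch3d:", pytorch3d.__version__)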

VoxCeleb2-LP Dataset

We offer the VoxCeleb2-LP dataset for download. GDrive

Training and Testing Lists

We provide the training and testing lists for MPIE-LP and VoxCeleb2, as well as the testing list for VoxCeleb1. GDrive

Demo Pretrained Model

Demo pretrained model: GDrive

Please place the checkpoint files in the ./experiment/demo directory.
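
A quick way to confirm the files landed in the right place (the expected filenames depend on the downloaded archive, so this only lists what is present):

from pathlib import Path

# List whatever checkpoint files are present in the demo directory.
ckpt_dir = Path("./experiment/demo")
files = sorted(p.name for p in ckpt_dir.iterdir()) if ckpt_dir.is_dir() else []
print(ckpt_dir, "->", files or "empty; download the demo model first")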

Auxiliary Models

DECA model: Please unzip and place the files in the main directory.

Inference

python demo_cam.py
python demo_video.py
python demo_ui.py

You can use demo_cam.py for a camera demo or demo_video.py for a video demo. We also offer a UI via demo_ui.py.
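
For reference, a camera demo of this kind usually amounts to a capture-process-display loop. The sketch below is a generic OpenCV loop, not the actual demo_cam.py; reenact() is a hypothetical stand-in for the PASL forward pass.

import cv2

def reenact(frame):
    # Placeholder: the real demo would drive the source face with this frame.
    return frame

cap = cv2.VideoCapture(0)                  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("PASL demo", reenact(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()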

Validation

Download the Test Lists.

Please download the test lists for each dataset. Note that you will need to change the paths in the lists to point to your local copies of the data.
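
If the lists store absolute paths, a one-off rewrite such as the following can retarget them. The list filename and the one-path-per-line format are assumptions; adjust them to the files you actually downloaded.

from pathlib import Path

# Hypothetical helper: swap the dataset root prefix in a plain-text list.
old_root, new_root = "/original/dataset/root", "/your/dataset/root"
list_file = Path("test_list_voxceleb2_lp.txt")  # hypothetical filename

lines = list_file.read_text().splitlines()
list_file.write_text("\n".join(l.replace(old_root, new_root) for l in lines) + "\n")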

Validation Pretrained Models

The pretrained models for MPIE-LP, VoxCeleb1, and VoxCeleb2-LP can be downloaded from the following links.

Pretrained Models
MPIE-LP
VoxCeleb1
VoxCeleb2-LP

Please place the models for different datasets in the ./experiment directory.

Generate the Test Samples

Next, you can use test_sample_mpie.py, test_sample_vox1.py, and test_sample_vox2.py to generate test samples for each dataset. The generated images will be placed in the ./expr/eval directory.

python test_sample_mpie.py
python test_sample_vox1.py
python test_sample_vox2.py

Use PAE and ArcFace to Test CSIM

After generating the test samples, you can use mean_pae_csim.py and mean_arcface_csim.py to test CSIM, the cosine similarity between identity embeddings of the source and generated faces. Please download the PAE pretrained model and the ArcFace pretrained model from the following links, and extract them directly to start testing.

Backbone checkpoints:
PAE
ArcFace
python mean_pae_csim.py
python mean_arcface_csim.py
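
For reference, the metric itself reduces to a cosine similarity between embedding vectors; the sketch below shows it with placeholder 512-d embeddings (the real scripts obtain these from the PAE or ArcFace encoders).

import numpy as np

# CSIM: cosine similarity between identity embeddings of the source face
# and the generated face (higher means identity is better preserved).
def csim(emb_a, emb_b):
    a, b = np.asarray(emb_a, dtype=float), np.asarray(emb_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder embeddings; in practice they come from the encoders above.
src, gen = np.random.randn(512), np.random.randn(512)
print(csim(src, gen))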