VQFR (ECCV 2022 Oral)
<a href="https://colab.research.google.com/drive/1Nd_PUrHaYmeEAOF5f_Zi0VuOxlJ62gLr?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>
- Colab Demo for VQFR
- Online demo: Replicate.ai (may need to sign in; returns the whole image)
:triangular_flag_on_post: Updates
- :white_check_mark: 2022.10.16: Cleaned the research code and updated VQFR-v2. In this version, we emphasize the restoration quality of the texture branch and balance fidelity with user control. <a href="https://colab.research.google.com/drive/1Nd_PUrHaYmeEAOF5f_Zi0VuOxlJ62gLr?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>
- :white_check_mark: Support enhancing non-face regions (background) with Real-ESRGAN.
- :white_check_mark: The Colab demo of VQFR is created.
- :white_check_mark: The training/inference code and pretrained models used in the paper are released.
This paper investigates the potential and limitations of the Vector-Quantized (VQ) dictionary for blind face restoration. <br> We propose a new framework, VQFR, incorporating a Vector-Quantized Dictionary and a Parallel Decoder. Compared with previous methods, VQFR produces more realistic facial details while keeping comparable fidelity.
VQFR: Blind Face Restoration with Vector-Quantized Dictionary and Parallel Decoder
<p align="center"> <img src="assets/teaser.jpg"> </p>

[Paper] [Project Page] [Video] [B站] [Poster] [Slides]<br>
Yuchao Gu, Xintao Wang, Liangbin Xie, Chao Dong, Gen Li, Ying Shan, Ming-Ming Cheng<br>
Nankai University; Tencent ARC Lab; Tencent Online Video; Shanghai AI Laboratory;<br>
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
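As a quick illustration of the vector-quantized dictionary idea described in the overview above, the sketch below shows a basic codebook lookup in PyTorch: each encoder feature is replaced by its nearest entry in a learned dictionary, with a straight-through estimator for gradients. This is a minimal conceptual sketch, not the actual VQFR implementation; the class name and the `codebook_size`/`feat_dim` values are assumptions for the example.

```python
import torch
import torch.nn as nn

class NearestCodebookLookup(nn.Module):
    """Minimal VQ lookup: replace each feature with its nearest codebook entry."""

    def __init__(self, codebook_size: int = 1024, feat_dim: int = 256):
        super().__init__()
        # Learned dictionary of high-quality feature "atoms".
        self.codebook = nn.Embedding(codebook_size, feat_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (N, feat_dim) flattened spatial features from an encoder.
        dists = torch.cdist(feats, self.codebook.weight)   # (N, codebook_size)
        indices = dists.argmin(dim=1)                       # nearest entry per feature
        quantized = self.codebook(indices)                  # (N, feat_dim)
        # Straight-through estimator so gradients flow back to the encoder.
        return feats + (quantized - feats).detach()

# Example: quantize 16 encoder features of dimension 256.
vq = NearestCodebookLookup()
out = vq(torch.randn(16, 256))
print(out.shape)  # torch.Size([16, 256])
```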
:wrench: Dependencies and Installation
- Python >= 3.7 (Anaconda or Miniconda is recommended)
- PyTorch >= 1.7
- Optional: NVIDIA GPU + CUDA
- Optional: Linux
Installation
- Clone repo

  ```bash
  git clone https://github.com/TencentARC/VQFR.git
  cd VQFR
  ```
- Install dependent packages

  ```bash
  # Build VQFR with extension
  pip install -r requirements.txt
  VQFR_EXT=True python setup.py develop

  # The following packages are required to run demo.py
  # Install basicsr - https://github.com/xinntao/BasicSR
  pip install basicsr
  # Install facexlib - https://github.com/xinntao/facexlib
  # We use the face detection and face restoration helpers in the facexlib package
  pip install facexlib
  # If you want to enhance the background (non-face) regions with Real-ESRGAN,
  # you also need to install the realesrgan package
  pip install realesrgan
  ```
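Optionally, you can run a quick check that the main packages resolve from your environment. This is just a convenience snippet (not an official verification step) and assumes the editable install above completed without errors.

```python
# Quick sanity check that the installed packages are importable.
import importlib

for name in ("vqfr", "basicsr", "facexlib"):
    try:
        importlib.import_module(name)
        print(f"{name}: OK")
    except ImportError as exc:
        print(f"{name}: missing ({exc})")
```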
:zap: Quick Inference
Download pre-trained VQFRv1/v2 models [Google Drive].
Inference
```bash
# for real-world images
python demo.py -i inputs/whole_imgs -o results -v 2.0 -s 2 -f 0.1

# for cropped faces
python demo.py -i inputs/cropped_faces/ -o results -v 2.0 -s 1 -f 0.1 --aligned
```
```
Usage: python demo.py -i inputs/whole_imgs -o results -v 2.0 -s 2 -f 0.1 [options]...

  -h                   show this help
  -i input             Input image or folder. Default: inputs/whole_imgs
  -o output            Output folder. Default: results
  -v version           VQFR model version. Option: 1.0 | 2.0. Default: 2.0
  -f fidelity_ratio    VQFRv2 supports a user-controllable fidelity ratio, ranging from 0 to 1. 0 for the best quality and 1 for the best fidelity. Default: 0
  -s upscale           The final upsampling scale of the image. Default: 2
  -bg_upsampler        Background upsampler. Default: realesrgan
  -bg_tile             Tile size for the background upsampler, 0 for no tiling during testing. Default: 400
  -suffix              Suffix of the restored faces
  -only_center_face    Only restore the center face
  -aligned             Inputs are aligned faces
  -ext                 Image extension. Options: auto | jpg | png; auto means using the same extension as the input. Default: auto
```
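Since `-f` trades perceptual quality against fidelity in VQFR v2, it can help to render the same inputs at several ratios and compare them side by side. The sketch below is one possible way to do that by invoking `demo.py` from Python; only the CLI flags documented above are used, and the per-ratio output sub-folders are an assumption for the example.

```python
import subprocess
from pathlib import Path

# Hypothetical sweep: restore the same inputs at several fidelity ratios (VQFR v2)
# and write each run into its own results sub-folder for side-by-side comparison.
input_dir = "inputs/whole_imgs"
for ratio in (0.0, 0.25, 0.5, 0.75, 1.0):
    out_dir = Path("results") / f"fidelity_{ratio:.2f}"
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "python", "demo.py",
            "-i", input_dir,
            "-o", str(out_dir),
            "-v", "2.0",       # fidelity control requires the v2 model
            "-s", "2",
            "-f", str(ratio),  # 0 = best quality, 1 = best fidelity
        ],
        check=True,
    )
```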
:computer: Training
We provide the training code for VQFR (as used in our paper).
- Dataset preparation: FFHQ
- Download lpips weights [Google Drive] into experiments/pretrained_models/
Codebook Training
- Pre-train the VQ codebook on the FFHQ dataset.

  ```bash
  python -m torch.distributed.launch --nproc_per_node=8 --master_port=2022 vqfr/train.py -opt options/train/VQGAN/train_vqgan_v1_B16_800K.yml --launcher pytorch
  ```
- Or download our pretrained VQ codebook from Google Drive and put it in the `experiments/pretrained_models` folder.
Restoration Training
- Modify the configuration file `options/train/VQFR/train_vqfr_v1_B16_200K.yml` accordingly.

- Training

  ```bash
  python -m torch.distributed.launch --nproc_per_node=8 --master_port=2022 vqfr/train.py -opt options/train/VQFR/train_vqfr_v1_B16_200K.yml --launcher pytorch
  ```
:straight_ruler: Evaluation
We evaluate VQFR on one synthetic dataset (CelebA-Test) and three real-world datasets (LFW-Test, CelebChild, and WebPhoto-Test). To reproduce our evaluation results, perform the following steps:
- Download the testing datasets (or the VQFR results) from the following links:
- Install the related packages and download the pretrained models for the different metrics:
  ```bash
  # LPIPS
  pip install lpips

  # Deg.
  cd metric_paper/
  git clone https://github.com/ronghuaiyang/arcface-pytorch.git
  mv arcface-pytorch/ arcface/
  rm arcface/config/__init__.py arcface/models/__init__.py

  # put the pretrained models of the different metrics into "experiments/pretrained_models/metric_weights/"
  ```
<table>
<tr>
<th>Metrics</th>
<th>Pretrained Weights</th>
<th>Download</th>
</tr>
<tr>
<td>FID</td>
<td>inception_FFHQ_512.pth</td>
<td rowspan="3"><a href="https://drive.google.com/drive/folders/1k3RCSliF6PsujCMIdCD1hNM63EozlDIZ?usp=sharing">Google Drive</a> </td>
</tr>
<tr>
<td>Deg</td>
<td>resnet18_110.pth</td>
</tr>
<tr>
<td>LMD</td>
<td>alignment_WFLW_4HG.pth</td>
</tr>
</table>
- Generate restoration results:

  - Specify `dataset_lq`/`dataset_gt` to the testing dataset root in `test_vqfr_v1.yml`.

  - Then run the following command:

    ```bash
    python vqfr/test.py -opt options/test/VQFR/test_vqfr_v1.yml
    ```
- Run evaluation:

  ```bash
  # LPIPS|PSNR/SSIM|LMD|Deg.
  python metric_paper/[calculate_lpips.py|calculate_psnr_ssim.py|calculate_landmark_distance.py|calculate_cos_dist.py] -restored_folder folder_to_results -gt_folder folder_to_gt

  # FID|NIQE
  python metric_paper/[calculate_fid_folder.py|calculate_niqe.py] -restored_folder folder_to_results
  ```
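For a quick sanity check before running the full metric scripts, the `lpips` package installed above can also be used directly on a single restored/ground-truth pair. The snippet below is only an illustrative example (the file names and the `alex` backbone are assumptions); use the official `metric_paper` scripts above to reproduce the reported numbers.

```python
import lpips
import torch
from PIL import Image
import torchvision.transforms.functional as TF

# Hypothetical paths: one restored image and its ground-truth counterpart.
restored = Image.open("folder_to_results/00000.png").convert("RGB")
gt = Image.open("folder_to_gt/00000.png").convert("RGB")

def to_lpips_tensor(img):
    # LPIPS expects NCHW tensors scaled to [-1, 1].
    t = TF.to_tensor(img) * 2.0 - 1.0
    return t.unsqueeze(0)

loss_fn = lpips.LPIPS(net="alex")  # backbone choice here is an example, not the paper's setting
with torch.no_grad():
    dist = loss_fn(to_lpips_tensor(restored), to_lpips_tensor(gt))
print(f"LPIPS: {dist.item():.4f}")
```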
:scroll: License
VQFR is released under Apache License Version 2.0.
:eyes: Acknowledgement
Thanks to the following open-source projects:
:clipboard: Citation
```bibtex
@inproceedings{gu2022vqfr,
  title={VQFR: Blind Face Restoration with Vector-Quantized Dictionary and Parallel Decoder},
  author={Gu, Yuchao and Wang, Xintao and Xie, Liangbin and Dong, Chao and Li, Gen and Shan, Ying and Cheng, Ming-Ming},
  year={2022},
  booktitle={ECCV}
}
```
:e-mail: Contact
If you have any questions, please email yuchaogu9710@gmail.com.