<p align="center"> <img src="assets/CodeFormer_logo.png" height=110> </p>

Towards Robust Blind Face Restoration with Codebook Lookup Transformer (NeurIPS 2022)
Paper | Project Page | Video
<a href="https://colab.research.google.com/drive/1m52PNveE4PBhYrecj34cnpEeiHcC5LTb?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>
Shangchen Zhou, Kelvin C.K. Chan, Chongyi Li, Chen Change Loy
S-Lab, Nanyang Technological University
<img src="assets/network.jpg" width="800px"/>

:star: If CodeFormer is helpful to your images or projects, please help star this repo. Thanks! :hugs:
Update
- 2023.07.20: Integrated into :panda_face: OpenXLab. Try out the online demo!
- 2023.04.19: :whale: Training code and config files are now publicly available.
- 2023.04.09: Added inpainting and colorization features for cropped and aligned face images.
- 2023.02.10: Added dlib as a new face detector option; it preserves face identity more accurately.
- 2022.10.05: Added support for video input via --input_path [YOUR_VIDEO.mp4]. Try it to enhance your videos! :clapper:
- 2022.09.14: Integrated into :hugs: Hugging Face. Try out the online demo!
- 2022.09.09: Integrated into :rocket: Replicate. Try out the online demo!
- More
TODO
- Add training code and config files
- Add checkpoint and script for face inpainting
- Add checkpoint and script for face colorization
- Add background image enhancement
:panda_face: Try Enhancing Old Photos / Fixing AI-arts
<img src="assets/imgsli_1.jpg" height="226px"/> <img src="assets/imgsli_2.jpg" height="226px"/> <img src="assets/imgsli_3.jpg" height="226px"/>
Face Restoration
<img src="assets/restoration_result1.png" width="400px"/> <img src="assets/restoration_result2.png" width="400px"/> <img src="assets/restoration_result3.png" width="400px"/> <img src="assets/restoration_result4.png" width="400px"/>
Face Color Enhancement and Restoration
<img src="assets/color_enhancement_result1.png" width="400px"/> <img src="assets/color_enhancement_result2.png" width="400px"/>
Face Inpainting
<img src="assets/inpainting_result1.png" width="400px"/> <img src="assets/inpainting_result2.png" width="400px"/>
Dependencies and Installation
- PyTorch >= 1.7.1
- CUDA >= 10.1
- Other required packages listed in requirements.txt
# git clone this repository
git clone https://github.com/sczhou/CodeFormer
cd CodeFormer
# create new anaconda env
conda create -n codeformer python=3.8 -y
conda activate codeformer
# install python dependencies
pip3 install -r requirements.txt
python basicsr/setup.py develop
conda install -c conda-forge dlib  # only needed for face detection or cropping with dlib
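To quickly check that the environment is set up correctly, you can confirm that PyTorch sees your GPU and that basicsr became importable after the develop install. This is only an optional sketch, not part of the repository:

```python
# Optional sanity check (a sketch, assuming the codeformer env is active):
# verify that PyTorch is installed with CUDA support and that basicsr is
# importable after running `python basicsr/setup.py develop`.
import torch
import basicsr  # import succeeds only if the develop install worked

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```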
Quick Inference
Download Pre-trained Models:
Download the facelib and dlib pretrained models from [Releases | Google Drive | OneDrive] to the weights/facelib folder. You can download the pretrained models manually, or run the following command:
python scripts/download_pretrained_models.py facelib
python scripts/download_pretrained_models.py dlib  # only needed for the dlib face detector
Download the CodeFormer pretrained models from [Releases | Google Drive | OneDrive] to the weights/CodeFormer folder. You can download the pretrained models manually, or run the following command:
python scripts/download_pretrained_models.py CodeFormer
Prepare Testing Data:
Put the testing images in the inputs/TestWhole folder. If you would like to test on cropped and aligned faces, put them in the inputs/cropped_faces folder. You can obtain cropped and aligned faces by running the following command:
# you may need to install dlib via: conda install -c conda-forge dlib
python scripts/crop_align_face.py -i [input folder] -o [output folder]
Testing:
[Note] If you want to compare against CodeFormer in your paper, please run the command with --has_aligned (for cropped and aligned faces). The whole-image command involves a face-background fusion step that may damage the hair texture at the boundary, which would lead to an unfair comparison.
The fidelity weight w lies in [0, 1]. Generally, a smaller w tends to produce a higher-quality result, while a larger w yields a result with higher fidelity to the input. The results will be saved in the results folder.
🧑🏻 Face Restoration (cropped and aligned face)
# For cropped and aligned faces (512x512)
python inference_codeformer.py -w 0.5 --has_aligned --input_path [image folder]|[image path]
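Because the best fidelity weight depends on the input, it can be useful to run the same aligned faces at several values of w and compare the outputs. The loop below is only an illustrative sketch (not part of the repo) wrapping the command above; the input folder is the repo's example folder, and outputs land under the results folder as noted earlier:

```python
# A sketch (not part of the repo): run CodeFormer at several fidelity weights
# on the same cropped/aligned faces to compare quality vs. fidelity.
import subprocess

for w in (0.3, 0.5, 0.7, 0.9):
    subprocess.run(
        ["python", "inference_codeformer.py",
         "-w", str(w), "--has_aligned",
         "--input_path", "inputs/cropped_faces"],
        check=True,
    )
```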
:framed_picture: Whole Image Enhancement
# For whole image
# Add '--bg_upsampler realesrgan' to enhance the background regions with Real-ESRGAN
# Add '--face_upsample' to further upsample the restored faces with Real-ESRGAN
python inference_codeformer.py -w 0.7 --input_path [image folder]|[image path]
:clapper: Video Enhancement
# For Windows/Mac users, please install ffmpeg first
conda install -c conda-forge ffmpeg
# For video clips
# Video path should end with '.mp4'|'.mov'|'.avi'
python inference_codeformer.py --bg_upsampler realesrgan --face_upsample -w 1.0 --input_path [video path]
🌈 Face Colorization (cropped and aligned face)
# For cropped and aligned faces (512x512)
# Colorize black-and-white or faded photos
python inference_colorization.py --input_path [image folder]|[image path]
🎨 Face Inpainting (cropped and aligned face)
# For cropped and aligned faces (512x512)
# Inputs should have the regions to be inpainted masked out with a white brush using an image editing app (e.g., Photoshop)
# (check out the examples in inputs/masked_faces)
python inference_inpainting.py --input_path [image folder]|[image path]
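If you prefer not to use an image editor, the white mask can also be painted programmatically. Below is a minimal sketch using Pillow; the file names, mask coordinates, and the Pillow dependency itself are assumptions for illustration, not part of the repo:

```python
# A sketch (assumes Pillow is installed): paint a white rectangle over the
# region you want inpainted, then pass the saved image to inference_inpainting.py.
# File names and coordinates below are just examples.
from PIL import Image, ImageDraw

face = Image.open("my_aligned_face.png").convert("RGB")      # a 512x512 aligned face
draw = ImageDraw.Draw(face)
draw.rectangle((150, 200, 360, 330), fill=(255, 255, 255))   # white = region to fill
face.save("inputs/masked_faces/my_aligned_face_masked.png")
```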
Training:
The training commands can be found in the documents: English | Simplified Chinese.
Citation
If our work is useful for your research, please consider citing:
@inproceedings{zhou2022codeformer,
author = {Zhou, Shangchen and Chan, Kelvin C.K. and Li, Chongyi and Loy, Chen Change},
title = {Towards Robust Blind Face Restoration with Codebook Lookup TransFormer},
booktitle = {NeurIPS},
year = {2022}
}
License
This project is licensed under <a rel="license" href="https://github.com/sczhou/CodeFormer/blob/master/LICENSE">NTU S-Lab License 1.0</a>. Redistribution and use should follow this license.
Acknowledgement
This project is based on BasicSR. Some code is borrowed from Unleashing Transformers, YOLOv5-face, and FaceXLib. We also adopt Real-ESRGAN to support background image enhancement. Thanks for their awesome work.
Contact
If you have any questions, please feel free to reach out to me at shangchenzhou@gmail.com.