RestoreFormer++: Towards Real-World Blind Face Restoration from Undegraded Key-Value Pairs

paper_RestoreFormer++   paper_RestoreFormer   code_RestoreFormer++   code_RestoreFormer   demo

This repo is an official implementation of "RestoreFormer++: Towards Real-World Blind Face Restoration from Undegraded Key-Value Pairs".

RestoreFormer++ is an extension of our RestoreFormer. It restores a degraded face image with both fidelity and realness by using powerful fully-spatial attention mechanisms to model the abundant contextual information in the face and its interplay with our reconstruction-oriented high-quality priors. In addition, it introduces an extending degrading model (EDM) that covers more realistic degradation scenarios for synthesizing training data, which improves its robustness and generalization towards real-world scenarios. Our results compared with state-of-the-art methods, and our performance with and without EDM, are shown below:

images/fig1.png (results compared with state-of-the-art methods)

images/fig3.png (performance with and without EDM)
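
For intuition, the core fusion step can be pictured as multi-head cross-attention in which queries come from the degraded face features and keys/values come from the high-quality priors. The following is a minimal, simplified sketch of that idea in PyTorch; the module name, shapes, and residual fusion are illustrative assumptions and do not reflect the actual implementation in this repo.

import torch
import torch.nn as nn

class PriorCrossAttention(nn.Module):
    """Illustrative only: degraded features (queries) attend to HQ priors (keys/values)."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, degraded_feat, hq_prior):
        # degraded_feat: (B, N, dim) flattened spatial features of the degraded face
        # hq_prior:      (B, M, dim) features matched from the reconstruction-oriented HQ dictionary
        fused, _ = self.attn(query=degraded_feat, key=hq_prior, value=hq_prior)
        return self.norm(degraded_feat + fused)  # residual fusion of the two sources

# Toy usage with random tensors
feat = torch.randn(1, 16 * 16, 256)
prior = torch.randn(1, 16 * 16, 256)
print(PriorCrossAttention()(feat, prior).shape)  # torch.Size([1, 256, 256])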

ToDo List

Environment

pip install -r RF_requirements.txt

❗❗❗ Warning: Different versions of pytorch-lightning and omegaconf may lead to errors or different results.
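
If you run into such issues, it helps to first confirm which versions are actually installed, for example:

import omegaconf
import pytorch_lightning
import torch

# Print the versions that most commonly cause mismatches
print("torch:", torch.__version__)
print("pytorch-lightning:", pytorch_lightning.__version__)
print("omegaconf:", omegaconf.__version__)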

Preparations of dataset and models

Dataset:

Model: Both the pretrained models used for training and the trained models of our RestoreFormer and RestoreFormer++ can be obtained from Google Drive. Link these models to ./experiments.
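
If you keep the downloaded checkpoints elsewhere, a small helper like the one below can create the links; the download path is a placeholder and not part of this repo.

from pathlib import Path

downloads = Path("/path/to/downloaded_models")  # placeholder: where the Google Drive files were saved
experiments = Path("experiments")
experiments.mkdir(exist_ok=True)

# Symlink every downloaded checkpoint into ./experiments
for entry in downloads.iterdir():
    link = experiments / entry.name
    if not link.exists():
        link.symlink_to(entry.resolve())
        print(f"linked {link} -> {entry}")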

Gradio Demo

python gradio_demo/app.py
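
Conceptually, the demo just wraps restoration in a Gradio interface. Below is a minimal sketch of that pattern, not the repo's actual app; the restore function is a placeholder for the model call.

import gradio as gr

def restore(image_path):
    # Placeholder: in the real demo this would run RestoreFormer++ on the input image
    return image_path

demo = gr.Interface(
    fn=restore,
    inputs=gr.Image(type="filepath", label="Degraded face"),
    outputs=gr.Image(label="Restored face"),
    title="RestoreFormer++ demo (sketch)",
)

if __name__ == "__main__":
    demo.launch()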

Quick Inference

python inference.py -i data/aligned -o results/RF++/aligned -v RestoreFormer++ -s 2 --aligned --save
python inference.py -i data/raw -o results/RF++/raw -v RestoreFormer++ -s 2 --save
python inference.py -i data/aligned -o results/RF/aligned -v RestoreFormer -s 2 --aligned --save
python inference.py -i data/raw -o results/RF/raw -v RestoreFormer -s 2 --save
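
To sweep both released models over both sample folders in one go, the documented flags can also be driven from a short script, for example:

import subprocess

# Run both models on the aligned and raw sample folders using the flags shown above
for version, short in (("RestoreFormer++", "RF++"), ("RestoreFormer", "RF")):
    for inp, aligned in (("data/aligned", True), ("data/raw", False)):
        tag = "aligned" if aligned else "raw"
        cmd = ["python", "inference.py", "-i", inp, "-o", f"results/{short}/{tag}",
               "-v", version, "-s", "2", "--save"]
        if aligned:
            cmd.append("--aligned")
        subprocess.run(cmd, check=True)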

Note: The related code is borrowed from GFPGAN.

Test

sh scripts/test.sh

scripts/test.sh

# Choose which model to test (keep one uncommented)
# exp_name='RestoreFormer'
exp_name='RestoreFormerPlusPlus'

root_path='experiments'
out_root_path='results'
align_test_path='data/aligned'
# unalign_test_path='data/raw'
tag='test'

outdir=$out_root_path'/'$exp_name'_'$tag

if [ ! -d "$outdir" ]; then
    mkdir -m 777 "$outdir"
fi

CUDA_VISIBLE_DEVICES=0 python -u scripts/test.py \
--outdir $outdir \
-r $root_path'/'$exp_name'/last.ckpt' \
-c 'configs/'$exp_name'.yaml' \
--test_path $align_test_path \
--aligned

Training

sh scripts/run.sh

scripts/run.sh

export BASICSR_JIT=True

# For RestoreFormer
# conf_name='HQ_Dictionary'
# conf_name='RestoreFormer'

# For RestoreFormer++ (train the ROHQD prior first, then the full model; keep one uncommented)
# conf_name='ROHQD'
conf_name='RestoreFormerPlusPlus'

# gpus='0,1,2,3,4,5,6,7'
# node_n=1
# ntasks_per_node=8

root_path='PATH_TO_CHECKPOINTS'

gpus='0,'
node_n=1
ntasks_per_node=1

gpu_n=$(expr $node_n \* $ntasks_per_node)

python -u main.py \
--root-path $root_path \
--base 'configs/'$conf_name'.yaml' \
-t True \
--postfix $conf_name'_gpus'$gpu_n \
--gpus $gpus \
--num-nodes $node_n \
--random-seed True
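
For orientation, training is driven by the YAML configs in configs/ together with PyTorch Lightning. The toy example below only illustrates that omegaconf-plus-Lightning wiring; the model, data, and config keys are placeholders, not the ones used by this repo.

import torch
import pytorch_lightning as pl
from omegaconf import OmegaConf
from torch.utils.data import DataLoader, TensorDataset

# Placeholder config mimicking the "load a YAML, pass it to the Trainer" pattern
cfg = OmegaConf.create({
    "model": {"lr": 1e-4},
    "trainer": {"max_epochs": 1, "accelerator": "cpu", "devices": 1},
})

class ToyModel(pl.LightningModule):
    """Stand-in for the restoration model; only shows the wiring, not the architecture."""

    def __init__(self, lr):
        super().__init__()
        self.net = torch.nn.Linear(8, 8)
        self.lr = lr

    def training_step(self, batch, batch_idx):
        (x,) = batch
        return torch.nn.functional.mse_loss(self.net(x), x)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)

loader = DataLoader(TensorDataset(torch.randn(32, 8)), batch_size=4)
trainer = pl.Trainer(**OmegaConf.to_container(cfg.trainer))
trainer.fit(ToyModel(cfg.model.lr), loader)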

Metrics

sh scripts/metrics/run.sh
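
The exact numbers should come from the scripts under scripts/metrics. Purely as an illustration of the kind of full-reference measures typically reported for this task, here is a tiny PSNR/SSIM example using scikit-image on synthetic data; it is not the repo's own metric implementation.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Synthetic stand-ins for a ground-truth face and its restored counterpart
gt = np.random.rand(512, 512, 3).astype(np.float32)
restored = np.clip(gt + np.random.normal(0, 0.05, gt.shape), 0.0, 1.0).astype(np.float32)

psnr = peak_signal_noise_ratio(gt, restored, data_range=1.0)
ssim = structural_similarity(gt, restored, channel_axis=-1, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")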

Note.

Citation

@article{wang2023restoreformerplusplus,
  title={RestoreFormer++: Towards Real-World Blind Face Restoration from Undegraded Key-Value Pairs},
  author={Wang, Zhouxia and Zhang, Jiawei and Chen, Tianshui and Wang, Wenping and Luo, Ping},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  year={2023}
}

@inproceedings{wang2022restoreformer,
  title={RestoreFormer: High-Quality Blind Face Restoration from Undegraded Key-Value Pairs},
  author={Wang, Zhouxia and Zhang, Jiawei and Chen, Runjian and Wang, Wenping and Luo, Ping},
  booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}

Contact

For any questions, feel free to email wzhoux@connect.hku.hk or zhouzi1212@gmail.com.