<p align="center"> <img src="assets/logo.png" width="400"> </p>

DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior

Paper | Project Page


Xinqi Lin<sup>1,*</sup>, Jingwen He<sup>2,3,*</sup>, Ziyan Chen<sup>1</sup>, Zhaoyang Lyu<sup>2</sup>, Bo Dai<sup>2</sup>, Fanghua Yu<sup>1</sup>, Wanli Ouyang<sup>2</sup>, Yu Qiao<sup>2</sup>, Chao Dong<sup>1,2</sup>

<sup>1</sup>Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences<br><sup>2</sup>Shanghai AI Laboratory<br><sup>3</sup>The Chinese University of Hong Kong

<p align="center"> <img src="assets/teaser.png"> </p>
<p align="center"> <img src="assets/pipeline.png"> </p>

:star: If DiffBIR is helpful to you, please help star this repo. Thanks! :hugs:

:book:Table Of Contents

<a name="update"></a>:new:Update

<a name="visual_results"></a>:eyes:Visual Results On Real-world Images

Blind Image Super-Resolution

<img src="assets/visual_results/bsr6.png" height="223px"/> <img src="assets/visual_results/bsr7.png" height="223px"/> <img src="assets/visual_results/bsr4.png" height="223px"/>

<!-- [<img src="assets/visual_results/bsr1.png" height="223px"/>](https://imgsli.com/MTk5ODIy) [<img src="assets/visual_results/bsr2.png" height="223px"/>](https://imgsli.com/MTk5ODIz) [<img src="assets/visual_results/bsr3.png" height="223px"/>](https://imgsli.com/MTk5ODI0) [<img src="assets/visual_results/bsr5.png" height="223px"/>](https://imgsli.com/MjAxMjM0) --> <!-- [<img src="assets/visual_results/bsr1.png" height="223px"/>](https://imgsli.com/MTk5ODIy) [<img src="assets/visual_results/bsr5.png" height="223px"/>](https://imgsli.com/MjAxMjM0) -->

Blind Face Restoration

<!-- [<img src="assets/visual_results/bfr1.png" height="223px"/>](https://imgsli.com/MTk5ODI5) [<img src="assets/visual_results/bfr2.png" height="223px"/>](https://imgsli.com/MTk5ODMw) [<img src="assets/visual_results/bfr4.png" height="223px"/>](https://imgsli.com/MTk5ODM0) -->

<img src="assets/visual_results/whole_image1.png" height="370"/> <img src="assets/visual_results/whole_image2.png" height="370"/>

:star: Both the face and the background are enhanced by DiffBIR.

Blind Image Denoising

<img src="assets/visual_results/bid1.png" height="215px"/> <img src="assets/visual_results/bid3.png" height="215px"/> <img src="assets/visual_results/bid2.png" height="215px"/>

8x Blind Super-Resolution With Patch-based Sampling

I often think of Bag End. I miss my books and my arm chair, and my garden. See, that's where I belong. That's home. --- Bilbo Baggins

<img src="assets/visual_results/tiled_sampling.png" height="480px"/>

<a name="todo"></a>:climbing:TODO

<a name="installation"></a>:gear:Installation

# clone this repo
git clone https://github.com/XPixelGroup/DiffBIR.git
cd DiffBIR

# create environment
conda create -n diffbir python=3.10
conda activate diffbir
pip install -r requirements.txt

Our new code is based on PyTorch 2.2.2 for its built-in support of memory-efficient attention. If you are working on a GPU that is not compatible with the latest PyTorch, downgrade PyTorch to 1.13.1+cu116 and install xformers 0.0.16 as an alternative.
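
If you need that fallback setup, here is a minimal sketch of the downgrade (the `cu116` suffix is an assumption matching the version above; adjust it to your CUDA driver):

```shell
# PyTorch 1.13.1 built against CUDA 11.6 (adjust the CUDA suffix to your setup)
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
# xformers 0.0.16 provides memory-efficient attention for this older PyTorch
pip install xformers==0.0.16
```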

<!-- Note the installation is only compatible with **Linux** users. If you are working on different platforms, please check [xOS Installation](assets/docs/installation_xOS.md). -->

<a name="pretrained_models"></a>:dna:Pretrained Models

Here we list the pretrained weights of the stage-2 model (IRControlNet) and our trained SwinIR, which is used for degradation removal during stage-2 training.

| Model Name | Description | HuggingFace | BaiduNetdisk | OpenXLab |
| :--- | :--- | :--- | :--- | :--- |
| v2.pth | IRControlNet trained on filtered laion2b-en | download | download<br>(pwd: xiu3) | download |
| v1_general.pth | IRControlNet trained on ImageNet-1k | download | download<br>(pwd: 79n9) | download |
| v1_face.pth | IRControlNet trained on FFHQ | download | download<br>(pwd: n7dx) | download |
| codeformer_swinir.ckpt | SwinIR trained on ImageNet-1k | download | download<br>(pwd: vfif) | download |

During inference, we use off-the-shelf models from other papers as the stage-1 model: BSRNet for blind super-resolution, the SwinIR-Face used in DifFace for blind face restoration, and SCUNet-PSNR for blind denoising, while the trained IRControlNet remains unchanged for all tasks. Please check the code for more details. Thanks for their work!

<!-- ## <a name="quick_start"></a>:flight_departure:Quick Start Download [general_full_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/general_full_v1.ckpt) and [general_swinir_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/general_swinir_v1.ckpt) to `weights/`, then run the following command to interact with the gradio website. ```shell python gradio_diffbir.py \ --ckpt weights/general_full_v1.ckpt \ --config configs/model/cldm.yaml \ --reload_swinir \ --swinir_ckpt weights/general_swinir_v1.ckpt \ --device cuda ``` <div align="center"> <kbd><img src="assets/gradio.png"></img></kbd> </div> -->

<a name="inference"></a>:crossed_swords:Inference

We provide some inference examples below; check inference.py for the full list of arguments. Pretrained weights are downloaded automatically.

Blind Image Super-Resolution

python -u inference.py \
--version v2 \
--task sr \
--upscale 4 \
--cfg_scale 4.0 \
--input inputs/demo/bsr \
--output results/demo_bsr \
--device cuda

Blind Face Restoration

<a name="inference_fr"></a>

# for aligned face inputs
python -u inference.py \
--version v2 \
--task fr \
--upscale 1 \
--cfg_scale 4.0 \
--input inputs/demo/bfr/aligned \
--output results/demo_bfr_aligned \
--device cuda
# for unaligned face inputs
python -u inference.py \
--version v2 \
--task fr_bg \
--upscale 2 \
--cfg_scale 4.0 \
--input inputs/demo/bfr/whole_img \
--output results/demo_bfr_unaligned \
--device cuda

Blind Image Denoising

python -u inference.py \
--version v2 \
--task dn \
--upscale 1 \
--cfg_scale 4.0 \
--input inputs/demo/bid \
--output results/demo_bid \
--device cuda

Other options

Patch-based sampling

<a name="patch_based_sampling"></a>

Add the following arguments to enable patch-based sampling:

[command...] --tiled --tile_size 512 --tile_stride 256

Patch-based sampling enables super-resolution with large scale factors. Our patch-based sampling is built upon mixture-of-diffusers. Thanks for their work!
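
For example, an 8x super-resolution run with patch-based sampling could look like the following (the output path is illustrative; every flag appears elsewhere in this README):

```shell
# 8x SR with patch-based (tiled) sampling; output directory name is illustrative
python -u inference.py \
--version v2 \
--task sr \
--upscale 8 \
--cfg_scale 4.0 \
--input inputs/demo/bsr \
--output results/demo_bsr_tiled \
--tiled --tile_size 512 --tile_stride 256 \
--device cuda
```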

Restoration Guidance

Restoration guidance is used to achieve a trade-off between quality and fidelity. It is disabled by default since we prefer quality over fidelity. Here is an example:

python -u inference.py \
--version v2 \
--task sr \
--upscale 4 \
--cfg_scale 4.0 \
--input inputs/demo/bsr \
--guidance --g_loss w_mse --g_scale 0.5 --g_space rgb \
--output results/demo_bsr_wg \
--device cuda

You will see that the results become smoother.

Better Start Point For Sampling

Add the following argument to provide a better starting point for reverse sampling:

[command...] --better_start

This option prevents our model from generating noise in the image background.
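
For example, combining it with the blind denoising command above (the output path is illustrative):

```shell
# blind denoising with a better sampling start point; output directory name is illustrative
python -u inference.py \
--version v2 \
--task dn \
--upscale 1 \
--cfg_scale 4.0 \
--input inputs/demo/bid \
--output results/demo_bid_better_start \
--better_start \
--device cuda
```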

<a name="train"></a>:stars:Train

Stage 1

First, we train a SwinIR, which will be used for degradation removal during the training of stage 2.

<a name="gen_file_list"></a>

  1. Generate file lists for the training set and the validation set. A file list looks like:

    /path/to/image_1
    /path/to/image_2
    /path/to/image_3
    ...
    

    You can write a simple Python script or directly use shell commands to produce file lists. Here is an example:

    # collect all image files in img_dir
    find [img_dir] -type f > files.list
    # shuffle collected files
    shuf files.list > files_shuf.list
    # pick train_size files in the front as training set
    head -n [train_size] files_shuf.list > files_shuf_train.list
    # pick remaining files as validation set
    tail -n +[train_size + 1] files_shuf.list > files_shuf_val.list
    
  2. Fill in the training configuration file with appropriate values.

  3. Start training!

    accelerate launch train_stage1.py --config configs/train/train_stage1.yaml
    

Stage 2

  1. Download pretrained Stable Diffusion v2.1 to provide generative capabilities. :bulb:: If you have run the inference script, the SD v2.1 checkpoint can already be found in weights.

    wget https://huggingface.co/stabilityai/stable-diffusion-2-1-base/resolve/main/v2-1_512-ema-pruned.ckpt --no-check-certificate
    
  2. Generate a file list as described above. Currently, the stage-2 training script doesn't support a validation set, so you only need to create a training file list.

  3. Fill in the training configuration file with appropriate values.

  4. Start training!

    accelerate launch train_stage2.py --config configs/train/train_stage2.yaml
    

Citation

Please cite us if our work is useful for your research.

@misc{lin2024diffbir,
      title={DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior}, 
      author={Xinqi Lin and Jingwen He and Ziyan Chen and Zhaoyang Lyu and Bo Dai and Fanghua Yu and Wanli Ouyang and Yu Qiao and Chao Dong},
      year={2024},
      eprint={2308.15070},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

License

This project is released under the Apache 2.0 license.

Acknowledgement

This project is based on ControlNet and BasicSR. Thanks for their awesome work.

Contact

If you have any questions, please feel free to contact me at linxinqi23@mails.ucas.ac.cn.