
<p align="center"> <img src="figs/logo.png" width="400"> </p> <div align="center"> <h2>Improving the Stability and Efficiency of Diffusion Models for Content Consistent Super-Resolution</h2>

<a href='https://arxiv.org/pdf/2401.00877'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>

Lingchen Sun<sup>1,2</sup> | Rongyuan Wu<sup>1,2</sup> | Jie Liang<sup>2</sup> | Zhengqiang Zhang<sup>1,2</sup> | Hongwei Yong<sup>1</sup> | Lei Zhang<sup>1,2</sup>

<sup>1</sup>The Hong Kong Polytechnic University, <sup>2</sup>OPPO Research Institute

</div>

:star: If CCSR is helpful to your images or projects, please help star this repo. Thanks! :hugs:

🧡 What's New in CCSR-v2?

We have implemented the CCSR-v2 code based on Diffusers. Compared to CCSR-v1, CCSR-v2 brings a host of upgrades.

Visual comparisons between the SR outputs produced by different DM-based methods from the same low-quality input image but two different noise samples. S denotes the number of diffusion sampling timesteps. Existing DM-based methods, including StableSR, PASD, SeeSR, SUPIR and AddSR, show noticeable instability across the different noise samples. OSEDiff directly takes the low-quality image as input without noise sampling; it is deterministic and stable, but cannot perform multi-step diffusion for higher generative capacity. In contrast, our proposed CCSR method supports both multi-step and single-step diffusion, while producing stable results with high fidelity and visual quality.

โฐ Update

🌟 Overview Framework


😍 Visual Results

Demo on Real-world SR

<img src="figs/compare_1.png" height="213px"/> <img src="figs/compare_2.png" height="213px"/> <img src="figs/compare_3.png" height="213px"/> <img src="figs/compare_4.png" height="213px"/>


For more comparisons, please refer to our paper.

๐Ÿ“ Quantitative comparisons

We propose two new stability metrics, global standard deviation (G-STD) and local standard deviation (L-STD), to measure the image-level and pixel-level variations, respectively, of the SR results of diffusion-based methods.

More details about G-STD and L-STD can be found in our paper.
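
As a rough illustration only (not the repository's exact implementation), the two statistics can be computed along these lines, assuming N SR results restored from the same low-quality input and one image-level IQA score per result:

    import numpy as np

    def g_std(iqa_scores):
        # Image-level variation: standard deviation of an IQA metric
        # (e.g. PSNR or LPIPS) over the N SR results of the same input.
        return float(np.std(np.asarray(iqa_scores, dtype=np.float64)))

    def l_std(sr_images):
        # Pixel-level variation: standard deviation over the N SR results at
        # each pixel, averaged over all pixel positions (and channels).
        stack = np.stack([np.asarray(im, dtype=np.float64) for im in sr_images])
        return float(np.std(stack, axis=0).mean())

In practice both values are reported as averages over a test set; see the Evaluation section below for the provided scripts.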


⚙ Dependencies and Installation

# clone this repository
git clone https://github.com/csslc/CCSR.git
cd CCSR


# create an environment with python >= 3.9
conda create -n ccsr python=3.9
conda activate ccsr
pip install -r requirements.txt

๐Ÿญ Quick Inference

For ease of comparison, we have provided the test results of CCSR-v2 on the DIV2K, RealSR, and DrealSR benchmarks with varying diffusion steps, which can be accessed via Google Drive.

Step 1: Download the pretrained models

| Model Name | Description | GoogleDrive | BaiduNetdisk |
| :--- | :--- | :--- | :--- |
| Controlnet | Trained in stage 1. | download | download (pwd: ccsr) |
| VAE | Trained in stage 2. | download | download (pwd: ccsr) |
| Pre-trained Controlnet | The pre-trained model of stage 1. | download | download (pwd: ccsr) |
| Dino models | The pre-trained models for the discriminator. | download | download (pwd: ccsr) |

Step 2: Prepare testing data

You can put the testing images in the preset/test_datasets folder.

Step 3: Run the testing command

For the one-step diffusion process:

python test_ccsr_tile.py \
--pretrained_model_path preset/models/stable-diffusion-2-1-base \
--controlnet_model_path preset/models \
--vae_model_path preset/models \
--baseline_name ccsr-v2 \
--image_path preset/test_datasets \
--output_dir experiments/test \
--sample_method ddpm \
--num_inference_steps 1 \
--t_min 0.0 \
--start_point lr \
--start_steps 999 \
--process_size 512 \
--guidance_scale 1.0 \
--sample_times 1 \
--use_vae_encode_condition \
--upscale 4

For the multi-step diffusion process:

python test_ccsr_tile.py \
--pretrained_model_path preset/models/stable-diffusion-2-1-base \
--controlnet_model_path preset/models \
--vae_model_path preset/models \
--baseline_name ccsr-v2 \
--image_path preset/test_datasets \
--output_dir experiments/test \
--sample_method ddpm \
--num_inference_steps 6 \
--t_max 0.6667 \
--t_min 0.5 \
--start_point lr \
--start_steps 999 \
--process_size 512 \
--guidance_scale 4.5 \
--sample_times 1 \
--use_vae_encode_condition \
--upscale 4

We integrate tile_diffusion and tile_vae into test_ccsr_tile.py to save GPU memory during inference. You can change the tile size and stride according to the VRAM of your device.

python test_ccsr_tile.py \
--pretrained_model_path preset/models/stable-diffusion-2-1-base \
--controlnet_model_path preset/models \
--vae_model_path preset/models \
--baseline_name ccsr-v2 \
--image_path preset/test_datasets \
--output_dir experiments/test \
--sample_method ddpm \
--num_inference_steps 6 \
--t_max 0.6667 \
--t_min 0.5 \
--start_point lr \
--start_steps 999 \
--process_size 512 \
--guidance_scale 4.5 \
--sample_times 1 \
--use_vae_encode_condition \
--upscale 4 \
--tile_diffusion \
--tile_diffusion_size 512 \
--tile_diffusion_stride 256 \
--tile_vae \
--vae_decoder_tile_size 224 \
--vae_encoder_tile_size 1024

You can obtain N different SR results by setting sample_times to N to test the stability of CCSR. The output folder will be organized like this:

 experiments/test
 ├── sample00   # the first group of SR results
 ├── sample01   # the second group of SR results
 │   ...
 └── sampleN    # the N-th group of SR results

๐Ÿ“ Evaluation

  1. Calculate the image quality assessment (IQA) values for each restored group.

    Fill in the required information in cal_iqa.py and run it; you will obtain the evaluation results in a folder organized like this:

     log_path
     ├── log_name_npy   # the IQA values of each restored group, saved as npy files
     └── log_name.log   # log record
    
  2. Calculate the G-STD value for the diffusion-based SR method.

    Fill in the required information in iqa_G-STD.py and run it; you will obtain the mean IQA values of the N restored groups and the G-STD value.

  3. Calculate the L-STD value for the diffusion-based SR method.

    Fill in the required information in iqa_L-STD.py and run it; you will obtain the L-STD value.

🚋 Train

Step 1: Prepare training data

Generate a txt file for the training set. Fill in the required information in get_path and run it; you will obtain a txt file recording the paths of the ground-truth images. You can save the txt file as preset/gt_path.txt.
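
If you only need a flat list of image paths, a minimal sketch along these lines should produce a compatible file; the ground-truth folder path below is a placeholder, and the actual get_path script may collect additional information:

    import os

    # Placeholder: folder that contains the ground-truth (HR) training images.
    gt_folder = "/path/to/gt_images"

    exts = (".png", ".jpg", ".jpeg")
    paths = sorted(
        os.path.join(gt_folder, name)
        for name in os.listdir(gt_folder)
        if name.lower().endswith(exts)
    )

    # One image path per line, as referenced by --dataset_root_folders below.
    with open("preset/gt_path.txt", "w") as f:
        f.write("\n".join(paths) + "\n")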

Step 2: Train Stage 1 Model

  1. Download the pretrained Stable Diffusion v2.1 model to provide generative capabilities. The training command below loads the weights from preset/models/stable-diffusion-2-1-base; see the sketch after the command for fetching the Diffusers-format weights.

    wget https://huggingface.co/stabilityai/stable-diffusion-2-1-base/resolve/main/v2-1_512-ema-pruned.ckpt --no-check-certificate
    
  2. Start training.

    CUDA_VISIBLE_DEVICES="0,1,2,3," accelerate launch train_ccsr_stage1.py \
    --pretrained_model_name_or_path="preset/models/stable-diffusion-2-1-base" \
    --controlnet_model_name_or_path='preset/models/pretrained_controlnet' \
    --enable_xformers_memory_efficient_attention \
    --output_dir="./experiments/ccsrv2_stage1" \
    --mixed_precision="fp16" \
    --resolution=512 \
    --learning_rate=5e-5 \
    --train_batch_size=4 \
    --gradient_accumulation_steps=6 \
    --dataloader_num_workers=0 \
    --checkpointing_steps=500 \
    --t_max=0.6667 \
    --max_train_steps=20000 \
    --dataset_root_folders 'preset/gt_path.txt' 
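
As noted in step 1, the wget command above fetches the single-file checkpoint, while --pretrained_model_name_or_path points at a Diffusers-format folder. A minimal sketch for downloading those weights with huggingface_hub is given below; this is an assumption for convenience, not part of the original instructions:

    from huggingface_hub import snapshot_download

    # Download the Diffusers-format Stable Diffusion 2.1-base weights into the
    # folder used by --pretrained_model_name_or_path in the training commands.
    snapshot_download(
        repo_id="stabilityai/stable-diffusion-2-1-base",
        local_dir="preset/models/stable-diffusion-2-1-base",
    )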
    

Step 3: Train Stage 2 Model

  1. Put the model obtained from stage 1 into controlnet_model_name_or_path.

  2. Start training.

    CUDA_VISIBLE_DEVICES="0,1,2,3," accelerate launch train_ccsr_stage2.py \
    --pretrained_model_name_or_path="preset/models/stable-diffusion-2-1-base" \
    --controlnet_model_name_or_path='preset/models/model_stage1' \
    --enable_xformers_memory_efficient_attention \
    --output_dir="./experiments/ccsrv2_stage2" \
    --mixed_precision="fp16" \
    --resolution=512 \
    --learning_rate=5e-6 \
    --train_batch_size=2 \
    --gradient_accumulation_steps=8 \
    --checkpointing_steps=500 \
    --is_start_lr=True \
    --t_max=0.6667 \
    --num_inference_steps=1 \
    --is_module \
    --lambda_l2=1.0 \
    --lambda_lpips=1.0 \
    --lambda_disc=0.05 \
    --lambda_disc_train=0.5 \
    --begin_disc=100 \
    --max_train_steps=2000 \
    --dataset_root_folders 'preset/gt_path.txt'  
    

Citations

If our code helps your research or work, please consider citing our paper. The following is a BibTeX reference:

@article{sun2023ccsr,
  title={Improving the Stability of Diffusion Models for Content Consistent Super-Resolution},
  author={Sun, Lingchen and Wu, Rongyuan and Zhang, Zhengqiang and Yong, Hongwei and Zhang, Lei},
  journal={arXiv preprint arXiv:2401.00877},
  year={2024}
}

License

This project is released under the Apache 2.0 license.

Acknowledgement

This project is based on ControlNet, BasicSR and SeeSR. Some code is borrowed from AddSR. Thanks for their awesome work.

Contact

If you have any questions, please contact: ling-chen.sun@connect.polyu.hk
