<div align="center">
  <img src="figs/logo.png" width="240px">
</div>

# Binarized Diffusion Model for Image Super-Resolution
Zheng Chen, Haotong Qin, Yong Guo, Xiongfei Su, Xin Yuan, Linghe Kong, and Yulun Zhang, "Binarized Diffusion Model for Image Super-Resolution", NeurIPS, 2024
[project] [arXiv] [supplementary material] [visual results] [pretrained models]
## 🔥🔥🔥 News
- 2024-10-23: Project Page is accessible. 📃📃📃
- 2024-10-14: Code and pre-trained models are released. ⭐️⭐️⭐️
- 2024-09-26: BI-DiffSR is accepted at NeurIPS 2024. 🎉🎉🎉
- 2024-06-09: This repo is released.
**Abstract:** Advanced diffusion models (DMs) perform impressively in image super-resolution (SR), but their high memory and computational costs hinder deployment. Binarization, an ultra-compression algorithm, offers the potential to accelerate DMs effectively. Nonetheless, due to the model structure and the multi-step iterative nature of DMs, existing binarization methods result in significant performance degradation. In this paper, we introduce a novel binarized diffusion model, BI-DiffSR, for image SR. First, for the model structure, we design a UNet architecture optimized for binarization. We propose consistent-pixel-downsample (CP-Down) and consistent-pixel-upsample (CP-Up) to maintain dimension consistency and facilitate full-precision information transfer. Meanwhile, we design channel-shuffle-fusion (CS-Fusion) to enhance feature fusion in the skip connection. Second, to handle the activation variation across timesteps, we design the timestep-aware redistribution (TaR) and activation function (TaA). TaR and TaA dynamically adjust the distribution of activations based on the timestep, improving the flexibility and representation ability of the binarized module. Comprehensive experiments demonstrate that our BI-DiffSR outperforms existing binarization methods.
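To make two of these ideas concrete, here is a minimal PyTorch sketch, written by us as a simplification rather than taken from the released code: a channel shuffle of the kind used in skip-connection fusion (CS-Fusion), and a timestep-aware shift/scale (TaR/TaA) wrapped around a sign-binarized activation with a straight-through estimator. The module names, grouping scheme, and parameter shapes are our assumptions.

```python
import torch
import torch.nn as nn


def channel_shuffle(x: torch.Tensor, groups: int = 2) -> torch.Tensor:
    """Shuffle channels across groups — the intuition behind CS-Fusion:
    interleaving encoder and decoder channels in the skip connection."""
    b, c, h, w = x.shape
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)


class BinarySign(torch.autograd.Function):
    """Sign binarization with a clipped straight-through estimator (STE)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Pass gradients only where |x| <= 1 (clipped STE).
        return grad_out * (x.abs() <= 1).float()


class TimestepAwareBinarization(nn.Module):
    """Hypothetical TaR/TaA sketch: a per-timestep-group learnable shift
    before binarization (TaR) and a per-group scale after it (TaA)."""

    def __init__(self, channels: int, num_groups: int = 4):
        super().__init__()
        self.num_groups = num_groups
        self.shift = nn.Parameter(torch.zeros(num_groups, channels, 1, 1))  # TaR
        self.scale = nn.Parameter(torch.ones(num_groups, channels, 1, 1))   # TaA

    def forward(self, x, t, num_timesteps: int = 1000):
        # Map the diffusion timestep to one of the parameter groups.
        g = min(int(t) * self.num_groups // num_timesteps, self.num_groups - 1)
        x_b = BinarySign.apply(x + self.shift[g])  # redistribute, then binarize
        return x_b * self.scale[g]


x = torch.randn(1, 8, 16, 16)
module = TimestepAwareBinarization(channels=8)
print(module(channel_shuffle(x), t=250).shape)  # torch.Size([1, 8, 16, 16])
```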
HR | LR | SR3 (FP) | BBCU | BI-DiffSR (ours) |
---|---|---|---|---|
<img src="figs/compare/ComS_img_023_HR_x4.png" height=80> | <img src="figs/compare/ComS_img_023_Bicubic_x4.png" height=80> | <img src="figs/compare/ComS_img_023_SR3_x4.png" height=80> | <img src="figs/compare/ComS_img_023_BBCU_x4.png" height=80> | <img src="figs/compare/ComS_img_023_BI-DiffSR_x4.png" height=80> |
<img src="figs/compare/ComS_img_033_HR_x4.png" height=80> | <img src="figs/compare/ComS_img_033_Bicubic_x4.png" height=80> | <img src="figs/compare/ComS_img_033_SR3_x4.png" height=80> | <img src="figs/compare/ComS_img_033_BBCU_x4.png" height=80> | <img src="figs/compare/ComS_img_033_BI-DiffSR_x4.png" height=80> |
## TODO

- [x] Release code and pretrained models
## Dependencies
- Python 3.9
- PyTorch 1.13.1+cu117
```shell
# Clone the github repo and go to the default directory 'BI-DiffSR'.
git clone https://github.com/zhengchen1999/BI-DiffSR.git
cd BI-DiffSR
conda create -n bi_diffsr python=3.9
conda activate bi_diffsr
pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
git clone https://github.com/huggingface/diffusers.git
cd diffusers
pip install -e ".[torch]"
```
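After installation, an optional sanity check (our suggestion, not part of the repo) confirms that PyTorch sees the GPU and that diffusers imports from the editable install:

```python
import torch
import diffusers

print("torch:", torch.__version__)            # expect 1.13.1+cu117
print("CUDA available:", torch.cuda.is_available())
print("diffusers:", diffusers.__version__)
```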
## Contents

1. [Datasets](#datasets)
1. [Models](#models)
1. [Training](#training)
1. [Testing](#testing)
1. [Results](#results)
1. [Citation](#citation)
1. [Acknowledgements](#acknowledgements)
## <a name="datasets"></a> Datasets
The training and testing sets used can be downloaded as follows:
Training Set | Testing Set | Visual Results |
---|---|---|
DIV2K (800 training images, 100 validation images) + Flickr2K (2650 images) [complete training dataset DF2K: Google Drive / Baidu Disk] | Set5 + Set14 + BSD100 + Urban100 + Manga109 [complete testing dataset: Google Drive / Baidu Disk] | Google Drive / Baidu Disk |
Download the training and testing datasets and put them into the corresponding folders of `datasets/`.
## <a name="models"></a> Models
Method | Params (M) | FLOPs (G) | PSNR (dB) | LPIPS | Model Zoo | Visual Results |
---|---|---|---|---|---|---|
BI-DiffSR | 4.58 | 36.67 | 24.11 | 0.1823 | Google Drive | Google Drive |
Performance is reported on Urban100 (×4). FLOPs are measured with an output size of 3×256×256.
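For reference, metrics like these are commonly measured with a profiler such as `thop`. The sketch below uses a placeholder network, since wiring up the actual BI-DiffSR UNet (which also takes a timestep input) depends on the repo's model definition; note that `thop` reports multiply-accumulates, which papers often label FLOPs.

```python
import torch
import torch.nn as nn
from thop import profile  # pip install thop

# Placeholder network standing in for the BI-DiffSR UNet (hypothetical).
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
x = torch.randn(1, 3, 256, 256)  # matches the 3×256×256 output size above
macs, params = profile(model, inputs=(x,))
print(f"Params: {params / 1e6:.2f} M | FLOPs (MACs): {macs / 1e9:.2f} G")
```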
## <a name="training"></a> Training
- The ×2 task requires 4×8 GB VRAM (4 GPUs, 8 GB each); the ×4 task requires 4×20 GB VRAM.

- Download the training (DF2K, already processed) and testing (Set5, BSD100, Urban100, Manga109, already processed) datasets and place them in `datasets/`.

- Run the following scripts. The training configuration is in `options/train/`.

  ```shell
  # BI-DiffSR, input=64x64, 4 GPUs
  python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 train.py -opt options/train/train_BI_DiffSR_x2.yml --launcher pytorch
  python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 train.py -opt options/train/train_BI_DiffSR_x4.yml --launcher pytorch
  ```

- The training experiment is in `experiments/`.
## <a name="testing"></a> Testing
- Download the pre-trained models and place them in `experiments/pretrained_models/`. We provide pre-trained models for image SR (×2, ×4).

- Download the testing (Set5, BSD100, Urban100, Manga109) datasets and place them in `datasets/`.

- Run the following scripts. The testing configuration is in `options/test/`.

  ```shell
  # BI-DiffSR, reproduces results in Table 2 of the main paper
  python test.py -opt options/test/test_BI_DiffSR_x2.yml
  python test.py -opt options/test/test_BI_DiffSR_x4.yml
  ```

  Due to the randomness of the diffusion model (diffusers), results may vary slightly (see the seeding sketch after this list).

- The output is in `results/`.
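Diffusion sampling draws fresh Gaussian noise at every step, which is the source of the variation noted above. For more repeatable numbers, fixing the RNG seeds before sampling helps; below is a minimal, generic sketch (whether the repo's test script exposes its own seed option is not something we assume).

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 0) -> None:
    """Fix Python, NumPy, and PyTorch RNGs to reduce run-to-run variance."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)           # seeds the CPU RNG
    torch.cuda.manual_seed_all(seed)  # explicitly seed all CUDA devices


set_seed(0)
```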
## <a name="results"></a> Results
We achieve state-of-the-art performance among binarized SR methods. Detailed results can be found in the paper.
<details>
<summary>Quantitative Comparisons (click to expand)</summary>

- Results in Table 2 (main paper)
- Results in Figure 8 (main paper)
- Results in Figure 5 (supplemental material)
- Results in Figure 6 (supplemental material)

</details>
## <a name="citation"></a> Citation
If you find the code helpful in your research or work, please cite the following paper(s).
```bibtex
@inproceedings{chen2024binarized,
    title={Binarized Diffusion Model for Image Super-Resolution},
    author={Chen, Zheng and Qin, Haotong and Guo, Yong and Su, Xiongfei and Yuan, Xin and Kong, Linghe and Zhang, Yulun},
    booktitle={NeurIPS},
    year={2024}
}
```
## <a name="acknowledgements"></a> Acknowledgements
This code is built on BasicSR and Image-Super-Resolution-via-Iterative-Refinement.