2DQuant: Low-bit Post-Training Quantization for Image Super-Resolution
Kai Liu, Haotong Qin, Yong Guo, Xin Yuan, Linghe Kong, Guihai Chen, and Yulun Zhang, "2DQuant: Low-bit Post-Training Quantization for Image Super-Resolution", NeurIPS, 2024
[arXiv] [visual results] [pretrained models]
🔥🔥🔥 News
- 2024-10-23: Code is released. ⭐️⭐️⭐️
- 2024-09-26: 2DQuant is accepted at NeurIPS 2024. 🎉🎉🎉
- 2024-06-09: This repo is released.
Abstract: Low-bit quantization has become widespread for compressing image super-resolution (SR) models for edge deployment, allowing advanced SR models to enjoy compact low-bit parameters and efficient integer/bitwise constructions for storage compression and inference acceleration, respectively. However, low-bit quantization is notorious for degrading the accuracy of SR models compared to their full-precision (FP) counterparts. Despite several efforts to alleviate this degradation, transformer-based SR models still suffer severe degradation due to their distinctive activation distributions. In this work, we present a dual-stage low-bit post-training quantization (PTQ) method for image super-resolution, namely 2DQuant, which achieves efficient and accurate SR under low-bit quantization. The proposed method first investigates the weights and activations and finds that their distributions are characterized by coexisting symmetry and asymmetry, as well as long tails. Specifically, we propose Distribution-Oriented Bound Initialization (DOBI), which uses different search strategies to find coarse bounds for the quantizers. To obtain refined quantizer parameters, we further propose Distillation Quantization Calibration (DQC), which employs a distillation approach to make the quantized model learn from its FP counterpart. Through extensive experiments across bit-widths and scaling factors, DOBI alone reaches state-of-the-art (SOTA) performance, while after the second stage our method surpasses existing PTQ methods in both metrics and visual quality. 2DQuant achieves a PSNR gain of up to 4.52 dB on Set5 ($\times 2$) over SOTA when quantized to 2-bit, with a 3.60$\times$ compression ratio and a 5.08$\times$ speedup.
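For intuition, here is a minimal sketch of the bound-search idea behind DOBI: a uniform quantizer whose clipping bounds are chosen by grid search to minimize quantization error against the full-precision tensor. All names here (`uniform_quantize`, `search_bounds`) are illustrative placeholders, not the repository's API, and the actual DOBI uses distribution-oriented search strategies rather than this plain MSE grid search.

```python
# Minimal sketch (not the repo's API): coarse clipping-bound search for a
# uniform quantizer, in the spirit of DOBI's distribution-oriented search.
import torch

def uniform_quantize(x: torch.Tensor, lb: float, ub: float, bit: int = 4) -> torch.Tensor:
    """Clip x to [lb, ub], quantize to 2^bit levels, then de-quantize."""
    levels = 2 ** bit - 1
    x_clipped = x.clamp(lb, ub)
    scale = (ub - lb) / levels
    q = torch.round((x_clipped - lb) / scale)
    return q * scale + lb

def search_bounds(x: torch.Tensor, bit: int = 4, steps: int = 100):
    """Grid-search clipping bounds that minimize MSE against the FP tensor."""
    lo, hi = x.min().item(), x.max().item()
    best_err, best_bounds = float("inf"), (lo, hi)
    for i in range(1, steps + 1):
        ratio = i / steps           # shrink the clipping range toward the center
        lb, ub = lo * ratio, hi * ratio
        err = torch.mean((uniform_quantize(x, lb, ub, bit) - x) ** 2).item()
        if err < best_err:
            best_err, best_bounds = err, (lb, ub)
    return best_bounds

# Example: long-tailed activations benefit from clipping the tails.
acts = torch.randn(10_000) * torch.rand(10_000).pow(4) * 10
print(search_bounds(acts, bit=2))
```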
HR | LR | SwinIR-light (FP) | DBDC+Pac | 2DQuant (ours) |
---|---|---|---|---|
<img src="figures/comp/img072-gt.png" height=80> | <img src="figures/comp/img072-bicubic.png" height=80> | <img src="figures/comp/img072-fp.png" height=80> | <img src="figures/comp/img072-pac.png" height=80> | <img src="figures/comp/img072-ours.png" height=80> |
<img src="figures/comp/img092-gt.png" height=80> | <img src="figures/comp/img092-bicubic.png" height=80> | <img src="figures/comp/img092-fp.png" height=80> | <img src="figures/comp/img092-pac.png" height=80> | <img src="figures/comp/img092-ours.png" height=80> |
Dependencies
- Python 3.8
- PyTorch 1.8.0
- NVIDIA GPU + CUDA
```shell
# Clone the github repo and go to the default directory '2DQuant'.
git clone https://github.com/Kai-Liu001/2DQuant.git
cd 2DQuant
conda create -n tdquant python=3.8
conda activate tdquant
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio===0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
python setup.py develop
```
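After installation, a quick sanity check (an optional snippet of ours, not part of the repo) confirms that the CUDA build of PyTorch is active:

```python
# Verify the installed PyTorch version and that CUDA is visible.
import torch

print(torch.__version__)           # expected: 1.8.0+cu111
print(torch.cuda.is_available())   # expected: True on a CUDA-capable machine
```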
Contents
- [Datasets](#datasets)
- [Models](#models)
- [Training](#training)
- [Testing](#testing)
- [Results](#results)
- [Citation](#citation)
- [Acknowledgements](#acknowledgements)
<a name="datasets"></a> Datasets
The training and testing sets used can be downloaded as follows:
Training Set | Testing Set |
---|---|
DIV2K (800 training images, 100 validation images) + Flickr2K (2650 images) [complete training dataset DF2K: Google Drive / Baidu Disk] | Set5 + Set14 + BSD100 + Urban100 + Manga109 [complete testing dataset: Google Drive / Baidu Disk] |
Download the training and testing datasets and put them into the corresponding folders of `datasets/`.
<a name="models"></a>Models
The pretrained models can be downloaded from Google Drive and Baidu Drive.
<a name="training"></a> Training
Training optimizes the quantizers' parameters. Follow the steps below; a conceptual sketch of the calibration loop is given after the list.
- Download the training (DF2K, already processed) and testing (Set5, BSD100, Urban100, Manga109, already processed) datasets and place them in `datasets/`.

- Download the calibration data from Google Drive or Baidu Drive and place it in `keydata/`, or run `scripts/2DQuant-getcalidata.sh` to obtain the calibration data.

- Run the following script. The training configuration is in `options/train/`. More scripts can be found in `scripts/2DQuant-train.sh`.

  ```shell
  # 2DQuant 4bit x4
  python basicsr/train.py -opt options/train/train_2DQuant_x4.yml --force_yml bit=4 name=train_2DQuant_x4_bit4
  ```

- The training experiment is saved in `experiments/`.
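For readers who want the gist of what this stage does, here is a conceptual sketch of distillation-style calibration in the spirit of DQC. `quant_model`, `fp_model`, and `calib_loader` are hypothetical placeholders, the trainable-parameter filter on `"bound"` is an assumption, and the actual loss and schedule live in `basicsr/train.py` and the YAML options.

```python
# Conceptual sketch (not the repo's pipeline) of distillation-based quantizer
# calibration: only the quantizer parameters are trained, and the quantized
# student learns to match its full-precision teacher on calibration data.
import torch
import torch.nn.functional as F

def calibrate(quant_model, fp_model, calib_loader, steps=1000, lr=1e-3):
    fp_model.eval()
    # Assumption: quantizer bounds are registered with "bound" in their names.
    params = [p for n, p in quant_model.named_parameters() if "bound" in n]
    opt = torch.optim.Adam(params, lr=lr)
    it = iter(calib_loader)
    for _ in range(steps):
        try:
            lr_img = next(it)
        except StopIteration:           # restart the loader when exhausted
            it = iter(calib_loader)
            lr_img = next(it)
        with torch.no_grad():
            teacher_sr = fp_model(lr_img)   # FP teacher output
        student_sr = quant_model(lr_img)    # quantized student output
        loss = F.l1_loss(student_sr, teacher_sr)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return quant_model
```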
<a name="testing"></a> Testing
- Download the pretrained models and place them in `experiments/pretrained_models/`.

- Download the testing datasets (Set5, BSD100, Urban100, Manga109) and place them in `datasets/`.

- Run the following script. The testing configuration is in `options/test/`.

  ```shell
  # 2DQuant, reproduces results in Table 3 of the main paper
  python basicsr/test.py -opt options/test/test_2DQuant_x2.yml --force_yml bit=4 name=test_2DQuant_x2_bit4 path:pretrain_network_Q=experiments/train_2DQuant_x2_bit4/models/net_Q_3200.pth
  ```

- The output is saved in `results/`.
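As a side note, SR papers conventionally report PSNR on the Y channel with a `scale`-pixel border cropped. The sketch below shows that metric under this assumption; it is illustrative only (the test script above computes metrics itself), and `rgb_to_y`/`psnr_y` are hypothetical names.

```python
# Minimal PSNR-on-Y sketch (common SR evaluation convention: crop a `scale`
# border, convert RGB to the BT.601 Y channel, then compute PSNR).
import numpy as np

def rgb_to_y(img: np.ndarray) -> np.ndarray:
    """ITU-R BT.601 luma from an RGB image in [0, 255]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 16.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0

def psnr_y(sr: np.ndarray, hr: np.ndarray, scale: int = 4) -> float:
    sr_y = rgb_to_y(sr.astype(np.float64))[scale:-scale, scale:-scale]
    hr_y = rgb_to_y(hr.astype(np.float64))[scale:-scale, scale:-scale]
    mse = np.mean((sr_y - hr_y) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```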
<a name="results"></a> Results
We achieve state-of-the-art performance. Detailed results can be found in the paper. If you would like to compare with our method or examine the results in detail, all visual results can be downloaded from Google Drive and Baidu Drive.
<details>
<summary>Click to expand</summary>

- quantitative comparisons in Table 3 (main paper)
- visual comparison in Figure 1 (main paper)
- visual comparison in Figure 6 (main paper)
- visual comparison in Figure 12 (supplemental material)

</details>
<a name="citation"></a> Citation
If you find the code helpful in your research or work, please cite the following paper(s).
```bibtex
@inproceedings{liu20242dquant,
  title={2DQuant: Low-bit Post-Training Quantization for Image Super-Resolution},
  author={Liu, Kai and Qin, Haotong and Guo, Yong and Yuan, Xin and Kong, Linghe and Chen, Guihai and Zhang, Yulun},
  booktitle={NeurIPS},
  year={2024}
}
```
<a name="acknowledgements"></a> Acknowledgements
This code is built on BasicSR.