<p align="center"> :fire: RIDCP: Revitalizing Real Image Dehazing via High-Quality Codebook Priors (CVPR2023)</p>

Python 3.8 | PyTorch 1.12.0

This is the official PyTorch codes for the paper.

RIDCP: Revitalizing Real Image Dehazing via High-Quality Codebook Priors<br> Ruiqi Wu, Zhengpeng Duan, Chunle Guo<sup>*</sup>, Zhi Chai, Chongyi Li ( * indicates corresponding author)<br> The IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR), 2023

*(framework overview figure)*

[Arxiv Paper] [中文版 (TBD)] [Website Page] [Dataset (pwd:qqqo)]

:rocket: Highlights:

:page_facing_up: Todo-list

Demo

<img src="figs/fig1.png" width="800px"> <img src="figs/fig2.png" width="800px">

Video examples

<img src="https://github.com/RQ-Wu/RIDCP/blob/master/figs/mountain.gif?raw=true" width="390px"/>         <img src="https://github.com/RQ-Wu/RIDCP/blob/master/figs/car.gif?raw=true" width="390px"/>

Dependencies and Installation

# git clone this repository
git clone https://github.com/RQ-Wu/RIDCP.git
cd RIDCP

# create new anaconda env
conda create -n ridcp python=3.8
conda activate ridcp 

# install python dependencies
pip install -r requirements.txt
BASICSR_EXT=True python setup.py develop
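After installation, you may want to sanity-check that the core dependencies resolved. A minimal sketch (the module names `torch` and `cv2` are assumptions about what `requirements.txt` pins; note that importable module names can differ from pip package names, e.g. `opencv-python` imports as `cv2`):

```python
from importlib.util import find_spec

def missing_packages(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if find_spec(n) is None]

# Example (assumed module names):
# missing_packages(["torch", "cv2", "numpy"])
```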

Get Started

Prepare pretrained models & dataset

  1. Downloading pretrained checkpoints
<table> <thead> <tr> <th>Model</th> <th> Description </th> <th>:link: Download Links </th> </tr> </thead> <tbody> <tr> <td>HQPs</td> <td>VQGAN pretrained on high-quality data.</td> <td rowspan="3"> [<a href="">Google Drive (TBD)</a>] [<a href="https://pan.baidu.com/s/1ps9dPmerWyXILxb6lkHihQ">Baidu Disk (pwd: huea)</a>] </td> </tr> <tr> <td>RIDCP</td> <td>Dehazing network trained on data generated by our pipeline.</td> </tr> <tr> <td>CHM</td> <td>Weight for adjusting controllable HQPs matching.</td> </tr> </tbody> </table>
  1. Preparing data for training
<table> <thead> <tr> <th>Dataset</th> <th> Description </th> <th>:link: Download Links </th> </tr> </thead> <tbody> <tr> <td>rgb_500</td> <td>500 clear RGB images as the input of our phenomenological degradation pipeline</td> <td rowspan="2"> [<a href="">Google Drive (TBD)</a>] [<a href="https://pan.baidu.com/s/1oX3AZkVlEa7S1sSO12r47Q">Baidu Disk (pwd: qqqo)</a>] </td> </tr> <tr> <td>depth_500</td> <td>Corresponding depth maps generated by <a href="https://github.com/hmhemu/RA-Depth">RA-Depth</a>.</td> </tr> <tr> <td>Flickr2K, DIV2K</td> <td>High-quality data for VQGAN pre-training</td> <td>-</td> </tr> </tbody> </table>
  1. The final directory structure will be arranged as:
datasets
    |- clear_images_no_haze_no_dark_500
        |- xxx.jpg
        |- ...
    |- depth_500
        |- xxx.npy
        |- ...
    |- Flickr2K
    |- DIV2K

pretrained_models
    |- pretrained_HQPs.pth
    |- pretrained_RIDCP.pth
    |- weight_for_matching_dehazing_Flickr.pth
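To catch a misplaced download before training starts, the layout above can be checked with a small helper. A minimal sketch (paths copied from the directory tree above; `missing_entries` is a hypothetical helper, not part of the repo):

```python
from pathlib import Path

# Expected layout, transcribed from the directory tree above.
EXPECTED = [
    "datasets/clear_images_no_haze_no_dark_500",
    "datasets/depth_500",
    "datasets/Flickr2K",
    "datasets/DIV2K",
    "pretrained_models/pretrained_HQPs.pth",
    "pretrained_models/pretrained_RIDCP.pth",
    "pretrained_models/weight_for_matching_dehazing_Flickr.pth",
]

def missing_entries(root):
    """Return the expected paths that do not exist under `root`."""
    root = Path(root)
    return [p for p in EXPECTED if not (root / p).exists()]
```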

Quick demo

Run the demo to process the images in `./examples/` with the following command:

python inference_ridcp.py -i examples -w pretrained_models/pretrained_RIDCP.pth -o results --use_weight --alpha -21.25
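If you want to script the demo over several input folders, the CLI call can be assembled programmatically. A hedged sketch (the flag names mirror the command above and are assumptions about the script's argument parser; `build_demo_cmd` is a hypothetical helper):

```python
def build_demo_cmd(input_dir, weights, output_dir, alpha=-21.25, use_weight=True):
    """Assemble the inference invocation shown above as an argv list
    suitable for subprocess.run()."""
    cmd = [
        "python", "inference_ridcp.py",
        "-i", str(input_dir),
        "-w", str(weights),
        "-o", str(output_dir),
    ]
    if use_weight:
        cmd.append("--use_weight")
    cmd += ["--alpha", str(alpha)]
    return cmd
```

For example, `subprocess.run(build_demo_cmd("examples", "pretrained_models/pretrained_RIDCP.pth", "results"))` reproduces the command above.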

Train RIDCP

Step 1: Pretrain a VQGAN on high-quality dataset

TBD

Step 2: Train our RIDCP

CUDA_VISIBLE_DEVICES=X,X,X,X python basicsr/train.py --opt options/RIDCP.yml
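`CUDA_VISIBLE_DEVICES` restricts which GPUs the training process can see. If you launch training from a Python scheduler script rather than the shell, the same pinning can be done by setting the environment explicitly. A minimal sketch (`launch_training` is a hypothetical helper; it returns the argv list and environment rather than running them):

```python
import os

def launch_training(gpu_ids, opt_path="options/RIDCP.yml"):
    """Build the training invocation with the chosen GPUs pinned
    via CUDA_VISIBLE_DEVICES; pass both to subprocess.run(cmd, env=env)."""
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=",".join(str(g) for g in gpu_ids))
    cmd = ["python", "basicsr/train.py", "--opt", opt_path]
    return cmd, env
```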

Step 3: Adjust our RIDCP

TBD

Citation

If you find our repo useful for your research, please cite us:

@inproceedings{wu2023ridcp,
    title={RIDCP: Revitalizing Real Image Dehazing via High-Quality Codebook Priors},
    author={Wu, Ruiqi and Duan, Zhengpeng and Guo, Chunle and Chai, Zhi and Li, Chongyi},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2023}
}

License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, for non-commercial use only. Any commercial use requires formal permission first.

Acknowledgement

This repository is maintained by Ruiqi Wu.