<h1 align="left">Frequency Compensated Diffusion Model for Real-scene Dehazing <a href="https://arxiv.org/abs/2308.10510"><img src="https://img.shields.io/badge/arXiv-Paper-red.svg"></a></h1>

This is the official PyTorch implementation of Frequency Compensated Diffusion Model for Real-scene Dehazing.

<img src="misc/framework-v3.jpg" alt="show" style="zoom:90%;" />

(a) The training process of the proposed dehazing diffusion model. At step $t$, the network takes an augmented hazy image $I_{aug}$ and a noisy image $J_t$ as inputs. The network architecture adopts special skip connections, i.e., the Frequency Compensation Block (FCB), for better $\epsilon$-prediction. (b) The detailed block design of FCB. The input signals of FCB are enhanced in the mid-to-high frequency band so that the output spectrum has abundant higher-frequency modes. (c) The sampling process of the proposed dehazing diffusion model.

<!-- <img src="./misc/train_prove_v3.jpg" alt="show" style="zoom:90%;" /> Power spectrum analysis on $\epsilon$-prediction results of DDPMs at varying $t$. (a) The power spectra of DDPM and DDPM+FCB. (b) The PSD analysis of DDPM and DDPM+FCB. (c) The KL distance between the spectrum of the predicted $\epsilon$ in (b) and that of the ground truth. The smaller the distance, the closer to the ground truth. -->
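To make the training step in panel (a) concrete, here is a minimal, hypothetical PyTorch sketch of one conditional $\epsilon$-prediction update. `denoise_fn` stands in for the U-Net with FCB skip connections; it, the schedule tensor `alphas_cumprod`, and all variable names are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of one epsilon-prediction training step under a
# standard DDPM schedule. `denoise_fn` is a placeholder for the U-Net
# with FCB skip connections, not the repository's actual module.
def train_step(denoise_fn, J0, I_aug, alphas_cumprod, optimizer):
    """J0: clear images (B,C,H,W); I_aug: augmented hazy condition (B,C,H,W)."""
    b = J0.size(0)
    t = torch.randint(0, alphas_cumprod.size(0), (b,), device=J0.device)
    a_bar = alphas_cumprod.to(J0.device)[t].view(b, 1, 1, 1)

    eps = torch.randn_like(J0)                          # target noise
    J_t = a_bar.sqrt() * J0 + (1 - a_bar).sqrt() * eps  # forward process q(J_t | J_0)

    # The network is conditioned on the hazy image by channel-wise
    # concatenation with the noisy image, plus the timestep.
    eps_pred = denoise_fn(torch.cat([I_aug, J_t], dim=1), t)

    loss = F.mse_loss(eps_pred, eps)                    # epsilon-matching objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```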

## Getting started

### Installation

```shell
pip install -r requirement.txt
```

### Data Preparation

Download the training and test data from the following links:

- Training: RESIDE
- Testing: I-Haze / O-Haze / Dense-Haze / NH-Haze / RTTS

```shell
mkdir dataset
```

Re-organize the train/val images into the following file structure:

```text
# Training data file structure
dataset/RESIDE/
├── HR          # ground-truth clear images
├── HR_hazy_src # hazy images
└── HR_depth    # depth images (generated by MonoDepth: github.com/OniroAI/MonoDepth-PyTorch)
```

```text
# Testing data (e.g., Dense-Haze) file structure
dataset/{name}/
├── HR      # ground-truth images
└── HR_hazy # hazy images
```
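The layout above can be created in one go. The following is a small convenience sketch, not part of the repository; the test-set folder names are illustrative, so adjust them to the datasets you actually download:

```python
from pathlib import Path

# Convenience sketch: create the dataset layout shown above.
# Folder names follow the trees in this section and are illustrative.
def make_dataset_dirs(root: str = "dataset") -> None:
    for sub in ("HR", "HR_hazy_src", "HR_depth"):
        Path(root, "RESIDE", sub).mkdir(parents=True, exist_ok=True)
    for name in ("I-HAZE", "O-HAZE", "Dense-Haze", "NH-HAZE", "RTTS"):
        for sub in ("HR", "HR_hazy"):
            Path(root, name, sub).mkdir(parents=True, exist_ok=True)

if __name__ == "__main__":
    make_dataset_dirs()
```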

Then make sure the data paths ("dataroot") in config/framework_da.json are correct.

## Pretrained Model

We provide the pretrained model below:

| Type      | Weights  |
|-----------|----------|
| Generator | OneDrive |

## Evaluation

Download the test set (e.g., O-Haze), place the test images under the directory given by "dataroot", and set the correct "dataroot" path in config/framework_da.json.

Download the pretrained model and set the correct "resume_state" path in config/framework_da.json:

    "path": {
      "log": "logs",
      "tb_logger": "tb_logger",
      "results": "results",
      "checkpoint": "checkpoint",
      "resume_state": "./ddpm_fcb_230221_121802"
    }
    "val": {
      "name": "dehaze_val",
      "mode": "LRHR",
      "dataroot": "dataset/O-HAZE-PROCESS",
      ...
    }
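Before running inference, a quick sanity check (not part of the repository) can confirm that the configured paths are set; whether "val" sits at the top level or under "datasets" is an assumption here, so both locations are tried:

```python
import json

# Minimal pre-flight check (not part of the repo): print the two paths
# this section asks you to set. The exact nesting of "val" is an
# assumption, so both plausible locations are tried.
with open("config/framework_da.json") as f:
    cfg = json.load(f)

val = cfg.get("val") or cfg.get("datasets", {}).get("val", {})
print("dataroot:    ", val.get("dataroot"))
print("resume_state:", cfg.get("path", {}).get("resume_state"))
```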
```shell
# infer
python infer.py -c [config file]
```

The default config file is config/framework_da.json. The output images are written to /data/diffusion/results; you can change the output path in core/logger.py.
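If your test set includes ground truth, the outputs can be scored post hoc. This is a hedged sketch using scikit-image rather than the repository's evaluation code; the directory paths and the `*.png` extension are examples to adapt:

```python
import glob
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Post-hoc scoring sketch (not the repo's evaluation script): compare
# dehazed outputs with ground truth via PSNR/SSIM. Paths and the *.png
# extension are examples; adjust to your layout.
def evaluate(result_dir: str, gt_dir: str) -> None:
    psnrs, ssims = [], []
    for res_path, gt_path in zip(sorted(glob.glob(f"{result_dir}/*.png")),
                                 sorted(glob.glob(f"{gt_dir}/*.png"))):
        res = np.asarray(Image.open(res_path).convert("RGB"))
        gt = np.asarray(Image.open(gt_path).convert("RGB"))
        psnrs.append(peak_signal_noise_ratio(gt, res, data_range=255))
        ssims.append(structural_similarity(gt, res, channel_axis=-1, data_range=255))
    print(f"PSNR: {np.mean(psnrs):.2f}  SSIM: {np.mean(ssims):.4f}")

evaluate("/data/diffusion/results", "dataset/O-HAZE-PROCESS/HR")
```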

## Train

Prepare the training dataset and set the correct paths under "datasets" in config/framework_da.json.

If training from scratch, make sure "resume_state" is null in config/framework_da.json.

```shell
# train
python train.py -c [config file]
```

## Results

Quantitative comparison on real-world hazy data (RTTS). Bold and underline indicate the best and second-best results, respectively.

<p align="center"> <img src="misc/RTTS.jpg" width="600"> </p>

## Todo