[ICCV 2023] ExposureDiffusion: Learning to Expose for Low-light Image Enhancement

Welcome! This is the official implementation of the paper "ExposureDiffusion: Learning to Expose for Low-light Image Enhancement".

News

(2023.08.08): The training and testing code has been released.

(2023.07.14): 🎉 Our paper was accepted by ICCV 2023.

Highlight

Overall

<p align="center"> <img src="./images_github/intro.png" width="60%" /> </p>

Prerequisites

Please install the packages required by ELD.

Besides, you need to download the ELD dataset and the SID dataset.
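
For instance, a minimal setup sketch (assuming the ELD repository ships a requirements.txt, which may differ in practice) is:

# Install the dependencies of ELD (repository location and file name assumed)
git clone https://github.com/Vandermode/ELD.git
pip install -r ELD/requirements.txt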

Train

For the training of models with the UNet backbone:

# For the training of the P+g noise model (results in Tables 2, 3, 4, 5)
python3 train_syn.py --name sid_Pg --include 4 --noise P+g --model eld_iter_model --with_photon --adaptive_res_and_x0 --iter_num 2 --epoch 300 --auxloss --continuous_noise --adaptive_loss

# For the training of the P+G+r+u noise model, i.e., the ELD noise model (results in Tables 4, 5)
python3 train_syn.py --name sid_PGru --include 4 --noise P+G+r+u --model eld_iter_model --with_photon --adaptive_res_and_x0 --iter_num 2 --epoch 300 --auxloss --concat_origin --continuous_noise --adaptive_loss

# For the training of the model based on real-captured paired data (results in Table 4)
CUDA_VISIBLE_DEVICES=1 python3 train_real.py --name sid_real --model eld_iter_model --with_photon --adaptive_res_and_x0 --iter_num 2 --epoch 300 --auxloss --concat_origin --adaptive_loss

For the training of models with the NAFNet backbone (results in Table 5):

# NAFNet with P+g noise model
CUDA_VISIBLE_DEVICES=0 python3 train_syn.py --name sid_Pg_naf2 --include 4 --noise P+g --model eld_iter_model --with_photon --adaptive_res_and_x0 --iter_num 2 --epoch 300 --auxloss --continuous_noise --adaptive_loss --netG naf2

# NAFNet with ELD noise model
CUDA_VISIBLE_DEVICES=0 python3 train_syn.py --name sid_PGru_naf2 --include 4 --noise P+G+r+u --model eld_iter_model --with_photon --adaptive_res_and_x0 --iter_num 2 --epoch 300 --auxloss --concat_origin --continuous_noise --adaptive_loss --netG naf2

Pre-trained models

You can download the pre-trained models from Google Drive.
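
The download is not itemized here; judging from the checkpoint path used in the test commands below, a plausible (assumed) layout after extraction is:

checkpoints/
└── sid_real/
    └── model_300_00386400.pt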

Test

For the evaluation of the models, you should use the same hyper-parameters (e.g., the same usage of --concat_origin) as in training. For example, to evaluate the performance of the models based on NAFNet, use the following commands:

# Test of the ELD+NAFNet model
python3 test_ELD.py --model eld_iter_model --model_path "the path of the ckpt" --include 4 --with_photon --adaptive_res_and_x0 -r --iter_num 2 --netG naf2 --concat_origin

# Test of the Pg+NAFNet model
python3 test_SID.py --model eld_iter_model --model_path "the path of the ckpt" --include 4 --with_photon --adaptive_res_and_x0 -r --iter_num 2 --netG naf2

For the evaluation on the ELD dataset, keep the other settings the same and only change the script name from test_SID.py to test_ELD.py. For example, the command to evaluate the performance of the UNet model trained on real data is as follows:

python3 test_SID.py --model eld_iter_model --model_path checkpoints/sid_real/model_300_00386400.pt --concat_origin --adaptive_res_and_x0 --with_photon -r --include 4
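
Following the rule above, the corresponding command for the ELD dataset only swaps the script name:

python3 test_ELD.py --model eld_iter_model --model_path checkpoints/sid_real/model_300_00386400.pt --concat_origin --adaptive_res_and_x0 --with_photon -r --include 4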

To evaluate the effect of different numbers of inference steps, change the value of --iter_num (default: 2). You should obtain results similar to the figure below, where we evaluate the quality of the predicted clean image at each step; a minimal sweep script is sketched after the figure.

<p align="center"> <img src="./images_github/iter.png" width="70%" /> </p>
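
As a minimal sketch, the sweep can be scripted with a shell loop (the step counts 1-4 are chosen for illustration):

# Evaluate the real-data UNet model with different numbers of inference steps
for n in 1 2 3 4; do
    python3 test_SID.py --model eld_iter_model --model_path checkpoints/sid_real/model_300_00386400.pt --concat_origin --adaptive_res_and_x0 --with_photon -r --include 4 --iter_num $n
done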

Results

<p align="center"> <img src="./images_github/different_noise_models.png" width="65%" /> </p> <p align="center"> <img src="./images_github/different_bacbones.png" width="65%" /> </p>

Citation

If you find our code helpful in your research or work, please cite our paper:

@article{wang2023exposurediffusion,
  title={ExposureDiffusion: Learning to Expose for Low-light Image Enhancement},
  author={Wang, Yufei and Yu, Yi and Yang, Wenhan and Guo, Lanqing and Chau, Lap-Pui and Kot, Alex C and Wen, Bihan},
  journal={arXiv preprint arXiv:2307.07710},
  year={2023}
}

Copyright

This code is intended for non-commercial research and/or private study only.

Acknowledgement

This work is built upon ELD and PMN. We sincerely appreciate the support from the authors.

Contact

If you would like to get in-depth help from me, please feel free to contact me (yufei001@ntu.edu.sg) with a brief self-introduction (including your name, affiliation, and position).