[SIGGRAPH Asia 2023] Low-light Image Enhancement with Wavelet-based Diffusion Models [Paper]
<h4 align="center">Hai Jiang<sup>1,2</sup>, Ao Luo<sup>2</sup>, Haoqiang Fan<sup>2</sup>, Songchen Han<sup>1</sup>, Shuaicheng Liu<sup>3,2</sup></h4>
<h4 align="center">1. Sichuan University, 2. Megvii Technology, 3. University of Electronic Science and Technology of China</h4>
Pipeline
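As background for the "wavelet-based" part of the pipeline, the sketch below illustrates a single-level 2D Haar wavelet decomposition of an image into one low-frequency band and three high-frequency detail bands. This is only a generic illustration of the idea (the function name and tensor shapes are assumptions), not the code used in this repository.

```python
import torch

def haar_dwt2d(x):
    """Single-level 2D Haar DWT (illustrative sketch, not the repository's code).

    x: (B, C, H, W) tensor with even H and W.
    Returns the low-frequency band and the three high-frequency detail bands,
    each of shape (B, C, H/2, W/2).
    """
    a = x[:, :, 0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[:, :, 0::2, 1::2]  # top-right
    c = x[:, :, 1::2, 0::2]  # bottom-left
    d = x[:, :, 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2  # low-frequency (average) band
    lh = (a + b - c - d) / 2  # vertical detail
    hl = (a - b + c - d) / 2  # horizontal detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, (lh, hl, hh)

if __name__ == "__main__":
    img = torch.rand(1, 3, 256, 256)
    ll, highs = haar_dwt2d(img)
    print(ll.shape)  # torch.Size([1, 3, 128, 128])
```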
Dependencies
pip install -r requirements.txt
Download the raw training and evaluation datasets
Paired datasets
LOLv1 dataset: Chen Wei, Wenjing Wang, Wenhan Yang, and Jiaying Liu. "Deep Retinex Decomposition for Low-Light Enhancement", BMVC, 2018. [Baiduyun (extraction code: sdd0)] [Google Drive]
LOLv2 dataset: Wenhan Yang, Haofeng Huang, Wenjing Wang, Shiqi Wang, and Jiaying Liu. "Sparse Gradient Regularized Deep Retinex Network for Robust Low-Light Image Enhancement", TIP, 2021. [Baiduyun (extraction code: l9xm)] [Google Drive]
LSRW dataset: Jiang Hai, Zhu Xuan, Ren Yang, Yutong Hao, Fengzhu Zou, Fang Lin, and Songchen Han. "R2RNet: Low-light Image Enhancement via Real-low to Real-normal Network", Journal of Visual Communication and Image Representation, 2023. [Baiduyun (extraction code: wmrr)]
Unpaired datasets
Please refer to the [Project Page of RetinexNet].
Pre-trained Models
You can download our pre-trained model from [Google Drive] and [Baidu Yun (extraction code: wsw7)].
How to train?
You need to modify datasets/dataset.py slightly for your environment, and then run:
python train.py
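The modification usually amounts to pointing the dataset paths at wherever you unpacked the data above. As a rough illustration of how a paired low/normal-light dataset is typically consumed, here is a minimal sketch; the class name, directory layout, and arguments are assumptions and may differ from the actual datasets/dataset.py:

```python
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PairedLowLightDataset(Dataset):
    """Illustrative paired dataset; the real datasets/dataset.py may differ."""

    def __init__(self, root, low_dir="low", high_dir="high"):
        # root is assumed to contain matching low-light and normal-light folders,
        # e.g. <root>/low/*.png and <root>/high/*.png after unpacking a dataset.
        self.low_paths = sorted(
            os.path.join(root, low_dir, f)
            for f in os.listdir(os.path.join(root, low_dir))
        )
        self.high_paths = sorted(
            os.path.join(root, high_dir, f)
            for f in os.listdir(os.path.join(root, high_dir))
        )
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.low_paths)

    def __getitem__(self, idx):
        # Returns a (low-light, normal-light) image pair as tensors in [0, 1].
        low = self.to_tensor(Image.open(self.low_paths[idx]).convert("RGB"))
        high = self.to_tensor(Image.open(self.high_paths[idx]).convert("RGB"))
        return low, high
```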
How to test?
python evaluate.py
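If you want to sanity-check the enhanced outputs yourself, PSNR and SSIM can be computed with scikit-image. The snippet below is a minimal sketch, independent of whatever evaluate.py reports; the paths are placeholders you would replace with one of your results and its reference image:

```python
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare(pred_path, gt_path):
    """Compute PSNR/SSIM between an enhanced image and its ground truth."""
    pred = np.asarray(Image.open(pred_path).convert("RGB"), dtype=np.float64) / 255.0
    gt = np.asarray(Image.open(gt_path).convert("RGB"), dtype=np.float64) / 255.0
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
    return psnr, ssim

if __name__ == "__main__":
    # Placeholder paths: an enhanced result and the corresponding reference image.
    print(compare("results/0001.png", "reference/0001.png"))
```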
Visual comparison
Citation
If you use this code or ideas from the paper for your research, please cite our paper:
@article{jiang2023low,
title={Low-light image enhancement with wavelet-based diffusion models},
author={Jiang, Hai and Luo, Ao and Fan, Haoqiang and Han, Songchen and Liu, Shuaicheng},
journal={ACM Transactions on Graphics (TOG)},
volume={42},
number={6},
pages={1--14},
year={2023}
}
Acknowledgement
Part of the code is adapted from previous works: WeatherDiff, SDWNet, and MIMO-UNet. We thank all the authors for their contributions.