
[IJCAI 2023 oral] Pyramid Diffusion Models For Low-light Image Enhancement

Paper | Project Page | Supplementary Material

Pyramid Diffusion Models For Low-light Image Enhancement
Dewei Zhou, Zongxin Yang, Yi Yang
In IJCAI'2023

Overall

Framework

Quantitative results

Evaluation on LOL

The evaluation results on the LOL dataset are as follows:

| Method | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
| --- | --- | --- | --- |
| KIND | 20.87 | 0.80 | 0.17 |
| KIND++ | 21.30 | 0.82 | 0.16 |
| Bread | 22.96 | 0.84 | 0.16 |
| IAT | 23.38 | 0.81 | 0.26 |
| HWMNet | 24.24 | 0.85 | 0.12 |
| LLFLOW | 24.99 | 0.92 | 0.11 |
| PyDiff (Ours) | 27.09 | 0.93 | 0.10 |
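
For reference, these metrics can be reproduced with standard packages. The snippet below is only a minimal sketch, assuming scikit-image and the lpips package are installed; the file names are placeholders, and it is not necessarily the exact protocol used for the table above (see the note on 'use_kind_align' in the Test section).

# Minimal metric sketch; 'enhanced.png' and 'ground_truth.png' are placeholder paths.
import torch
import lpips
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

pred = io.imread('enhanced.png')      # uint8 RGB prediction
gt = io.imread('ground_truth.png')    # uint8 RGB reference

psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)  # scikit-image >= 0.19

# LPIPS expects float tensors in [-1, 1] with shape (N, 3, H, W).
to_tensor = lambda x: torch.from_numpy(x).permute(2, 0, 1).unsqueeze(0).float() / 127.5 - 1.0
lpips_val = lpips.LPIPS(net='alex')(to_tensor(pred), to_tensor(gt)).item()

print(f'PSNR: {psnr:.2f}  SSIM: {ssim:.2f}  LPIPS: {lpips_val:.2f}')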

Dependencies and Installation

git clone https://github.com/limuloo/PyDIff.git PyDiff
cd PyDiff
conda create -n PyDiff python=3.7
conda activate PyDiff
conda install pytorch==1.7.0 torchvision torchaudio cudatoolkit=11.0 -c pytorch
cd BasicSR-light
pip install -r requirements.txt
BASICSR_EXT=True sudo $(which python) setup.py develop
cd ../PyDiff
pip install -r requirements.txt
BASICSR_EXT=True sudo $(which python) setup.py develop
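
After installation, a quick sanity check (run inside the PyDiff environment) can confirm that the CUDA-enabled PyTorch build is active; this is only a convenience snippet:

import torch
print(torch.__version__)           # should match the 1.7.0 build installed above
print(torch.cuda.is_available())   # must be True for the GPU commands below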

Dataset

Download the LOL dataset and place it in the repository as follows:

PyDiff/
    BasicSR-light/
    PyDiff/
    dataset/
        LOLdataset/
            our485/
            eval15/

Pretrained Model

Download the pretrained model and place it in the repository as follows:

PyDiff/
    BasicSR-light/
    PyDiff/
    pretrained_models/
        LOLweights.pth
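
Before running anything, you can verify both layouts with a short check. This sketch simply mirrors the directory trees above and assumes it is run from the repository root:

from pathlib import Path

# Paths mirror the directory trees above; run from the repository root.
expected = [
    Path('dataset/LOLdataset/our485'),
    Path('dataset/LOLdataset/eval15'),
    Path('pretrained_models/LOLweights.pth'),
]
for p in expected:
    print(p, 'OK' if p.exists() else 'MISSING')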

Test

cd PyDiff/
CUDA_VISIBLE_DEVICES=0 python pydiff/train.py -opt options/infer.yaml

NOTE: When testing on your own dataset, set 'use_kind_align' in 'infer.yaml' to false. For details, please refer to https://github.com/limuloo/PyDIff/issues/6.
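
If you prefer not to edit the file by hand, the sketch below flips the key wherever it appears in the options file. The nesting of 'use_kind_align' inside 'infer.yaml' is not assumed, 'options/infer_custom.yaml' is a hypothetical output name, and the file is assumed to parse with PyYAML's safe loader.

import yaml

def set_key(node, key, value):
    # Recursively set every occurrence of `key` to `value`.
    if isinstance(node, dict):
        for k, v in node.items():
            if k == key:
                node[k] = value
            else:
                set_key(v, key, value)
    elif isinstance(node, list):
        for item in node:
            set_key(item, key, value)

with open('options/infer.yaml') as f:
    cfg = yaml.safe_load(f)
set_key(cfg, 'use_kind_align', False)
with open('options/infer_custom.yaml', 'w') as f:  # hypothetical output file
    yaml.safe_dump(cfg, f)
# Then run: CUDA_VISIBLE_DEVICES=0 python pydiff/train.py -opt options/infer_custom.yaml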

Train

Training with 2 GPUs

If you have 2 GPUs with at least 24GB of memory each, we recommend training with the following commands, as described in the paper.

cd PyDiff/
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port=22666 pydiff/train.py -opt options/train_v1.yaml --launcher pytorch

Training with a single GPU

Otherwise, you can use the following commands for training, which require a single GPU with at least 24GB of memory.

cd PyDiff/
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 --master_port=22666 pydiff/train.py -opt options/train_v2.yaml --launcher pytorch

Training on a Custom Low-Level Task Dataset

To train on your own dataset, update the following fields in PyDiff/options/train_v3.yaml: YOUR_TRAIN_DATASET_GT_ROOT, YOUR_TRAIN_DATASET_INPUT_ROOT, YOUR_EVAL_DATASET_GT_ROOT, and YOUR_EVAL_DATASET_INPUT_ROOT. If needed, also adapt PyDiff/pydiff/data/lol_dataset.py (a minimal sketch of a paired dataset is given after the commands below). Then start training with the following commands:

cd PyDiff/
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port=22666 pydiff/train.py -opt options/train_v3.yaml --launcher pytorch

Feel free to adjust additional parameters to fit your needs. To train on a single GPU, refer to PyDiff/options/train_v2.yaml.
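
If lol_dataset.py needs to be adapted, the core idea is a paired dataset that returns a low-light input together with its ground-truth counterpart. The sketch below is only an illustration of that idea, not the repository's actual LOLDataset class; the class name, folder layout, and returned keys are assumptions and should be matched to what lol_dataset.py and the options file actually expect.

# Hypothetical paired dataset sketch (not the repo's LOLDataset): reads input/GT image
# pairs with identical filenames from two folders and returns them as float tensors.
import os
import cv2
import torch
from torch.utils.data import Dataset

class PairedImageDataset(Dataset):
    def __init__(self, input_root, gt_root):
        self.input_root = input_root
        self.gt_root = gt_root
        self.names = sorted(os.listdir(input_root))

    def __len__(self):
        return len(self.names)

    def _read(self, path):
        img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
        return torch.from_numpy(img).permute(2, 0, 1).float() / 255.0

    def __getitem__(self, idx):
        name = self.names[idx]
        return {
            'lq': self._read(os.path.join(self.input_root, name)),  # low-light input
            'gt': self._read(os.path.join(self.gt_root, name)),     # ground truth
            'lq_path': os.path.join(self.input_root, name),
        }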

Citation

If you find our work useful for your research, please cite our paper:

@article{zhou2023pyramid,
  title={Pyramid Diffusion Models For Low-light Image Enhancement},
  author={Zhou, Dewei and Yang, Zongxin and Yang, Yi},
  journal={arXiv preprint arXiv:2305.10028},
  year={2023}
}

Acknowledgement

Our code is partly built upon BasicSR. Thanks to the contributors for their great work.