
# You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement

Qingsen Yan, Yixu Feng, Cheng Zhang, Pei Wang, Peng Wu, Wei Dong, Jinqiu Sun, Yanning Zhang

<div align="center">

[arXiv](https://arxiv.org/abs/2402.05809)

</div>

## News 💡

## ⚙ Proposed HVI-CIDNet

<details close> <summary><b>HVI-CIDNet pipeline:</b></summary>

*(figure: HVI-CIDNet pipeline)*

</details> <details close> <summary><b>Lighten Cross-Attention (LCA) Block structure:</b></summary>

*(figure: Lighten Cross-Attention (LCA) block structure)*

</details>
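The exact HVI transform is defined in the paper; intuitively, it maps RGB to an intensity axis (I) plus a polarized chroma plane (H, V) in which dark pixels collapse toward the origin. The sketch below illustrates that idea with a plain HSV-based mapping; it is a simplification for intuition only, not the paper's transform:

```python
# Simplified HVI-style transform sketch: NOT the paper's exact transform.
import torch

def rgb_to_hvi_like(rgb: torch.Tensor) -> torch.Tensor:
    """rgb: (N, 3, H, W) in [0, 1] -> (N, 3, H, W) with channels (H, V, I)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    i_max, _ = rgb.max(dim=1)                      # intensity I = max(R, G, B)
    i_min, _ = rgb.min(dim=1)
    delta = (i_max - i_min).clamp(min=1e-8)
    sat = (i_max - i_min) / i_max.clamp(min=1e-8)  # HSV-style saturation

    # HSV hue in [0, 1)
    hue = torch.where(i_max == r, ((g - b) / delta) % 6,
          torch.where(i_max == g, (b - r) / delta + 2,
                                  (r - g) / delta + 4)) / 6.0

    # Polarize (hue, saturation) onto a plane, scaled by intensity so that
    # noisy low-light pixels shrink toward the origin.
    angle = 2 * torch.pi * hue
    h = i_max * sat * torch.cos(angle)
    v = i_max * sat * torch.sin(angle)
    return torch.stack([h, v, i_max], dim=1)
```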

## 🖼 Visual Comparison

<details close> <summary><b>LOL-v1, LOL-v2-real, and LOL-v2-synthetic:</b></summary>

*(figure: visual comparison on LOL-v1, LOL-v2-real, and LOL-v2-synthetic)*

</details> <details close> <summary><b>DICM, LIME, MEF, NPE, and VV:</b></summary>

*(figure: visual comparison on DICM, LIME, MEF, NPE, and VV)*

</details> <details close> <summary><b>LOL-Blur:</b></summary>

*(figure: visual comparison on LOL-Blur)*

</details>

## Weights and Results 🧾

All the weights we trained on the different datasets are available at [Baidu Pan] (code: yixu) and [One Drive] (code: yixu). Results on the DICM, LIME, MEF, NPE, and VV datasets can be downloaded from [Baidu Pan] (code: yixu) and [One Drive] (code: yixu). Bold entries mark the most notable metrics.

| Folder (test datasets) | PSNR | SSIM | LPIPS | GT Mean | Results | Weights Path |
|---|---|---|---|---|---|---|
| (LOLv1)<br />v1 w perc loss / wo gt mean | 23.8091 | 0.8574 | 0.0856 | ✗ | Baidu Pan and One Drive | LOLv1/w_perc.pth |
| (LOLv1)<br />v1 w perc loss / w gt mean | 27.7146 | 0.8760 | 0.0791 | ✓ | ditto | LOLv1/w_perc.pth |
| (LOLv1)<br />v1 wo perc loss / wo gt mean | 23.5000 | 0.8703 | 0.1053 | ✗ | Baidu Pan and One Drive | LOLv1/wo_perc.pth |
| (LOLv1)<br />v1 wo perc loss / w gt mean | 28.1405 | 0.8887 | 0.0988 | ✓ | ditto | LOLv1/wo_perc.pth |
| (LOLv2_real)<br />v2 wo perc loss / wo gt mean | 23.4269 | 0.8622 | 0.1691 | ✗ | Baidu Pan and One Drive | (lost) |
| (LOLv2_real)<br />v2 wo perc loss / w gt mean | 27.7619 | 0.8812 | 0.1649 | ✓ | ditto | (lost) |
| (LOLv2_real)<br />v2 best gt mean | 28.1387 | 0.8920 | 0.1008 | ✓ | Baidu Pan and One Drive | LOLv2_real/w_prec.pth |
| (LOLv2_real)<br />v2 best Normal | 24.1106 | 0.8675 | 0.1162 | ✗ | Baidu Pan and One Drive | (lost) |
| (LOLv2_real)<br />v2 best PSNR | 23.9040 | 0.8656 | 0.1219 | ✗ | Baidu Pan and One Drive | LOLv2_real/best_PSNR.pth |
| (LOLv2_real)<br />v2 best SSIM | 23.8975 | 0.8705 | 0.1185 | ✗ | Baidu Pan and One Drive | LOLv2_real/best_SSIM.pth |
| (LOLv2_real)<br />v2 best SSIM / w gt mean | 28.3926 | 0.8873 | 0.1136 | ✓ | None | LOLv2_real/best_SSIM.pth |
| (LOLv2_syn)<br />syn wo perc loss / wo gt mean | 25.7048 | 0.9419 | 0.0471 | ✗ | Baidu Pan and One Drive | LOLv2_syn/wo_perc.pth |
| (LOLv2_syn)<br />syn wo perc loss / w gt mean | 29.5663 | 0.9497 | 0.0437 | ✓ | ditto | LOLv2_syn/wo_perc.pth |
| (LOLv2_syn)<br />syn w perc loss / wo gt mean | 25.1294 | 0.9388 | 0.0450 | ✗ | Baidu Pan and One Drive | LOLv2_syn/w_perc.pth |
| (LOLv2_syn)<br />syn w perc loss / w gt mean | 29.3666 | 0.9500 | 0.0403 | ✓ | ditto | LOLv2_syn/w_perc.pth |
| Sony_Total_Dark | 22.9039 | 0.6763 | 0.4109 | ✗ | Baidu Pan and One Drive | SID.pth |
| LOL-Blur | 26.5719 | 0.8903 | 0.1203 | ✗ | Baidu Pan and One Drive | LOL-Blur.pth |
| SICE-Mix | 13.4235 | 0.6360 | 0.3624 | ✗ | Baidu Pan and One Drive | SICE.pth |
| SICE-Grad | 13.4453 | 0.6477 | 0.3181 | ✗ | Baidu Pan and One Drive | SICE.pth |

| Metrics | DICM | LIME | MEF | NPE | VV |
|---|---|---|---|---|---|
| NIQE | 3.79 | 4.13 | 3.56 | 3.74 | 3.21 |
| BRISQUE | 21.47 | 16.25 | 13.77 | 18.92 | 30.63 |

| Folder (test datasets) | PSNR | SSIM | LPIPS | GT Mean | Results | Weights Path |
|---|---|---|---|---|---|---|
| (LOLv1)<br />v1 test finetuning | 25.4036 | 0.8652 | 0.0897 | ✗ | Baidu Pan and One Drive | LOLv1/test_finetuning.pth |
| (LOLv1)<br />v1 test finetuning | 27.5969 | 0.8696 | 0.0869 | ✓ | ditto | ditto |

| Datasets | PSNR | SSIM | LPIPS | GT Mean | Results | Weights Path | Contributor Detail | GPU |
|---|---|---|---|---|---|---|---|---|
| LOLv1 | 24.7401 | 0.8604 | 0.0896 | ✗ | Baidu Pan and One Drive | LOLv1/other/PSNR_24.74.pth | [Xi’an Polytechnic University]<br />Yingjian Li | NVIDIA 4070 |

## 1. Get Started 🌑

### Dependencies and Installation

(1) Create Conda Environment

```
conda create --name CIDNet python=3.7.0
conda activate CIDNet
```

(2) Clone Repo

```
git clone git@github.com:Fediory/HVI-CIDNet.git
```

(3) Install Dependencies

```
cd HVI-CIDNet
pip install -r requirements.txt
```

### Data Preparation

You can refer to the following links to download the datasets. Note that we only use the low_blur and high_sharp_scaled subsets of the LOL-Blur dataset.

Then, place them in the following folder structure:

<details open> <summary>datasets (click to expand)</summary>

```
├── datasets
	├── DICM
	├── LIME
	├── LOLdataset
		├── our485
			├──low
			├──high
		├── eval15
			├──low
			├──high
	├── LOLv2
		├── Real_captured
			├── Train
				├── Low
				├── Normal
			├── Test
				├── Low
				├── Normal
		├── Synthetic
			├── Train
				├── Low
				├── Normal
			├── Test
				├── Low
				├── Normal
	├── LOL_blur
		├── eval
			├── high_sharp_scaled
			├── low_blur
		├── test
			├── high_sharp_scaled
				├── 0012
				├── 0017
				...
			├── low_blur
				├── 0012
				├── 0017
				...
		├── train
			├── high_sharp_scaled
				├── 0000
				├── 0001
				...
			├── low_blur
				├── 0000
				├── 0001
				...
	├── MEF
	├── NPE
	├── SICE
		├── Dataset
			├── eval
				├── target
				├── test
			├── label
			├── train
				├── 1
				├── 2
				...
		├── SICE_Grad
		├── SICE_Mix
		├── SICE_Reshape
	├── Sony_total_dark
		├── eval
			├── long
			├── short
		├── test
			├── long
				├── 10003
				├── 10006
				...
			├── short
				├── 10003
				├── 10006
				...
		├── train
			├── long
				├── 00001
				├── 00002
				...
			├── short
				├── 00001
				├── 00002
				...
	├── VV
```

</details>
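For reference, here is a minimal PyTorch `Dataset` for the paired low/high layout above (e.g. `LOLdataset/our485`). The class name and file-matching logic are illustrative, not part of this repo:

```python
# Minimal paired low-/normal-light dataset sketch (illustrative, not this repo's loader).
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PairedLowLightDataset(Dataset):
    """Loads (low, high) image pairs from folders like datasets/LOLdataset/our485."""

    def __init__(self, root):
        self.low_dir = os.path.join(root, "low")
        self.high_dir = os.path.join(root, "high")
        # Assumes low/ and high/ contain identically named files.
        self.names = sorted(os.listdir(self.low_dir))
        self.to_tensor = transforms.ToTensor()  # HWC uint8 -> CHW float in [0, 1]

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        low = Image.open(os.path.join(self.low_dir, name)).convert("RGB")
        high = Image.open(os.path.join(self.high_dir, name)).convert("RGB")
        return self.to_tensor(low), self.to_tensor(high)
```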

## 2. Testing 🌒

Download our weights from [Baidu Pan] (code: yixu) and put them in the weights folder:

```
├── weights
    ├── LOLv1
        ├── w_perc.pth
        ├── wo_perc.pth
        ├── test_finetuning.pth
    ├── LOLv2_real
        ├── best_PSNR.pth
        ├── best_SSIM.pth
        ├── w_perc.pth
    ├── LOLv2_syn
        ├── w_perc.pth
        ├── wo_perc.pth
    ├── LOL-Blur.pth
    ├── SICE.pth
    ├── SID.pth
```
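If you want to load a checkpoint in your own code rather than through eval.py, a sketch is below; the `net.CIDNet` import path and the no-argument constructor are assumptions, so check the repository's model definition:

```python
# Sketch: loading a released checkpoint (import path and constructor are assumed).
import torch
from net.CIDNet import CIDNet  # assumed location of the model class

model = CIDNet()
model.load_state_dict(torch.load("weights/LOLv1/w_perc.pth", map_location="cpu"))
model.eval()  # inference mode
```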
Run evaluation to generate enhanced results:

```
# LOLv1
python eval.py --lol --perc # weights trained with perceptual loss
python eval.py --lol        # weights trained without perceptual loss

# LOLv2-real
python eval.py --lol_v2_real --best_GT_mean # you can choose best_GT_mean, best_PSNR, or best_SSIM

# LOLv2-syn
python eval.py --lol_v2_syn --perc # weights trained with perceptual loss
python eval.py --lol_v2_syn        # weights trained without perceptual loss

# SICE
python eval.py --SICE_grad # output SICE_grad
python eval.py --SICE_mix  # output SICE_mix

# Sony-Total-Dark
python eval_SID_blur.py --SID

# LOL-Blur
python eval_SID_blur.py --Blur

# Five unpaired datasets: DICM, LIME, MEF, NPE, VV.
# Choose one weights file from the ./weights folder, and set the float --alpha
# (default = 1.0) as the illumination scale for the dataset.
# You can change "--DICM" to any of the other unpaired datasets: LIME, MEF, NPE, VV.
python eval.py --unpaired --DICM --unpaired_weights <weights_path> --alpha <float>
# e.g.
python eval.py --unpaired --DICM --unpaired_weights ./weights/LOLv2_syn/w_perc.pth --alpha 0.9
```
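For intuition, the `--alpha` illumination scale can be pictured as a global gain applied to the predicted brightness before the image is written out. The helper below is a hypothetical toy, not this repo's code:

```python
# Toy illustration of a global illumination scale (hypothetical, not this repo's code).
import torch

def apply_illumination_scale(enhanced_rgb: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Scale overall brightness by alpha and clamp to [0, 1].

    alpha < 1 darkens the output; alpha > 1 brightens it; 1.0 is a no-op.
    """
    return (alpha * enhanced_rgb).clamp(0.0, 1.0)
```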
Then measure metrics on the saved results:

```
# LOLv1
python measure.py --lol

# LOLv2-real
python measure.py --lol_v2_real

# LOLv2-syn
python measure.py --lol_v2_syn

# Sony-Total-Dark
python measure_SID_blur.py --SID

# LOL-Blur
python measure_SID_blur.py --Blur

# SICE-Grad
python measure.py --SICE_grad

# SICE-Mix
python measure.py --SICE_mix

# Five unpaired datasets: DICM, LIME, MEF, NPE, VV.
# You can change "--DICM" to any of the other unpaired datasets: LIME, MEF, NPE, VV.
python measure_niqe_bris.py --DICM
```
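NIQE and BRISQUE are no-reference metrics, so they need no ground truth. If you want to reproduce them outside this script, one option is the pyiqa package (an assumption on our part; measure_niqe_bris.py may use its own implementations):

```python
# No-reference IQA via pyiqa (assumed dependency; not necessarily what
# measure_niqe_bris.py uses internally).
import pyiqa
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
niqe = pyiqa.create_metric("niqe", device=device)
brisque = pyiqa.create_metric("brisque", device=device)

img = torch.rand(1, 3, 256, 256, device=device)  # stand-in for an enhanced image in [0, 1]
print(f"NIQE: {niqe(img).item():.2f}, BRISQUE: {brisque(img).item():.2f}")
```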


Note: Following LLFlow, KinD, and Retinexformer, we also adjust the brightness of the output image produced by the network based on the average value of the ground truth (GT). This only applies to paired datasets. If you want to measure with this adjustment, add "--use_GT_mean", e.g.:

```
python measure.py --lol --use_GT_mean
```
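Concretely, the GT-mean protocol rescales the prediction's mean brightness to match the ground truth before computing the metrics. A minimal sketch of the idea (names are illustrative; see measure.py for the repo's actual implementation):

```python
# GT-mean brightness alignment sketch (illustrative, not the repo's exact code).
import torch

def gt_mean_align(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Rescale pred so its mean intensity matches gt, then clamp to [0, 1]."""
    scale = gt.mean() / pred.mean().clamp(min=1e-8)
    return (pred * scale).clamp(0.0, 1.0)
```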
  
```
python net_test.py
```

## 3. Training 🌓

```
python train.py
```

## 4. Contacts 🌔

If you have any questions, please contact us or open an issue in the repository!

Yixu Feng (yixu-nwpu@mail.nwpu.edu.cn), Cheng Zhang (zhangcheng233@mail.nwpu.edu.cn)

## 5. Citation 🌕

If you find our work useful for your research, please cite our paper:

```bibtex
@misc{feng2024hvi,
      title={You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement}, 
      author={Yixu Feng and Cheng Zhang and Pei Wang and Peng Wu and Qingsen Yan and Yanning Zhang},
      year={2024},
      eprint={2402.05809},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```