night_enhancement (ECCV'2022)

Introduction

This is an implementation of the following paper.

Unsupervised Night Image Enhancement: When Layer Decomposition Meets Light-Effects Suppression
European Conference on Computer Vision (ECCV'2022)

Yeying Jin, Wenhan Yang and Robby T. Tan

[Paper] [Supplementary] [arXiv] [Poster] [Slides] [Link] [Video]


Prerequisites (or follow the bilibili video tutorial)

```
git clone https://github.com/jinyeying/night-enhancement.git
cd night-enhancement/
conda create -n night python=3.7
conda activate night
conda install pytorch=1.10.2 torchvision torchaudio cudatoolkit=11.3 -c pytorch
python3 -m pip install -r requirements.txt
```
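After installation, a quick sanity check that PyTorch sees the GPU can save debugging time later. This is a minimal illustrative snippet, not part of the repo; the CUDA runtime must match your driver.

```python
# Sanity check: confirm the pinned PyTorch build and CUDA availability.
import torch
import torchvision

print(torch.__version__)          # expect 1.10.2
print(torchvision.__version__)
print(torch.cuda.is_available())  # True if the CUDA 11.3 runtime matches your driver
```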

Datasets

Light-Effects Suppression on Night Data

  1. Light-effects data [Dropbox] | [BaiduPan (code:self)] <br> Light-effects data is collected from Flickr and by ourselves, with multiple light colors in various scenes. <br>
<p align="left"> <img width="950" src="teaser/self-collected.png"> </p>
  2. LED data [Dropbox] | [BaiduPan (code:ledl)] <br> We captured images with dimmer light as the reference images.
<p align="left"> <img width="350" src="teaser/LED.PNG"> </p>
  3. GTA5 nighttime fog [Dropbox] | [BaiduPan (code:67ml)] <br> Synthetic GTA5 nighttime fog data.<br>
<p align="left"> <img width="350" src="teaser/GTA5.PNG"> </p>
  4. Syn-light-effects [Dropbox] | [BaiduPan (code:synt)] <br> Synthetic-light-effects data is generated with the implementation of the glow-rendering paper (a rough sketch of the idea follows this list):<br>
glow_rendering_code/repro_ICCV2007_Fig5.m
<p align="left"> <img width="350" src="teaser/syn.PNG"> </p>
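For intuition about what the MATLAB script reproduces, the sketch below fakes glow by spreading energy from bright pixels with a wide Gaussian blur and adding it back. This is a crude illustrative approximation of glow rendering, not the repo's repro_ICCV2007_Fig5.m; the threshold, sigma, and strength values are assumptions.

```python
# Illustrative glow rendering: spread energy from bright pixels with a wide
# Gaussian kernel and add it back (a stand-in for the ICCV 2007 APSF-style model).
import numpy as np
from scipy.ndimage import gaussian_filter

def render_glow(img, threshold=0.8, sigma=25.0, strength=0.6):
    """img: float32 array in [0, 1], shape (H, W, 3)."""
    bright = np.where(img > threshold, img, 0.0)              # keep only light sources
    glow = gaussian_filter(bright, sigma=(sigma, sigma, 0))   # spread their energy spatially
    return np.clip(img + strength * glow, 0.0, 1.0)
```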

1. Low-Light Enhancement:

Pre-trained Model

  1. Download the pre-trained LOL model [Dropbox] | [BaiduPan (code:lol2)] and put it in ./results/LOL/model/
  2. Put the test images in ./LOL/

Low-light Enhancement Test

šŸ”„ReplicatešŸ”„ Online test: https://replicate.com/cjwbw/night-enhancement

<p align="left"> <img width="750" src="teaser/lowlight_enhance.png"> </p>
```
python main.py
```

Low-light Enhancement Train

  1. Download Low-Light Enhancement Dataset

1.1 LOL dataset <br> "Deep Retinex Decomposition for Low-Light Enhancement", BMVC, 2018. [Baiduyun (code:sdd0)] | [Google Drive] <br>

1.2 LOL_Cap dataset <br> "Sparse Gradient Regularized Deep Retinex Network for Robust Low-Light Image Enhancement", TIP, 2021. [Baiduyun (code:l9xm)] | [Google Drive] <br>

```
|-- LOL_Cap
    |-- trainA ## Low
    |-- trainB ## Normal
    |-- testA  ## Low
    |-- testB  ## Normal
```
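Before training, it is worth confirming the dataset matches this layout. A minimal illustrative helper (the root path is whatever you pass via --datasetpath):

```python
# Verify the LOL_Cap folder layout before training (illustrative helper).
import os

root = "/home1/yeying/data/LOL_Cap"  # same path passed via --datasetpath
for split in ("trainA", "trainB", "testA", "testB"):
    folder = os.path.join(root, split)
    n = len(os.listdir(folder)) if os.path.isdir(folder) else 0
    print(f"{split}: {n} images")
```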

  2. Train. Note that no decomposition or light-effects guidance is used for low-light enhancement:

```
CUDA_VISIBLE_DEVICES=1 python main.py --dataset LOL --phase train --datasetpath /home1/yeying/data/LOL_Cap/
```
<p align="left"> <img width="750" src="teaser/lowlight_enhancement.PNG"> </p>

Low-light Enhancement Results

<p align="left"> <img width="750" src="teaser/lowlight.PNG"> </p>
  1. LOL-test Results (15 test images) [Dropbox] | [BaiduPan (code:lol1)]<br>

This reproduces Table 3 of the main paper on the LOL-test dataset.

| Learning | Method | PSNR | SSIM |
| --- | --- | --- | --- |
| Unsupervised Learning | Ours | 21.521 | 0.7647 |
| N/A | Input | 7.773 | 0.1259 |
<p align="left"> <img width="350" src="teaser/LOL.PNG"> </p>
  2. LOL_Cap Results (100 test images) [Dropbox] | [BaiduPan (code:lolc)]<br>

This reproduces Table 4 of the main paper on the LOL-Real dataset.

| Learning | Method | PSNR | SSIM |
| --- | --- | --- | --- |
| Unsupervised Learning | Ours | 25.51 | 0.8015 |
| N/A | Input | 9.72 | 0.1752 |
<p align="left"> <img width="350" src="teaser/LOL_real.PNG"> </p>
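The PSNR/SSIM values above are standard full-reference metrics. A minimal sketch of computing them with scikit-image (the file names here are hypothetical, skimage ≄ 0.19 is assumed for channel_axis, and the paper's evaluation script may differ in details):

```python
# Compute PSNR/SSIM between an enhanced result and its reference (illustrative).
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

pred = io.imread("result.png")      # hypothetical enhanced output
gt = io.imread("reference.png")     # hypothetical ground-truth image

psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.3f}  SSIM: {ssim:.4f}")
```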

Re-trained (from scratch) on LOL_V2_real (698 training images) and tested on LOL_V2_real [Dropbox] | [BaiduPan (code:lol2)]:<br> PSNR: 20.85 (vs. EnlightenGAN's 18.23), SSIM: 0.7243 (vs. EnlightenGAN's 0.61).

2. Light-Effects Suppression:

Pre-trained Model

  1. Download the pre-trained de-light-effects model [Dropbox] | [BaiduPan (code:dele)] and put it in ./results/delighteffects/model/
  2. Put the test images in ./light-effects/

Light-effects Suppression Test

```
python main_delighteffects.py
```

Decomposition1

<p align="left"> <img width="350" src="teaser/demo1.png"> </p>

Inputs are in ./light-effects/; outputs are in ./light-effects-output/. <br> The inputs and outputs serve as trainA and trainB for the translation network.

Run the notebook demo_all.ipynb, or:

```
python demo.py
```
<p align="left"> <img width="950" src="teaser/light_effects.PNG"> </p>

Decomposition2

<p align="left"> <img width="350" src="teaser/demo2.png"> </p>

Inputs are in ./light-effects/; outputs are in ./light-effects-output/DSC01065/. <br> The inputs and outputs serve as trainA and trainB for the translation network.

```
python demo_separation.py --img_name DSC01065.JPG
```

Decomposition3

Run the MATLAB script:

```
demo_decomposition.m
```

The inputs and the initial background results serve as trainA and trainB for the translation network (see the sketch after the table below).

| Initial Background Results | Light-Effects Results | Shading Results |
| --- | --- | --- |
| [Dropbox] | [Dropbox] | [Dropbox] |
| [BaiduPan (code:jjjj)] | [BaiduPan (code:lele)] | [BaiduPan (code:llll)] |
<p align="left"> <img width="350" src="teaser/decomposition.png"> </p>
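To feed the decomposition outputs to the translation network, the inputs and initial background results need to land in trainA/trainB folders. A minimal illustrative helper; the destination path and folder names here are assumptions, not the repo's exact convention:

```python
# Illustrative: copy inputs and initial background results into the
# trainA/trainB folders expected by the translation network.
import os
import shutil

src_inputs = "./light-effects/"             # original inputs -> trainA
src_background = "./light-effects-output/"  # initial background results -> trainB
dst = "./dataset/delighteffects/"           # assumed destination root

for split, src in (("trainA", src_inputs), ("trainB", src_background)):
    os.makedirs(os.path.join(dst, split), exist_ok=True)
    for f in os.listdir(src):
        if f.lower().endswith((".png", ".jpg", ".jpeg")):
            shutil.copy(os.path.join(src, f), os.path.join(dst, split, f))
```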

Light-effects Suppression Train

```
CUDA_VISIBLE_DEVICES=1 python main.py --dataset delighteffects --phase train --datasetpath /home1/yeying/data/light-effects/
```

Feature Results:

  1. Run the MATLAB code to adaptively fuse the three color channels and output I_gray (a rough Python illustration is sketched below):

```
checkGrayMerge.m
```
<p align="left"> <img width="350" src="VGG_code/results_VGGfeatures/DSC01607_I_GrayBest.png"> </p>
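As a rough illustration of adaptive channel fusion: search over convex channel weights and keep the grayscale image that best preserves structure. The selection criterion below (mean gradient magnitude) is an assumption for illustration; see checkGrayMerge.m for the actual one.

```python
# Rough illustration of adaptive channel fusion: try convex channel weights
# and keep the gray image with the largest mean gradient magnitude.
# NOTE: the scoring criterion is an assumption, not the MATLAB implementation.
import numpy as np

def fuse_gray(img, steps=10):
    """img: float32 (H, W, 3) in [0, 1]; returns the best single-channel fusion."""
    best, best_score = None, -1.0
    for wr in np.linspace(0.0, 1.0, steps + 1):
        for wg in np.linspace(0.0, 1.0 - wr, steps + 1):
            wb = 1.0 - wr - wg
            gray = wr * img[..., 0] + wg * img[..., 1] + wb * img[..., 2]
            gy, gx = np.gradient(gray)
            score = np.mean(np.hypot(gx, gy))  # proxy for preserved structure
            if score > best_score:
                best, best_score = gray, score
    return best
```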
  2. Download the fine-tuned VGG model [Dropbox] | [BaiduPan (code:dark)] (fine-tuned on ExDark) and put it in ./VGG_code/ckpts/vgg16_featureextractFalse_ExDark/nets/model_best.tar

  3. Obtain structure features:

```
python test_VGGfeatures.py
```
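For intuition, extracting intermediate VGG16 activations looks roughly like the sketch below. The layer choice, input file, and checkpoint handling are assumptions; test_VGGfeatures.py uses the ExDark fine-tuned weights rather than the ImageNet ones loaded here.

```python
# Minimal sketch: extract intermediate VGG16 features for a grayscale image
# replicated to 3 channels. Layer index and file name are illustrative assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

vgg = models.vgg16(pretrained=True).features.eval()  # swap in the ExDark fine-tuned weights
img = Image.open("light-effects/DSC01607.JPG").convert("L").convert("RGB")
x = T.Compose([T.Resize(256), T.ToTensor()])(img).unsqueeze(0)

with torch.no_grad():
    feat = vgg[:16](x)  # activations up to relu3_3 (assumed layer)
print(feat.shape)
```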

Summary of Comparisons:

<p align="left"> <img width="350" src="teaser/comparison.png"> </p>

License

The code and models in this repository are licensed under the MIT License for academic and other non-commercial uses.<br> For commercial use of the code and models, separate commercial licensing is available; please contact the authors.

Acknowledgments

The decomposition code is implemented based on DoubleDIP, Layer Separation, and LIME.<br> The translation code is implemented based on U-GAT-IT; we would like to thank the authors. <br> One trick used in networks.py is to change out = self.UpBlock2(x) to out = (self.UpBlock2(x)+input).tanh() so the network learns a residual, as sketched below.
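In context, the residual trick looks roughly like this. The module structure below is a sketch of a U-GAT-IT-style generator forward pass; only the UpBlock2 line follows the repo's description, the rest is a placeholder.

```python
# Sketch of the residual trick: the decoder predicts a residual that is added
# to the input before tanh, instead of predicting the output directly.
import torch.nn as nn

class ResidualGenerator(nn.Module):
    def __init__(self, backbone, up_block2):
        super().__init__()
        self.backbone = backbone   # encoder + bottleneck (stand-in)
        self.UpBlock2 = up_block2  # decoder, as named in networks.py

    def forward(self, input):
        x = self.backbone(input)
        # original: out = self.UpBlock2(x)
        out = (self.UpBlock2(x) + input).tanh()  # learn a residual over the input
        return out
```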

Citations

If this work is useful for your research, please cite our papers.

@inproceedings{jin2022unsupervised,
  title={Unsupervised night image enhancement: When layer decomposition meets light-effects suppression},
  author={Jin, Yeying and Yang, Wenhan and Tan, Robby T},
  booktitle={European Conference on Computer Vision},
  pages={404--421},
  year={2022},
  organization={Springer}
}

@inproceedings{jin2023enhancing,
  title={Enhancing visibility in nighttime haze images using guided apsf and gradient adaptive convolution},
  author={Jin, Yeying and Lin, Beibei and Yan, Wending and Yuan, Yuan and Ye, Wei and Tan, Robby T},
  booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
  pages={2446--2457},
  year={2023}
}

If light-effects data is useful for your research, please cite the paper.

@inproceedings{sharma2021nighttime,
  title={Nighttime Visibility Enhancement by Increasing the Dynamic Range and Suppression of Light Effects},
  author={Sharma, Aashish and Tan, Robby T},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={11977--11986},
  year={2021}
}