
# LightDiff: Light the Night: A Multi-Condition Diffusion Framework for Unpaired Low-Light Enhancement in Autonomous Driving (CVPR 2024)

paper supplement

<!-- [![video](https://img.shields.io/badge/Video-Presentation-F9D371)]() -->

This is the official implementation of the CVPR 2024 paper "Light the Night: A Multi-Condition Diffusion Framework for Unpaired Low-Light Enhancement in Autonomous Driving".

Jinlong Li<sup>1*</sup>, Baolu Li<sup>1*</sup>, Zhengzhong Tu<sup>2</sup>, Xinyu Liu<sup>1</sup>, Qing Guo<sup>3</sup>, Felix Juefei-Xu<sup>4</sup>, Runsheng Xu<sup>5</sup>, Hongkai Yu<sup>1</sup>

<sup>1</sup>Cleveland State University, <sup>2</sup>University of Texas at Austin, <sup>3</sup>A*STAR, <sup>4</sup>New York University, <sup>5</sup>UCLA

Computer Vision and Pattern Recognition (CVPR), 2024

Project Page <br>

<!-- teaser figure -->

## Getting Started

### Environment Setup

```bash
conda env create -f environment.yml
conda activate lightdiff
```

Note: we recommend first installing the BEVDepth environment; once that installs successfully, install the ControlNet environment on top of it.

### Model Training

The training code is in "train.py" and the dataset code in "", both of which closely follow ControlNet and are straightforward. Set the data paths in these Python files before running:

```bash
python train.py
```
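
Because the pipeline builds on ControlNet, `train.py` is expected to follow the shape of ControlNet's `tutorial_train.py`. The sketch below shows that pattern; `cldm.model` comes from the ControlNet codebase, while `LightDiffDataset`, the checkpoint path, and the hyperparameters are placeholders, not this repo's actual names or values.

```python
# ControlNet-style training entry point (a sketch under the assumptions above,
# not this repo's actual train.py).
import pytorch_lightning as pl
from torch.utils.data import DataLoader

from cldm.model import create_model, load_state_dict  # ControlNet codebase
from dataset import LightDiffDataset                  # hypothetical dataset class

# Placeholder paths: point these at your config and initial checkpoint.
model = create_model('./models/lightdiff_v15.yaml').cpu()
model.load_state_dict(load_state_dict('./models/init.ckpt', location='cpu'))
model.learning_rate = 1e-5
model.sd_locked = True        # freeze the Stable Diffusion backbone, train the control branch
model.only_mid_control = False

dataloader = DataLoader(LightDiffDataset(), num_workers=4, batch_size=4, shuffle=True)
trainer = pl.Trainer(gpus=1, precision=32)
trainer.fit(model, dataloader)
```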

### Model Testing

```bash
python test.py   # uses the config file in ./models/lightdiff_v15.yaml
```
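
Configs like `lightdiff_v15.yaml` in ControlNet-derived code are OmegaConf YAML files; a quick way to inspect one before running, assuming the usual `model.target`/`model.params` layout of such configs:

```python
# Peek at the model config referenced above. The model.target / model.params
# layout is the common latent-diffusion/ControlNet convention, assumed here.
from omegaconf import OmegaConf

cfg = OmegaConf.load('./models/lightdiff_v15.yaml')
print(cfg.model.target)                  # class that create_model() instantiates
print(OmegaConf.to_yaml(cfg.model.params)[:500])
```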

### Image Quality Evaluation

Set the image paths in "image_noreference_score.py" before running:

```bash
python image_noreference_score.py
```
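
For a quick standalone check, no-reference scores can also be computed with the `pyiqa` package. The metric choice (MUSIQ) and the results path below are assumptions; the repo's script may compute different scores.

```python
# Standalone no-reference IQA sketch with pyiqa (pip install pyiqa).
# MUSIQ and the ./results path are illustrative assumptions.
import glob
import torch
import pyiqa

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
metric = pyiqa.create_metric('musiq', device=device)   # no-reference metric

paths = glob.glob('./results/*.png')
scores = [metric(p).item() for p in paths]             # pyiqa metrics accept file paths
print(f'mean MUSIQ over {len(scores)} images: {sum(scores) / len(scores):.3f}')
```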

## Data Preparation

Organize the nuScenes data as follows:

```
nuScenes
├── maps
├── samples
├── sweeps
├── v1.0-test
└── v1.0-trainval
```
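
Once the data is in place, the layout can be sanity-checked with the official `nuscenes-devkit`; `./nuScenes` below stands in for your actual data root.

```python
# Sanity-check the nuScenes layout above (pip install nuscenes-devkit).
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version='v1.0-trainval', dataroot='./nuScenes', verbose=True)
print(len(nusc.scene), 'scenes loaded from v1.0-trainval')
```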

### Training set

We select all 616 daytime scenes of the nuScenes training set, containing a total of 24,745 front-camera images, as our training set.

### Testing set

We select all 15 nighttime scenes of the nuScenes validation set, containing a total of 602 front-camera images, as our testing set. For your convenience, you can download the data from the validation set.
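
Both splits can be reproduced from the devkit, since nuScenes marks nighttime scenes with "night" in their descriptions. A sketch using standard devkit calls (the data root is a placeholder):

```python
# Rebuild the daytime-train / nighttime-val front-camera splits from scene
# descriptions. Standard nuscenes-devkit usage; dataroot is a placeholder.
from nuscenes.nuscenes import NuScenes
from nuscenes.utils.splits import create_splits_scenes

nusc = NuScenes(version='v1.0-trainval', dataroot='./nuScenes', verbose=False)
splits = create_splits_scenes()  # {'train': [...scene names...], 'val': [...], ...}

def cam_front_files(scene_names, want_night):
    files = []
    for scene in nusc.scene:
        is_night = 'night' in scene['description'].lower()
        if scene['name'] not in scene_names or is_night != want_night:
            continue
        token = scene['first_sample_token']
        while token:  # walk the keyframe samples of this scene
            sample = nusc.get('sample', token)
            files.append(nusc.get('sample_data', sample['data']['CAM_FRONT'])['filename'])
            token = sample['next']
    return files

train_day = cam_front_files(splits['train'], want_night=False)  # daytime training frames
val_night = cam_front_files(splits['val'], want_night=True)     # nighttime testing frames
print(len(train_day), 'daytime train frames;', len(val_night), 'nighttime val frames')
```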

## Multi-modality Data Generation

### Instruction prompt

We obtain the instruction prompts using LENS.

### Depth map

We obtain depth maps for the training and testing images using High Resolution Depth Maps.
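
If you prefer to generate depth maps locally, a monocular estimator such as MiDaS gives comparable single-image depth. MiDaS is an illustration here, not necessarily the model behind the linked tool, and `input.jpg` is a placeholder.

```python
# Monocular depth sketch with MiDaS via torch.hub (an illustration; the linked
# High Resolution Depth Maps tool is what the paper actually used).
import cv2
import torch

midas = torch.hub.load('intel-isl/MiDaS', 'DPT_Large')          # downloads weights on first use
transforms = torch.hub.load('intel-isl/MiDaS', 'transforms')
midas.eval()

img = cv2.cvtColor(cv2.imread('input.jpg'), cv2.COLOR_BGR2RGB)  # placeholder input
with torch.no_grad():
    pred = midas(transforms.dpt_transform(img))                 # (1, H', W') relative inverse depth
    pred = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2], mode='bicubic', align_corners=False
    ).squeeze()
cv2.imwrite('depth.png', (255 * pred / pred.max()).cpu().numpy().astype('uint8'))
```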

### Corresponding degraded dark-light images for the training set

We generate the corresponding degraded dark-light images on the fly during training, based on code from ICCV_MAET; the degradation is integrated into the training-stage data processing.
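
The MAET-style degradation darkens images in linear RGB and injects sensor noise. A simplified sketch of that idea follows; the actual ICCV_MAET pipeline additionally simulates white balance, color correction, and quantization, and its parameters differ.

```python
# Simplified MAET-style low-light degradation (illustrative parameters; the
# real ICCV_MAET pipeline is more elaborate).
import numpy as np

def degrade_to_night(img_uint8, exposure=0.1, shot_noise=0.01, read_noise=5e-4):
    rgb = (img_uint8.astype(np.float32) / 255.0) ** 2.2        # undo display gamma -> linear
    dark = rgb * exposure                                      # under-expose in linear space
    var = dark * shot_noise + read_noise                       # Poisson-Gaussian noise model
    noisy = np.clip(dark + np.random.normal(0.0, np.sqrt(var)), 0.0, 1.0)
    return (noisy ** (1.0 / 2.2) * 255.0).astype(np.uint8)     # re-apply gamma
```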

Although the degraded images may not precisely replicate the authentic appearance of real nighttime, the t-SNE distribution of our synthesized data is much closer to real nighttime than to real daytime, as shown below:

<img src="./images/SNE.png" alt="t-SNE comparison of synthesized and real data" width="900" style="display: block; margin: 0 auto;">

## Citation

If you use our work in your research, please cite the following paper:

```bibtex
@inproceedings{li2024light,
  title={Light the Night: A Multi-Condition Diffusion Framework for Unpaired Low-Light Enhancement in Autonomous Driving},
  author={Li, Jinlong and Li, Baolu and Tu, Zhengzhong and Liu, Xinyu and Guo, Qing and Juefei-Xu, Felix and Xu, Runsheng and Yu, Hongkai},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={15205--15215},
  year={2024}
}
```

## Acknowledgment

This code is modified from ControlNet-v1-1-nightly and BEVDepth. Many thanks to these projects.