[ECCV 2024] GLARE: Low Light Image Enhancement via Generative Latent Feature based Codebook Retrieval [Paper]

<h4 align="center">Han Zhou<sup>1,*</sup>, Wei Dong<sup>1,*</sup>, Xiaohong Liu<sup>2,&dagger;</sup>, Shuaicheng Liu<sup>3</sup>, Xiongkuo Min<sup>2</sup>, Guangtao Zhai<sup>2</sup>, Jun Chen<sup>1,&dagger;</sup></h4>
<h4 align="center"><sup>1</sup>McMaster University, <sup>2</sup>Shanghai Jiao Tong University, <sup>3</sup>University of Electronic Science and Technology of China</h4>
<h4 align="center"><sup>*</sup>Equal Contribution, <sup>&dagger;</sup>Corresponding Authors</h4>


Introduction

This repository is the official implementation of our ECCV 2024 paper GLARE: Low Light Image Enhancement via Generative Latent Feature based Codebook Retrieval. If you find this repo useful, please give it a star ⭐ and consider citing our paper in your research. Thank you.

We present GLARE, a novel network for low-light image enhancement.

Overall Framework

(Teaser figure: overall framework of GLARE.)

📢 News

- **2024-12-19**: Training code will be released within one week. ⭐
- **2024-09-25**: Another paper of ours, ECMamba: Consolidating Selective State Space Model with Retinex Guidance for Efficient Multiple Exposure Correction, has been accepted by NeurIPS 2024. Code and the preprint will be released at: <a href="https://github.com/LowlevelAI/ECMamba"><img src="https://img.shields.io/github/stars/LowlevelAI/ECMamba"/></a>. :rocket:
- **2024-09-21**: Inference code for unpaired images and pre-trained models for LOL-v2-real are released! :rocket:
- **2024-07-21**: Inference code and pre-trained models for LOL are released! Feel free to use them. ⭐
- **2024-07-21**: License updated to the Apache License, Version 2.0. 💫
- **2024-07-19**: Paper is available at: <a href="https://arxiv.org/pdf/2407.12431"><img src="https://img.shields.io/badge/arXiv-PDF-b31b1b" height="16"></a>. :tada:
- **2024-07-01**: Our paper has been accepted by ECCV 2024. Code and models will be released. :rocket:

∞ TODO

🛠️ Setup

The inference code was tested on:

📦 Repository

Clone the repository (requires git):

```shell
git clone https://github.com/LowLevelAI/GLARE.git
cd GLARE
```

💻 Dependencies

🏃 Testing on benchmark datasets

📷 Download the following datasets:

LOL Google Drive

LOL-v2 Google Drive

⬇ Download pre-trained models

Download the pre-trained weights for LOL and the pre-trained weights for LOL-v2-real, and place them in the folders pretrained_weights_lol and pretrained_weights_lol-v2-real, respectively.

🚀 Run inference

For LOL dataset

```shell
python code/infer_dataset_lol.py
```

For LOL-v2-real dataset

```shell
python code/infer_dataset_lolv2-real.py
```

For unpaired testing, please make sure the `dataroot_unpaired` entry in the .yml file is correct.
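The exact key layout depends on the config files shipped with the repo; a hypothetical excerpt is shown below only to illustrate the option you need to point at your own images:

```yaml
# Hypothetical excerpt -- check the actual .yml used by infer_unpaired.py;
# only dataroot_unpaired is named in this README, the surrounding keys are assumed.
datasets:
  unpaired:
    dataroot_unpaired: /path/to/your/low-light/images
```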

```shell
python code/infer_unpaired.py
```

You can find all results in results/. Enjoy!
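If you want to sanity-check the enhanced outputs against ground-truth images, a standard PSNR computation can be used. This is a generic sketch, not the repository's own evaluation script; the arrays here are synthetic stand-ins for images you would load from results/ and the reference set:

```python
import numpy as np

def psnr(img1: np.ndarray, img2: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two same-shape uint8 images."""
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

# Synthetic example; in practice load images, e.g. with PIL or OpenCV.
a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 255, dtype=np.uint8)
print(psnr(a, a))  # identical -> inf
print(psnr(a, b))  # maximal pixel difference -> 0.0
```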

🏋️ Training

Coming soon.

✏️ Contributing

Please refer to these instructions.

🎓 Citation

Please cite our paper:

@InProceedings{Han_ECCV24_GLARE,
    author    = {Zhou, Han and Dong, Wei and Liu, Xiaohong and Liu, Shuaicheng and Min, Xiongkuo and Zhai, Guangtao and Chen, Jun},
    title     = {GLARE: Low Light Image Enhancement via Generative Latent Feature based Codebook Retrieval},
    booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
    year      = {2024}
}

@article{GLARE,
    author    = {Zhou, Han and Dong, Wei and Liu, Xiaohong and Liu, Shuaicheng and Min, Xiongkuo and Zhai, Guangtao and Chen, Jun},
    title     = {GLARE: Low Light Image Enhancement via Generative Latent Feature based Codebook Retrieval},
    journal   = {arXiv preprint arXiv:2407.12431},
    year      = {2024}
}

🎫 License

This work is licensed under the Apache License, Version 2.0 (as defined in the LICENSE).

By downloading and using the code and model you agree to the terms in the LICENSE.
