Zero-DCE TF
A TensorFlow implementation of Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement (CVPR 2020).
Update:
I have deployed the project to Google Cloud Platform; it may still need improvement. If you have any comments or inquiries, or simply want to enhance your images, give it a try here.
Content
Getting Started
- Clone the repository
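For example (the URL below is a placeholder; substitute this repository's actual address):
git clone https://github.com/<username>/Zero-DCE-TF.git
cd Zero-DCE-TF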
Prerequisites
- TensorFlow 2.2.0+
- Python 3.6+
- Keras 2.3.0
- PIL
- numpy
pip install -r requirements.txt
Running
Training
- Preprocess
  - Download the training data from Google Drive.
  - Run the following script to generate the training data (remember to change the path first):
    python src/prepare_data.py
- Train ZERO_DCE
  python train.py
- Test ZERO_DCE
  python test.py
Usage
Training
python train.py [-h] [--lowlight_images_path LOWLIGHT_IMAGES_PATH] [--lr LR]
[--num_epochs NUM_EPOCHS] [--train_batch_size TRAIN_BATCH_SIZE]
[--val_batch_size VAL_BATCH_SIZE] [--display_iter DISPLAY_ITER]
[--checkpoint_iter CHECKPOINT_ITER] [--checkpoints_folder CHECKPOINTS_FOLDER]
[--load_pretrain LOAD_PRETRAIN] [--pretrain_dir PRETRAIN_DIR]
optional arguments:
  -h, --help            show this help message and exit
  --lowlight_images_path LOWLIGHT_IMAGES_PATH
  --lr LR
  --num_epochs NUM_EPOCHS
  --train_batch_size TRAIN_BATCH_SIZE
  --val_batch_size VAL_BATCH_SIZE
  --display_iter DISPLAY_ITER
  --checkpoint_iter CHECKPOINT_ITER
  --checkpoints_folder CHECKPOINTS_FOLDER
  --load_pretrain LOAD_PRETRAIN
  --pretrain_dir PRETRAIN_DIR
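For example, a typical training run might look like this (the paths and hyperparameter values below are illustrative placeholders, not documented defaults):
python train.py --lowlight_images_path data/train_data/ --lr 0.0001 --num_epochs 200 --train_batch_size 8 --val_batch_size 4 --checkpoint_iter 10 --checkpoints_folder weights/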
Testing
python test.py [-h] [--lowlight_test_image_path LOWLIGHT_TEST_IMAGES_PATH]
optional arguments:
  -h, --help            show this help message and exit
  --lowlight_test_image_path LOWLIGHT_TEST_IMAGES_PATH
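For example (the directory name is a placeholder):
python test.py --lowlight_test_image_path data/test_data/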
Video-DCE
Video-DCE is a simple adaptation of the low-light image enhancement script that takes videos as input, processes individual frames with the model, and then outputs a HuffYUV-encoded video while copying the existing audio track from the input video. Unlike the image processing scripts, Video-DCE does not resize the input video frames before processing them. The output video matches the input video's frame rate, but you can specify a different Display Aspect Ratio if needed.
usage: video-dce.py [-h] --input_video INPUT_VIDEO [--output_video OUTPUT_VIDEO] [--max_frames MAX_FRAMES] [--dar DAR]
MAX_FRAMES can be useful for testing results on a portion of the video; frame counting always starts from zero.
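For example, to enhance only the first 300 frames of a clip (the file names and frame count are placeholders):
python video-dce.py --input_video input.avi --output_video enhanced.avi --max_frames 300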
The script has only been tested with SD-resolution videos (720x486, 640x480, and 720x480), as that is my main use case, so there may be bugs depending on the resolution of your input video.
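As a rough illustration of the per-frame processing described above, the loop could be sketched as follows. This is a simplified, hypothetical sketch, not the actual video-dce.py: the model path, the assumption that the model returns the enhanced image directly, and the output file names are all placeholders, and audio copying is omitted.

```python
# Minimal per-frame enhancement sketch (illustrative only; not the actual video-dce.py).
import cv2
import numpy as np
import tensorflow as tf

# Path is an assumption; point this at your trained Zero-DCE weights.
model = tf.keras.models.load_model("weights/zero_dce")

def enhance_frame(frame_bgr):
    """Run one BGR frame through the model without resizing it."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    # Assumes the model returns the enhanced image as its only output.
    out = model.predict(rgb[None, ...], verbose=0)[0]
    out = np.clip(out, 0.0, 1.0)
    return cv2.cvtColor((out * 255).astype(np.uint8), cv2.COLOR_RGB2BGR)

cap = cv2.VideoCapture("input.avi")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
# HFYU = HuffYUV (lossless); requires an OpenCV build with FFmpeg support.
writer = cv2.VideoWriter("output.avi", cv2.VideoWriter_fourcc(*"HFYU"), fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(enhance_frame(frame))

cap.release()
writer.release()
```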
Result
Input | Output (sample low-light input images and their enhanced outputs)
License
This project is licensed under the MIT License - see the LICENSE file for details
References
[1] Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement - CVPR 2020 link
[3] Low-light dataset - link
Citation
@misc{guo2020zeroreference,
title={Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement},
author={Chunle Guo and Chongyi Li and Jichang Guo and Chen Change Loy and Junhui Hou and Sam Kwong and Runmin Cong},
year={2020},
eprint={2001.06826},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
Try on web
The project is now available on GCP. Give it a try!
Acknowledgments
- This repo is a reproduction of the original PyTorch version.
- Thank you for helping me understand more about the pains that TensorFlow may cause.
- Final words:
  - If you have ideas for improvements or spot anything I misunderstood, please send me an email: vovantu.hust@gmail.com
  - If you find this repo helpful, please give it a star.