<!-- <img src="https://github.com/Srameo/DNF/assets/7869323/ed7b8296-fe3c-48f9-bad3-f777a1b80c0b" alt="23CVPR-DNF-pipeline" width="704px"> -->

# DNF: Decouple and Feedback Network for Seeing in the Dark
This repository contains the official implementation of the following paper:
DNF: Decouple and Feedback Network for Seeing in the Dark<br/> Xin Jin<sup>*</sup>, Ling-Hao Han<sup>*</sup>, Zhen Li, Zhi Chai, Chunle Guo, Chongyi Li<br/> (* denotes equal contribution.)<br/> In CVPR 2023
[Paper] [Google Drive] [Homepage (TBD)] [Video (TBD)]
<img src="https://github.com/Srameo/DNF/assets/7869323/01633049-930c-4149-ba04-3d89faa05b69" alt="23CVPR-DNF-example-2" width="768px">

## News
Future work can be found in todo.md.
- May, 2023: Our code is publicly available.
- Mar, 2023: Excited to announce that our paper was selected as a CVPR 2023 Highlight (top 10% of accepted papers, 2.5% of all submissions)!
- Feb, 2023: Our paper "DNF: Decouple and Feedback Network for Seeing in the Dark" has been accepted by CVPR 2023.
- Apr, 2022: A single-stage version of our network won third place in the NTIRE 2022 Night Photography Challenge.
## Dependencies and Installation
- Clone Repo

  ```bash
  git clone https://github.com/Srameo/DNF.git CVPR23-DNF
  ```

- Create Conda Environment and Install Dependencies (a quick environment check is sketched after this list)

  ```bash
  conda create -n dnf python=3.7.11
  conda activate dnf
  pip install -r requirements.txt -f https://download.pytorch.org/whl/cu111/torch_stable.html
  ```
- Download pretrained models from [Pretrained Models](#pretrained-models), and put them in the `pretrained` folder.
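
Before moving on, you may want to confirm that the environment can actually see a GPU. The check below is not part of the repository; it is just a minimal sketch, assuming the CUDA 11.1 PyTorch build installed by the command above.

```python
# Minimal environment sanity check (not part of the repo): verifies that
# PyTorch imports and that a CUDA device is visible before running the demos.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```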
## Pretrained Models
<table>
  <thead>
    <tr>
      <th> Trained on </th>
      <th> :link: Download Links </th>
      <th> Config File </th>
      <th> CFA Pattern </th>
      <th> Framework </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td> SID Sony </td>
      <td> [<a href="https://drive.google.com/file/d/1FHreF_UHFutkiQ0LMdWjX2fahznka0Cb/view?usp=share_link">Google Drive</a>] [<a href="https://pan.baidu.com/s/1-r29zUvCS-Wa2wEYovX89g?pwd=eoiz">Baidu Cloud</a>] </td>
      <td> [<a href="configs/cvpr/sony/baseline.yaml">configs/cvpr/sony/baseline</a>] </td>
      <td> Bayer (RGGB) </td>
      <td> DNF </td>
    </tr>
    <tr>
      <td> SID Fuji </td>
      <td> [<a href="https://drive.google.com/file/d/1WfwZLBbj0EUf_QTYS8Qq5Rzk2iV8QKQ7/view?usp=share_link">Google Drive</a>] [<a href="https://pan.baidu.com/s/1Sz30vAfVfF0gymNgjEUqMw?pwd=biqo">Baidu Cloud</a>] </td>
      <td> [<a href="configs/cvpr/fuji/baseline.yaml">configs/cvpr/fuji/baseline</a>] </td>
      <td> X-Trans </td>
      <td> DNF </td>
    </tr>
    <tr>
      <td> MCR </td>
      <td> [<a href="https://drive.google.com/file/d/1kFYnqJTYfYkRWcojGxgV9DpVup4uFFBR/view?usp=share_link">Google Drive</a>] [<a href="https://pan.baidu.com/s/18CjvaJZ1YtrTa_YUnQo8Vg?pwd=tkbz">Baidu Cloud</a>] </td>
      <td> [<a href="configs/cvpr/mcr/baseline.yaml">configs/cvpr/mcr/baseline</a>] </td>
      <td> Bayer (RGGB) </td>
      <td> DNF </td>
    </tr>
  </tbody>
</table>
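
If you want to double-check a downloaded checkpoint before running the demos, the snippet below is one way to peek inside it. It is only a sketch: it assumes the `.pth` files are ordinary PyTorch checkpoints (either a bare `state_dict` or a dict wrapping one), and it reuses the `pretrained/dnf_sony.pth` path from the demo commands below.

```python
# Sketch: peek inside a downloaded checkpoint. Assumes a standard PyTorch
# checkpoint (a bare state_dict or a dict that wraps one); the actual key
# layout used by this repo may differ.
import torch

ckpt = torch.load("pretrained/dnf_sony.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print(f"{len(state_dict)} entries, first few keys:")
for name in list(state_dict)[:5]:
    print(" ", name)
```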
## Quick Demo

### Try DNF on your own RAW images (with RGGB Bayer pattern)!
- Download the pretrained DNF (trained on the SID Sony subset) into `[PATH]`.
- Remember the directory of your own images as `[DIR]`. To speed things up, you can convert RAW images with the `.ARW` postfix into numpy arrays following the *Convert Your Own RAW Images to Numpy for Acceleration* section in demo.md (a rough conversion sketch follows this list), and add the `-a` option to the command.
- Try DNF on your images!

  ```bash
  bash demos/images_process.sh -p [PATH] -d [DIR] -r [RATIO]
  # [RATIO] denotes the additional digital gain you would like to apply to your images.
  # If your data is in '.npy' format, add the '-a' argument.
  bash demos/images_process.sh -p [PATH] -d [DIR] -r [RATIO] -a  # for data in numpy format

  # Let's see a simple example.
  bash demos/images_process.sh -p pretrained/dnf_sony.pth -d dataset/sid/Sony/short_pack -r 100
  # The above command runs our pretrained DNF on the SID Sony subset with an additional digital gain of 100.
  ```

- Check your results in `runs/CVPR_DEMO/image_demo/results/inference`!
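
For the `-a` fast path, the RAW files need to be converted to numpy arrays first. The packing DNF actually expects is described in demo.md; the snippet below is only a rough sketch of what such a conversion typically looks like for an RGGB Bayer `.ARW` file, using the `rawpy` library and a placeholder file name.

```python
# Rough sketch of RAW -> .npy conversion for an RGGB Bayer file.
# Follow demo.md for the packing DNF actually expects; this only illustrates
# the usual normalize-then-pack idea. Requires: pip install rawpy numpy
import numpy as np
import rawpy

with rawpy.imread("example.ARW") as raw:            # placeholder file name
    bayer = raw.raw_image_visible.astype(np.float32)
    black = float(np.mean(raw.black_level_per_channel))
    bayer = np.clip((bayer - black) / (raw.white_level - black), 0.0, 1.0)

# Pack the RGGB mosaic into four half-resolution channels (R, G, G, B).
packed = np.stack([bayer[0::2, 0::2],   # R
                   bayer[0::2, 1::2],   # G
                   bayer[1::2, 0::2],   # G
                   bayer[1::2, 1::2]],  # B
                  axis=-1)
np.save("example.npy", packed)
```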
### Try DNF on your own RAW video clips (with RGGB Bayer pattern)!
- Download the pretrained DNF (trained on the SID Sony subset) into `[PATH]`.
- Preprocess your RAW video clip and save each frame into `[DIR]` in `.npy` format. You can follow the steps in the *Convert Your Own Video* section of demo.md (a sanity-check sketch follows this list).
- Try DNF on your video clip!

  ```bash
  bash demos/video_process.sh -d [DIR] -p [PATH] -r [RATIO] -s [SAVE_PATH] -f [FILE_NAME]
  # [RATIO] denotes the additional digital gain you would like to apply to your images. Default: 50.
  # [SAVE_PATH] and [FILE_NAME] determine where to save the result.

  # Let's see a simple example.
  bash demos/video_process.sh \
    -d dataset/campus/short_pack \
    -p pretrained/dnf_sony.pth \
    -r 50 \
    -s runs/videos -f campus
  # The above command produces a 24 fps video processed by our DNF with an additional digital gain of 50.
  ```

- Check the result in `[SAVE_PATH]/[FILE_NAME].mp4`.
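
Before launching `video_process.sh`, it can help to confirm that every preprocessed frame landed in `[DIR]` with a consistent shape. The check below is just a convenience sketch; it uses the `dataset/campus/short_pack` path from the example command, so substitute your own `[DIR]`.

```python
# Sanity-check sketch: all preprocessed frames in [DIR] should be .npy files
# sharing a single array shape. Path taken from the example command above.
from pathlib import Path
import numpy as np

frames = sorted(Path("dataset/campus/short_pack").glob("*.npy"))
shapes = {np.load(f, mmap_mode="r").shape for f in frames}
print(f"{len(frames)} frames, shapes: {shapes}")
assert len(shapes) == 1, "all frames should share a single shape"
```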
### Try with data provided by us
Please refer to demo.md to learn how to download the provided data and how to run inference on it.
## Training and Evaluation
Please refer to benchmark.md to learn how to benchmark DNF and how to train a new model from scratch.
<b style='color:red'>Attention!</b> Due to three misaligned images in the SID Sony dataset (scene IDs 10034, 10045, and 10172), the test results reported in the paper exclude these images. The txt file used for testing (`Sony_new_test_list.txt`) can be downloaded from Google Drive.
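
If you prefer to reproduce the filtered list yourself rather than download it, one simple approach is to drop every entry that mentions the three scene IDs from the original SID Sony test list. The snippet below is a sketch under the assumption that the original list is named `Sony_test_list.txt` with one entry per line; the file provided on Google Drive remains the authoritative version.

```python
# Sketch: rebuild Sony_new_test_list.txt by removing the three misaligned
# scenes (10034, 10045, 10172). Assumes the original SID list is named
# 'Sony_test_list.txt' with one entry per line; the provided Google Drive
# file is the authoritative version.
excluded = ("10034", "10045", "10172")

with open("Sony_test_list.txt") as f:
    lines = f.readlines()

kept = [line for line in lines if not any(sid in line for sid in excluded)]
with open("Sony_new_test_list.txt", "w") as f:
    f.writelines(kept)
print(f"kept {len(kept)} of {len(lines)} entries")
```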
## Citation
If you find our repo useful for your research, please consider citing our paper:
```bibtex
@inproceedings{jincvpr23dnf,
  title     = {DNF: Decouple and Feedback Network for Seeing in the Dark},
  author    = {Jin, Xin and Han, Linghao and Li, Zhen and Chai, Zhi and Guo, Chunle and Li, Chongyi},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2023}
}
```
## License
This code is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License for non-commercial use only. Please note that any commercial use of this code requires formal permission prior to use.
## Contact
For technical questions, please contact `xjin[AT]mail.nankai.edu.cn` and `lhhan[AT]mail.nankai.edu.cn`.

For commercial licensing, please contact `cmm[AT]nankai.edu.cn`.
## Acknowledgement
This repository borrows heavily from BasicSR and Learning-to-See-in-the-Dark.