Learning to Restore Hazy Video: A New Real-World Dataset and A New Method (CVPR 2021)
The first REal-world VIdeo DEhazing dataset (REVIDE) for supervised training of video dehazing networks.
Updates
(2021.08.08) Released the indoor version of the REVIDE dataset.
TODO list for the near future:
- New official website
- Releasing the outdoor part of REVIDE
- Releasing the synthetic video dehazing dataset (REVIDE-SYN)
- The training and inference scripts
Abstract
Most existing deep learning-based dehazing methods are trained and evaluated on image dehazing datasets, where the dehazed images are generated by exploiting only the information from the corresponding hazy ones. In contrast, video dehazing algorithms, which can achieve more satisfying dehazing results by exploiting the temporal redundancy from neighboring hazy frames, have received less attention due to the absence of video dehazing datasets. Therefore, we propose the first REal-world VIdeo DEhazing (REVIDE) dataset, which can be used for the supervised learning of video dehazing algorithms. Using a well-designed video acquisition system, we capture paired real-world hazy and haze-free videos that are perfectly aligned by recording the same scene (with and without haze) twice. Considering the challenge of exploiting temporal redundancy among hazy frames, we also develop a Confidence Guided and Improved Deformable Network (CG-IDN) for video dehazing. Experiments demonstrate that the hazy scenes in the REVIDE dataset are more realistic than those in synthetic datasets, and that the proposed algorithm performs favorably against state-of-the-art dehazing methods.
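Since the dataset consists of spatially aligned hazy/haze-free frame pairs grouped by scene, a supervised video dehazing pipeline mainly needs to sample temporal windows of neighboring frames from both versions of each scene. Below is a minimal PyTorch sketch of such a clip loader; the directory layout (`hazy/<scene>/*.png` mirrored by `gt/<scene>/*.png`), the class name, and the default clip length are illustrative assumptions, not the released REVIDE structure or our official training code.

```python
import os
from glob import glob

import torch
from torch.utils.data import Dataset
from torchvision.io import read_image


class PairedHazyVideoClips(Dataset):
    """Yields (hazy_clip, gt_clip) tensors of shape (T, C, H, W).

    Assumed (hypothetical) layout:
        root/hazy/<scene>/<frame>.png
        root/gt/<scene>/<frame>.png
    with identical frame names in both folders.
    """

    def __init__(self, root, clip_len=5):
        self.samples = []
        # Enumerate every temporal window of `clip_len` neighboring frames per scene.
        for scene in sorted(os.listdir(os.path.join(root, "hazy"))):
            hazy = sorted(glob(os.path.join(root, "hazy", scene, "*.png")))
            gt = [os.path.join(root, "gt", scene, os.path.basename(p)) for p in hazy]
            for s in range(len(hazy) - clip_len + 1):
                self.samples.append((hazy[s:s + clip_len], gt[s:s + clip_len]))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        hazy_paths, gt_paths = self.samples[idx]
        # Stack frames along the temporal axis and normalize to [0, 1].
        hazy = torch.stack([read_image(p) for p in hazy_paths]).float() / 255.0
        gt = torch.stack([read_image(p) for p in gt_paths]).float() / 255.0
        return hazy, gt
```

A network like CG-IDN would consume the hazy clip and be supervised against the center (or every) ground-truth frame; the windowed sampling above is what exposes the temporal redundancy the abstract refers to.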
Paper & Dataset
Paper: OpenAccess
REVIDE-Indoor Dataset: Baidu Yun (Code:n6r8) | Google Drive
Official Link
Building real-world datasets for video restoration is a long-term project of our team, and we will update the REVIDE dataset (including REVIDE-Indoor, REVIDE-SYN, and more real-world video restoration datasets) on our new official website (coming soon).
Citation
If you use this dataset or method in your research, please cite:
@inproceedings{REVIDE,
  author    = {Zhang, Xinyi and Dong, Hang and Pan, Jinshan and Zhu, Chao and Tai, Ying and Wang, Chengjie and Li, Jilin and Huang, Feiyue and Wang, Fei},
  title     = {Learning To Restore Hazy Video: A New Real-World Dataset and a New Method},
  booktitle = {CVPR},
  pages     = {9239--9248},
  year      = {2021}
}