UVE-38K
- Each GIF below is a short clip from a video (left: raw input, right: enhanced reference).
<center>marine ranching</center><img src="./imgs/marine_ranching_4-raw.gif" width="400"> <img src="./imgs/marine_ranching_4-ref.gif" width="400">
<center>shark</center><img src="./imgs/shark-raw.gif" width="400"> <img src="./imgs/shark-ref.gif" width="400">
<center>dive</center><img src="./imgs/dive-raw.gif" width="400"> <img src="./imgs/dive-ref.gif" width="400">
<center>cuttlefish</center><img src="./imgs/cuttlefish-raw.gif" width="400"> <img src="./imgs/cuttlefish-ref.gif" width="400">
<center>coral</center><img src="./imgs/coral-raw.gif" width="400"> <img src="./imgs/coral-ref.gif" width="400">
Overview
UVE-38K is a large real-world underwater video enhancement dataset with inter-frame consistent references. The raw underwater videos are collected from the Dive+ community and from the underwater object detection dataset of the Underwater Robot Picking Contest (URPC). We adopt the following 12 enhancement methods to generate an enhancement candidate pool for each frame: CLAHE [1], DCP [2], the fusion-based method [3], GBdehaze [4], GC, ICM [5], MIP [6], RGHS [7], ROWS [8], UCM [9], WaterNet [10], and Dive+ [11]. Volunteers were then invited to pick out the best enhancement approach for each video according to its overall performance on the whole video. Videos whose intermediate references show obvious style differences undergo further post-refinement to obtain more consistent references. The dataset includes:
- 50 video sequences with more than 38,000 frames
- A variety of resolutions; more than half of the videos exceed 720P
- 7 main scene and object categories, plus others
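The per-frame candidate-pool step described above can be sketched as follows. This is a minimal, NumPy-only illustration: `hist_equalize` is a simplified stand-in for methods like CLAHE or GC, and all function names here are hypothetical, not code shipped with the dataset.

```python
import numpy as np

def hist_equalize(channel):
    """Global histogram equalization on a uint8 channel
    (a simplified stand-in for one of the 12 enhancement methods)."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)         # build lookup table
    return lut[channel]                                # remap every pixel

def candidate_pool(frame, methods):
    """Run every enhancement method on one frame, producing the
    candidate pool from which the best result is later selected."""
    return {name: fn(frame) for name, fn in methods.items()}

# Hypothetical usage on a random grayscale frame:
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
pool = candidate_pool(frame, {"hist_eq": hist_equalize, "identity": lambda f: f})
```

In the actual pipeline each method runs on every frame of a video, and volunteers judge whole-video results rather than single frames, so temporal consistency of the chosen method matters as much as per-frame quality.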
References
[1] K. Zuiderveld, “Contrast limited adaptive histogram equalization,” Graphics Gems, pp. 474–485, 1994.
[2] K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341–2353, 2011.
[3] C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, “Enhancing underwater images and videos by fusion,” in IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 81–88.
[4] C. Li, J. Quo, Y. Pang, S. Chen, and J. Wang, “Single underwater image restoration by blue-green channels dehazing and red channel correction,” in IEEE International Conference on Acoustics, Speech and Signal Processing, 2016, pp. 1731–1735.
[5] K. Iqbal, R. A. Salam, A. Osman, and A. Z. Talib, “Underwater image enhancement using an integrated colour model,” IAENG International Journal of Computer Science, vol. 34, no. 2, pp. 239–244, 2007.
[6] N. Carlevaris-Bianco, A. Mohan, and R. M. Eustice, “Initial results in underwater single image dehazing,” in OCEANS 2010 MTS/IEEE SEATTLE, 2010, pp. 1–8.
[7] D. Huang, Y. Wang, W. Song, J. Sequeira, and S. Mavromatis, “Shallow-water image enhancement using relative global histogram stretching based on adaptive parameter acquisition,” in International Conference on Multimedia Modeling. Springer, 2018, pp. 453–465.
[8] C. Liu and M. Wang, “Removal of water scattering,” in International Conference on Computer Engineering and Technology, vol. 2, 2010, pp. V2-35–V2-39.
[9] K. Iqbal, M. Odetayo, A. James, R. A. Salam, and A. Z. H. Talib, “Enhancing the low quality images using unsupervised colour correction method,” in IEEE International Conference on Systems, Man and Cybernetics, 2010, pp. 1703–1709.
[10] C. Li, C. Guo, W. Ren, R. Cong, J. Hou, S. Kwong, and D. Tao, “An underwater image enhancement benchmark dataset and beyond,” IEEE Transactions on Image Processing, vol. 29, pp. 4376–4389, 2020.
[11] Dive+ App, https://diveplus.cn/app
Downloads
- Part Ⅰ: Baidu Cloud (extraction code: 9254), Google Drive
- The whole dataset: Baidu Cloud (extraction code: uved)
Contributors
Yongchang Zhang, Kunqian Li, Qi Qi, Shaobao Hu and Fei Tian from Ocean University of China.
Note
The whole dataset is available on request from the authors. If you find this dataset helpful, please cite the following works.
@article{qi2021underwater,
title={Underwater image co-enhancement with correlation feature matching and joint learning},
author={Qi, Qi and Zhang, Yongchang and Tian, Fei and Wu, QM Jonathan and Li, Kunqian and Luan, Xin and Song, Dalei},
journal={IEEE Transactions on Circuits and Systems for Video Technology},
year={2021},
publisher={IEEE}
}
@article{qi2022sguie,
title={SGUIE-Net: Semantic Attention Guided Underwater Image Enhancement with Multi-Scale Perception},
author={Qi, Qi and Li, Kunqian and Zheng, Haiyong and Gao, Xiang and Hou, Guojia and Sun, Kun},
journal={arXiv preprint arXiv:2201.02832},
year={2022}
}