USOD10K: A New Benchmark Dataset for Underwater Salient Object Detection
Source code and dataset for our paper “USOD10K: A New Benchmark Dataset for Underwater Salient Object Detection” by Lin Hong, Xin Wang, Gan Zhang, and Ming Zhao, published in IEEE TIP 2023.
Created by Lin Hong, email: 20B953023@stu.hit.edu.cn or lin.hong@tum.de
USOD10K dataset
Download: Baidu Netdisk: USOD10K (fetch code: good) | Google Drive: USOD10K. USOD10K is the first large-scale dataset for Underwater Salient Object Detection (USOD). It is free for academic research and may not be used for any commercial purpose.
Note: for practical training and reliable test results of deep methods on USOD10K, each category needs enough samples in the training, validation, and test sets (the training and validation sets are merged in the TC-USOD baseline). Hence we split USOD10K roughly 7:2:1. Its folder structure looks like this (a minimal loading sketch follows the tree):
Data
|-- USOD10K
|   |-- USOD10K-TR
|   |   |-- USOD10K-TR-RGB
|   |   |-- USOD10K-TR-GT
|   |   |-- USOD10K-TR-depth
|   |   |-- USOD10K-TR-Boundary
|   |-- USOD10K-Val
|   |   |-- USOD10K-Val-RGB
|   |   |-- USOD10K-Val-GT
|   |   |-- USOD10K-Val-depth
|   |   |-- USOD10K-Val-Boundary
|   |-- USOD10K-TE
|   |   |-- USOD10K-TE-RGB
|   |   |-- USOD10K-TE-GT
|   |   |-- USOD10K-TE-depth
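As a usage sketch, this layout maps directly onto a small PyTorch Dataset. The class below is hypothetical (it is not part of this repo); it only assumes the folder names shown above and .png masks/depth maps.

```python
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class USOD10KSet(Dataset):
    """Minimal loader for one USOD10K split (hypothetical helper, not from the repo)."""

    def __init__(self, root, split="TR", size=256):
        # e.g. root = "Data/USOD10K", split in {"TR", "Val", "TE"}
        base = os.path.join(root, f"USOD10K-{split}")
        self.rgb_dir = os.path.join(base, f"USOD10K-{split}-RGB")
        self.gt_dir = os.path.join(base, f"USOD10K-{split}-GT")
        self.depth_dir = os.path.join(base, f"USOD10K-{split}-depth")
        self.names = sorted(os.listdir(self.rgb_dir))
        self.to_tensor = transforms.Compose(
            [transforms.Resize((size, size)), transforms.ToTensor()]
        )

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        stem = os.path.splitext(name)[0]  # GT/depth assumed to share the RGB stem
        rgb = self.to_tensor(Image.open(os.path.join(self.rgb_dir, name)).convert("RGB"))
        gt = self.to_tensor(Image.open(os.path.join(self.gt_dir, stem + ".png")).convert("L"))
        depth = self.to_tensor(Image.open(os.path.join(self.depth_dir, stem + ".png")).convert("L"))
        return rgb, depth, gt
```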
TC-USOD baseline
The TC-USOD baseline is simple but strong. It adopts a hybrid encoder-decoder architecture that uses transformers as the basic computational building block of the encoder and convolutions as that of the decoder.
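For intuition only, here is a toy sketch of such a hybrid design: a transformer encoder over patch tokens followed by a convolutional decoder that upsamples back to a one-channel saliency map. It is a generic stand-in, not the actual TC-USOD network (which builds on a T2T-ViT backbone).

```python
import torch
import torch.nn as nn

class HybridSODNet(nn.Module):
    """Toy transformer-encoder / conv-decoder network (illustrative only)."""

    def __init__(self, img_size=224, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        self.grid = img_size // patch                        # tokens per side
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.decoder = nn.Sequential(                        # convolutional decoder, x16 upsampling
            nn.Conv2d(dim, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, 3, padding=1),                  # 1-channel saliency logits
        )

    def forward(self, x):
        tokens = self.embed(x).flatten(2).permute(2, 0, 1)   # (N, B, dim), seq-first
        tokens = self.encoder(tokens)
        feat = tokens.permute(1, 2, 0).reshape(x.size(0), -1, self.grid, self.grid)
        return self.decoder(feat)

# Example: saliency = torch.sigmoid(HybridSODNet()(torch.randn(1, 3, 224, 224)))
```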
How to generate predicted saliency maps yourself or retrain this model: create a folder named checkpoint under the TC_USOD folder (cd TC_USOD && mkdir checkpoint) and put the pretrained TC-USOD baseline (fetch code: ie0k) in it to generate the predicted saliency maps (you can also find them under TC_USOD/preds/USOD10K in this project). Of course, you can also retrain this method on the available USOD10K dataset to get your own model. A rough inference outline follows.
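In this sketch, `net` stands for the restored baseline model (the real TC-USOD also consumes depth), and USOD10KSet is the toy loader sketched above. All names here are placeholders, not the repo's actual API.

```python
import os
import torch
from torchvision.utils import save_image

def save_predictions(net, dataset, out_dir="preds/USOD10K"):
    """Run the (hypothetical) restored model over a split and save saliency maps."""
    os.makedirs(out_dir, exist_ok=True)
    net.eval()
    with torch.no_grad():
        for name, (rgb, depth, _) in zip(dataset.names, dataset):
            pred = torch.sigmoid(net(rgb.unsqueeze(0)))[0]  # 1 x H x W map in [0, 1]
            save_image(pred, os.path.join(out_dir, os.path.splitext(name)[0] + ".png"))
```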
Requirements
- Python 3.8
- Pytorch 1.6.0
- Torchvision 0.7.0
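A quick way to sanity-check the environment (illustrative only; newer versions may also work):

```python
import torch
import torchvision

# Versions the baseline was developed against.
print("torch:", torch.__version__)              # expect 1.6.0
print("torchvision:", torchvision.__version__)  # expect 0.7.0
print("CUDA available:", torch.cuda.is_available())
```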
Benchmark
We retrained 35 SOTA methods from the SOD and USOD fields; most of these deep methods were proposed in 2020, 2021, and 2022, and retraining them took about 1,750 hours. Here are the evaluation resources for the 35 SOTA methods and the TC-USOD baseline (a minimal sketch of one standard metric, MAE, follows the list):
(1) Retrained models: benchmark_pth (fetch code: usod)
(2) Predicted saliency maps on USOD10K: USOD10K_predictions (fetch code: usod)
(3) Predicted saliency maps on USOD: USOD_predictions (fetch code: usod)
(4) Evaluation results (fetch code: usod)
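For reference, MAE (mean absolute error) is one of the standard metrics for scoring predicted saliency maps against ground truth. The snippet below is a generic implementation, not the evaluation tool credited in the Acknowledgement:

```python
import numpy as np
from PIL import Image

def mae(pred_path, gt_path):
    """Mean absolute error between a predicted map and its GT (same resolution assumed)."""
    pred = np.asarray(Image.open(pred_path).convert("L"), dtype=np.float64) / 255.0
    gt = np.asarray(Image.open(gt_path).convert("L"), dtype=np.float64) / 255.0
    return float(np.abs(pred - gt).mean())
```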
Bibliography entry
If you find our work helpful, please cite:
@ARTICLE{10102831,
  author={Hong, Lin and Wang, Xin and Zhang, Gan and Zhao, Ming},
  journal={IEEE Transactions on Image Processing},
  title={USOD10K: A New Benchmark Dataset for Underwater Salient Object Detection},
  year={2023},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/TIP.2023.3266163}}
SOD datasets
(1) NJUD [baidu pan fetch code: 7mrn | Google drive]
(2) NLPR [baidu pan fetch code: tqqm | Google drive]
(3) DUTLF-Depth [baidu pan fetch code: 9jac | Google drive]
(4) STERE [baidu pan fetch code: 93hl | Google drive]
(5) LFSD [baidu pan fetch code: l2g4 | Google drive]
(6) RGBD135 [baidu pan fetch code: apzb | Google drive]
(7) SSD [baidu pan fetch code: j3v0 | Google drive]
(8) SIP [baidu pan fetch code: q0j5 | Google drive]
Acknowledgement
We thank the authors of VST for providing the T2T-ViT backbone, the authors of DPT for providing the method used to obtain the estimated depth maps of the single underwater images in USOD10K, the authors of SVAM-Net for providing the USOD dataset, and Zhao Zhang for providing the efficient evaluation tool.
Note to active participants
To spark the USOD research community, we discuss several potential use cases and applications of the USOD10K dataset and USOD methods in the paper, and highlight some promising research directions for this young but challenging field.
We hope our work will boost the development of USOD research. However, as a young research field, USOD is still far from solved, leaving large room for further improvement!