DMRA_RGBD-SOD
Code repository for our paper entitled "Depth-induced Multi-scale Recurrent Attention Network for Saliency Detection", accepted at ICCV 2019 (poster).
Overall
The proposed Dataset
- Dataset: DUTLF
- Our DUTLF family consists of DUTLF-MV, DUTLF-FS, and DUTLF-Depth.
- The dataset will be expanded to about 4,000 real scenes.
- We are working on it and will make it publicly available soon.
- Dataset: DUTLF-Depth
- The dataset is a subset of DUTLF captured with a Lytro camera; we selected 1,200 image pairs with more accurate depth maps for RGB-D saliency detection.
- We construct a large-scale RGB-D dataset (DUTLF-Depth) with 1,200 paired images containing more complex scenarios, such as multiple or transparent objects, similar foreground and background, complex backgrounds, and low-light environments. This challenging dataset can contribute to comprehensively evaluating saliency models.
- The dataset link can be found here. We split the dataset into a training set of 800 images and a test set of 400 images.
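For reference, below is a minimal sketch of how paired RGB, depth, and ground-truth maps from a split like the one above could be loaded with a PyTorch `Dataset`. The directory names (`RGB/`, `depth/`, `GT/`), the file extensions, and the resizing choice are assumptions for illustration, not the repository's actual loader.

```python
# Minimal sketch of a paired RGB-D loader for a DUTLF-Depth-style split.
# Directory names and extensions below are assumptions, not the repo's layout.
import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class RGBDSalDataset(Dataset):
    def __init__(self, root, size=256):
        self.rgb_dir = os.path.join(root, 'RGB')
        self.depth_dir = os.path.join(root, 'depth')
        self.gt_dir = os.path.join(root, 'GT')
        self.names = sorted(os.path.splitext(f)[0] for f in os.listdir(self.rgb_dir))
        self.size = size

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        rgb = Image.open(os.path.join(self.rgb_dir, name + '.jpg')).convert('RGB')
        depth = Image.open(os.path.join(self.depth_dir, name + '.png')).convert('L')
        gt = Image.open(os.path.join(self.gt_dir, name + '.png')).convert('L')
        # Resize everything to a common resolution and scale to [0, 1].
        rgb = np.asarray(rgb.resize((self.size, self.size)), dtype=np.float32) / 255.0
        depth = np.asarray(depth.resize((self.size, self.size)), dtype=np.float32) / 255.0
        gt = np.asarray(gt.resize((self.size, self.size)), dtype=np.float32) / 255.0
        rgb = torch.from_numpy(rgb).permute(2, 0, 1)   # 3 x H x W
        depth = torch.from_numpy(depth).unsqueeze(0)   # 1 x H x W
        gt = torch.from_numpy(gt).unsqueeze(0)         # 1 x H x W
        return rgb, depth, gt
```

Wrapping this in a `DataLoader` over the 800-image training split and the 400-image test split then follows the usual PyTorch pattern.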
DMRA Code
> Requirements
- pytorch 0.3.0+
- torchvision
- PIL
- numpy
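A quick sanity check of the environment might look like the following (a convenience sketch, not part of the repository):

```python
# Print the versions of the dependencies listed above and check for a GPU.
import torch
import torchvision
import numpy
import PIL

print('pytorch     :', torch.__version__)          # expected 0.3.0 or newer
print('torchvision :', torchvision.__version__)
print('numpy       :', numpy.__version__)
print('Pillow (PIL):', getattr(PIL, '__version__', 'installed'))
print('CUDA available:', torch.cuda.is_available())
```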
> Usage
1. Clone the repo
git clone https://github.com/jiwei0921/DMRA.git
cd DMRA/
2. Train/Test
- test
Download the related dataset link, and set the param '--phase' as "test" and '--param' as 'True' in demo.py. Meanwhile, you need to set the dataset path and checkpoint name correctly (see the flag-handling sketch after this list).
python demo.py
- train
Download our train-augment dataset link [fetch code: haxl] or the train-ori dataset, and set the param '--phase' as "train" and '--param' as 'True' (load a checkpoint) or 'False' (do not load a checkpoint) in demo.py. Meanwhile, you need to set the dataset path and checkpoint name correctly.
python demo.py
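As referenced above, here is a hedged sketch of how the '--phase' and '--param' flags might be wired inside demo.py. Only those two flag names come from this README; the '--checkpoint' argument, the placeholder model, and the default paths are assumptions for illustration, not the repository's actual code.

```python
# Sketch of flag handling for demo.py (placeholder model, assumed paths).
import argparse
import torch
import torch.nn as nn

def build_model():
    # Placeholder network standing in for DMRA; the real model lives in this repo.
    return nn.Sequential(nn.Conv2d(4, 1, kernel_size=3, padding=1))

parser = argparse.ArgumentParser()
parser.add_argument('--phase', type=str, default='test', choices=['train', 'test'])
parser.add_argument('--param', type=str, default='True')   # 'True' -> load a checkpoint
parser.add_argument('--checkpoint', type=str, default='./checkpoints/DMRA.pth')
args = parser.parse_args()

model = build_model()
if args.param == 'True':
    # Resume from / evaluate with saved weights; the path here is an assumed default.
    state = torch.load(args.checkpoint, map_location=lambda storage, loc: storage)
    model.load_state_dict(state)

if args.phase == 'test':
    model.eval()    # inference: save predicted saliency maps
else:
    model.train()   # training: optimize on the 800-image train split
```

Under these assumptions, `python demo.py --phase test --param True` would load the saved weights and switch the network to evaluation mode.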
> Training info and pre-trained models for DMRA
For a better understanding, we retrained our network and recorded detailed training information as well as the corresponding pre-trained models.
| Iterations | Loss | NJUD (F-measure) | NJUD (MAE) | NLPR (F-measure) | NLPR (MAE) | Download link |
| --- | --- | --- | --- | --- | --- | --- |
| 100W | 958 | 0.882 | 0.048 | 0.867 | 0.031 | link |
| 70W | 2413 | 0.876 | 0.050 | 0.854 | 0.033 | link |
| 40W | 3194 | 0.861 | 0.056 | 0.823 | 0.037 | link |
| 16W | 8260 | 0.805 | 0.081 | 0.725 | 0.056 | link |
| 2W | 33494 | 0.009 | 0.470 | 0.030 | 0.452 | link |
| 0W | 45394 | - | - | - | - | - |
- Tips: The results reported in the paper shall prevail. Due to the randomness of the training process, the results may fluctuate slightly.
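For context, the two metrics in the table above are commonly computed as sketched below: MAE as the mean absolute error between the predicted map and the ground truth, and F-measure with beta^2 = 0.3 and an adaptive threshold of twice the mean saliency. These are standard conventions, not necessarily the exact protocol of the evaluation toolbox linked in the Results section.

```python
# Sketch of MAE and F-measure for saliency maps, under common conventions.
import numpy as np

def mae(pred, gt):
    # Both maps are expected in [0, 1] with identical shapes.
    return np.abs(pred - gt).mean()

def f_measure(pred, gt, beta2=0.3):
    # Adaptive threshold: twice the mean saliency value (a common choice).
    thr = min(2.0 * pred.mean(), 1.0)
    binary = pred >= thr
    gt_bin = gt >= 0.5
    tp = np.logical_and(binary, gt_bin).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / (gt_bin.sum() + 1e-8)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)

# Example with random maps, just to show the call signature.
pred = np.random.rand(256, 256)
gt = (np.random.rand(256, 256) > 0.5).astype(np.float32)
print('MAE:', mae(pred, gt), 'F-measure:', f_measure(pred, gt))
```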
> Results
- Quantitative results are reported on seven benchmarks: DUTLF-Depth, NJUD, NLPR, STEREO, LFSD, RGBD135, and SSD.
- Note: For evaluation, all results are computed with this ready-to-use toolbox.
- SIP results: These are the test results on the SIP dataset; the fetch code is 'fi5h'.
> Related RGB-D Saliency Datasets
All common RGB-D saliency datasets we collected are shared in a ready-to-use manner.
- The web link is here.
If you think this work is helpful, please cite:
@inproceedings{piao2019depth,
title={Depth-induced multi-scale recurrent attention network for saliency detection},
author={Piao, Yongri and Ji, Wei and Li, Jingjing and Zhang, Miao and Lu, Huchuan},
booktitle={Proceedings of the IEEE International Conference on Computer Vision},
pages={7254--7263},
year={2019}
}
Related SOTA RGB-D methods' results on our dataset
Meanwhile, we also provide other state-of-the-art RGB-D methods' results on our proposed dataset, and you can directly download them (All results, fetch code: 2gs2).
No. | Pub. | Name | Title | Download |
---|---|---|---|---|
14 | ICCV2019 | DMRA | Depth-induced multi-scale recurrent attention network for saliency detection | results, g7rz |
13 | CVPR2019 | CPFP | Contrast prior and fluid pyramid integration for RGBD salient object detection | results, g7rz |
12 | TIP2019 | TANet | Three-stream attention-aware network for RGB-D salient object detection | results, g7rz |
11 | PR2019 | MMCI | Multi-modal fusion network with multiscale multi-path and cross-modal interactions for RGB-D salient object detection | results, g7rz |
10 | ICME2019 | PDNet | Pdnet: Prior-model guided depth-enhanced network for salient object detection | results, g7rz |
09 | CVPR2018 | PCA | Progressively Complementarity-Aware Fusion Network for RGB-D Salient Object Detection | results, g7rz |
08 | ICCVW2017 | CDCP | An innovative salient object detection using center-dark channel prior | results, g7rz |
07 | TCyb2017 | CTMF | CNNs-based RGB-D saliency detection via cross-view transfer and multiview fusion | results, g7rz |
06 | TIP2017 | DF | RGBD salient object detection via deep fusion | results, g7rz |
05 | CAIP2017 | MB | A Multilayer Backpropagation Saliency Detection Algorithm Based on Depth Mining | results, g7rz |
04 | SPL2016 | DCMC | Saliency detection for stereoscopic images based on depth confidence analysis and multiple cues fusion | results, g7rz |
03 | ECCV2014 | LHM-NLPR | Rgbd salient object detection: a benchmark and algorithms | results, g7rz |
02 | ICIP2014 | ACSD | Depth saliency based on anisotropic center-surround difference | results, g7rz |
01 | ICIMCS2014 | DES | Depth enhanced saliency detection method | results, g7rz |
- Thanks to the related authors for providing code or results, in particular Deng-Ping Fan, Hao Chen, and Chun-Biao Zhu.
Contact Us
If you have any questions, please contact us ( wji3@ualberta.ca or weiji.dlut@gmail.com ).