DMRA_RGBD-SOD

Code repository for our paper entitled "Depth-induced Multi-scale Recurrent Attention Network for Saliency Detection", accepted at ICCV 2019 (poster).

Overall

(Figure: overall framework of DMRA.)

The proposed Dataset

  1. Our DUTLF family consists of DUTLF-MV, DUTLF-FS, and DUTLF-Depth.
  2. The dataset will be expanded to about 4,000 real scenes.
  3. We are working on it and will make it publicly available soon.

About DUTLF-Depth:

  1. The dataset is part of the DUTLF dataset captured with a Lytro camera; we selected 1,200 pairs with more accurate depth maps for RGB-D saliency detection.
  2. We create a large-scale RGB-D dataset (DUTLF-Depth) with 1,200 paired images containing more complex scenarios, such as multiple or transparent objects, similar foreground and background, complex backgrounds, and low-intensity environments. This challenging dataset can contribute to comprehensively evaluating saliency models (a minimal loading sketch is given after this list).
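A minimal sketch of how such paired RGB / depth / ground-truth data could be loaded with PyTorch is shown below. The directory layout (RGB/, depth/, and GT/ folders with matching file names) and the class name are assumptions for illustration only, not necessarily the layout of the released dataset.

```python
import os
from PIL import Image
from torch.utils.data import Dataset


class PairedRGBDDataset(Dataset):
    """Hypothetical loader for paired RGB / depth / ground-truth saliency maps.

    Assumes a layout of the form root/RGB/xxx.jpg, root/depth/xxx.png,
    root/GT/xxx.png, which may differ from the released DUTLF-Depth layout.
    """

    def __init__(self, root, transform=None):
        self.root = root
        # Use the RGB folder to enumerate sample names shared by all modalities.
        self.names = sorted(os.path.splitext(f)[0]
                            for f in os.listdir(os.path.join(root, "RGB")))
        self.transform = transform

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        rgb = Image.open(os.path.join(self.root, "RGB", name + ".jpg")).convert("RGB")
        depth = Image.open(os.path.join(self.root, "depth", name + ".png")).convert("L")
        gt = Image.open(os.path.join(self.root, "GT", name + ".png")).convert("L")
        if self.transform is not None:
            rgb, depth, gt = self.transform(rgb, depth, gt)
        return rgb, depth, gt
```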

(Figure: example scenes from the proposed DUTLF-Depth dataset.)

DMRA Code

> Requirement

> Usage

1. Clone the repo

git clone https://github.com/jiwei0921/DMRA.git
cd DMRA/

2. Train/Test

python demo.py

> Training info and pre-trained models for DMRA

For better understanding, we retrained our network and recorded detailed training logs as well as the corresponding pre-trained models.

| Iterations | Loss  | NJUD (F-measure) | NJUD (MAE) | NLPR (F-measure) | NLPR (MAE) | Download |
| ---------- | ----- | ---------------- | ---------- | ---------------- | ---------- | -------- |
| 100W       | 958   | 0.882            | 0.048      | 0.867            | 0.031      | link     |
| 70W        | 2413  | 0.876            | 0.050      | 0.854            | 0.033      | link     |
| 40W        | 3194  | 0.861            | 0.056      | 0.823            | 0.037      | link     |
| 16W        | 8260  | 0.805            | 0.081      | 0.725            | 0.056      | link     |
| 2W         | 33494 | 0.009            | 0.470      | 0.030            | 0.452      | link     |
| 0W         | 45394 | -                | -          | -                | -          | -        |

(Here 1W denotes 10,000 iterations.)
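For reference, the MAE and F-measure values reported above follow the standard saliency-evaluation definitions. The sketch below is a generic implementation of those definitions (β² = 0.3 and an adaptive threshold of twice the prediction's mean value are common conventions), not necessarily the exact evaluation code used for this table.

```python
import numpy as np


def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and its ground
    truth, both given as float arrays scaled to [0, 1]."""
    return np.mean(np.abs(pred - gt))


def f_measure(pred, gt, beta2=0.3):
    """F-measure with the widely used beta^2 = 0.3 weighting.

    The prediction is binarized with the common adaptive threshold of twice
    its mean value (one of several conventions in the literature)."""
    threshold = min(2.0 * pred.mean(), 1.0)
    binary = pred >= threshold
    gt_bin = gt >= 0.5
    tp = np.logical_and(binary, gt_bin).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / (gt_bin.sum() + 1e-8)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
```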

> Results

Saliency maps of DMRA are provided for the following datasets: DUTLF-Depth, NJUD, NLPR, STEREO, LFSD, RGBD135, and SSD.

> Related RGB-D Saliency Datasets

All the common RGB-D saliency datasets we have collected are shared in a ready-to-use manner.

If you find this work helpful, please cite:

@inproceedings{piao2019depth,
  title={Depth-induced multi-scale recurrent attention network for saliency detection},
  author={Piao, Yongri and Ji, Wei and Li, Jingjing and Zhang, Miao and Lu, Huchuan},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  pages={7254--7263},
  year={2019}
}

Related SOTA RGB-D methods' results on our dataset

Meanwhile, we also provide the results of other state-of-the-art RGB-D methods on our proposed dataset, and you can download them directly (All results, 2gs2).

| No. | Pub.        | Name     | Title                                                                                                                  | Download      |
| --- | ----------- | -------- | ---------------------------------------------------------------------------------------------------------------------- | ------------- |
| 14  | ICCV 2019   | DMRA     | Depth-induced multi-scale recurrent attention network for saliency detection                                            | results, g7rz |
| 13  | CVPR 2019   | CPFP     | Contrast prior and fluid pyramid integration for RGBD salient object detection                                          | results, g7rz |
| 12  | TIP 2019    | TANet    | Three-stream attention-aware network for RGB-D salient object detection                                                 | results, g7rz |
| 11  | PR 2019     | MMCI     | Multi-modal fusion network with multiscale multi-path and cross-modal interactions for RGB-D salient object detection   | results, g7rz |
| 10  | ICME 2019   | PDNet    | PDNet: prior-model guided depth-enhanced network for salient object detection                                            | results, g7rz |
| 09  | CVPR 2018   | PCA      | Progressively complementarity-aware fusion network for RGB-D salient object detection                                   | results, g7rz |
| 08  | ICCVW 2017  | CDCP     | An innovative salient object detection using center-dark channel prior                                                  | results, g7rz |
| 07  | TCyb 2017   | CTMF     | CNNs-based RGB-D saliency detection via cross-view transfer and multiview fusion                                        | results, g7rz |
| 06  | TIP 2017    | DF       | RGBD salient object detection via deep fusion                                                                           | results, g7rz |
| 05  | CAIP 2017   | MB       | A multilayer backpropagation saliency detection algorithm based on depth mining                                         | results, g7rz |
| 04  | SPL 2016    | DCMC     | Saliency detection for stereoscopic images based on depth confidence analysis and multiple cues fusion                  | results, g7rz |
| 03  | ECCV 2014   | LHM-NLPR | RGBD salient object detection: a benchmark and algorithms                                                               | results, g7rz |
| 02  | ICIP 2014   | ACSD     | Depth saliency based on anisotropic center-surround difference                                                          | results, g7rz |
| 01  | ICIMCS 2014 | DES      | Depth enhanced saliency detection method                                                                                | results, g7rz |

Contact Us

If you have any questions, please contact us (wji3@ualberta.ca or weiji.dlut@gmail.com).