Detect Globally, Refine Locally: A Novel Approach to Saliency Detection (DGRL)
This package contains the source code for the paper "Detect Globally, Refine Locally: A Novel Approach to Saliency Detection" (CVPR 2018).
Paper link
- The paper can be found on Baidu drive or Google drive.
How to use
Train
- For the Global Localization Network (GLN), use the code in ./stage1/train/ for training. Example images are given in ./stage1/train/data/. Download the initialization model from Baidu drive or Google drive.
- For the Boundary Refinement Network (BRN), use the code in ./stage2/train/ for training. After the GLN has been trained, run ./stage1/test/test.m to generate its saliency maps. Example images are given in ./stage2/train/data/. Every image in the training set, i.e. the saliency map generated by the GLN, the original RGB image, and the ground truth, should be resized to 480 * 480 with the 'nearest' method (a resizing sketch is given after this list). Use ./stage2/train/init/generate_train.m to generate the initialization model.
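Below is a minimal MATLAB sketch of the 480 * 480 nearest-neighbor resizing step. The folder names (rgb/, gln/, gt/ and their *_480/ counterparts) are only assumptions for illustration; point them at your own data layout.

```matlab
% Sketch: resize training triplets (RGB image, GLN saliency map, ground truth)
% to 480 x 480 with nearest-neighbor interpolation.
% The input/output folder names below are assumptions, not the repo's layout.
in_dirs  = {'./stage2/train/data/rgb/',     './stage2/train/data/gln/',     './stage2/train/data/gt/'};
out_dirs = {'./stage2/train/data/rgb_480/', './stage2/train/data/gln_480/', './stage2/train/data/gt_480/'};
for d = 1:numel(in_dirs)
    if ~exist(out_dirs{d}, 'dir'), mkdir(out_dirs{d}); end
    files = [dir(fullfile(in_dirs{d}, '*.jpg')); dir(fullfile(in_dirs{d}, '*.png'))];
    for k = 1:numel(files)
        img = imread(fullfile(in_dirs{d}, files(k).name));
        img = imresize(img, [480 480], 'nearest');   % 'nearest' keeps labels and maps crisp
        imwrite(img, fullfile(out_dirs{d}, files(k).name));
    end
end
```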
Test
- Download our trained model from Baidu drive or Google drive.
- Run ./stage1/test/test.m to generate the saliency maps of the Global Localization Network (GLN).
- Run ./stage2/test/test.m to generate the saliency maps of the Boundary Refinement Network (BRN). The GLN saliency maps serve as the input of the BRN. A minimal calling sketch follows this list.
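As a rough guide, the two test stages can be chained as below; this assumes each test.m resolves its paths relative to its own folder (MATLAB's run switches into the script's folder before executing it), so adjust if your setup differs.

```matlab
% Sketch: run the two-stage test pipeline in order.
% Assumption: each test.m uses paths relative to its own folder.
run('./stage1/test/test.m');   % GLN: generate coarse saliency maps
run('./stage2/test/test.m');   % BRN: refine the maps, reading the GLN output
```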
Download
The saliency maps on 10 datasets (ECSSD, PASCAL-S, SOD, SED1, SED2, MSRA, DUT-OMRON, THUR15K, HKU-IS and DUTS) can be found at the following links.
GLN: Baidu drive or Google drive.
BRN: Baidu drive or Google drive.
Cite this work
If you find this work useful in your research, please consider citing:
@inproceedings{wang2018detect,
title={Detect Globally, Refine Locally: A Novel Approach to Saliency Detection},
author={Wang, Tiantian and Zhang, Lihe and Wang, Shuo and Lu, Huchuan and Yang, Gang and Ruan, Xiang and Borji, Ali},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
pages={3127--3135},
year={2018}
}