# RAS

This code is for the paper "Reverse Attention for Salient Object Detection".
## PyTorch Version

A PyTorch version is available here.
## Citation

    @inproceedings{chen2018eccv,
      author={Chen, Shuhan and Tan, Xiuli and Wang, Ben and Hu, Xuelong},
      booktitle={European Conference on Computer Vision},
      title={Reverse Attention for Salient Object Detection},
      year={2018}
    }

    @article{chen2020tip,
      author={Chen, Shuhan and Tan, Xiuli and Wang, Ben and Lu, Huchuan and Hu, Xuelong and Fu, Yun},
      journal={IEEE Transactions on Image Processing},
      title={Reverse Attention Based Residual Network for Salient Object Detection},
      volume={29},
      pages={3763-3776},
      year={2020}
    }
## Installing

- Install the prerequisites for Caffe (http://caffe.berkeleyvision.org/installation.html#prerequisites).
- Build DSS [1] with cuDNN v5.1 for acceleration, setting `USE_CUDNN := 1` in `Makefile.config`. Suppose the root directory of DSS is `$DSS`.
- Copy the folder `RAS` to `$DSS/example/`.
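Concretely, enabling cuDNN means editing Caffe's `Makefile.config` before running `make`. A minimal excerpt is sketched below; the CUDA and library paths are assumptions, so adjust them to your installation:

```makefile
# Makefile.config (excerpt) -- enable cuDNN acceleration
USE_CUDNN := 1

# CUDA toolkit directory (an assumption; adjust to your system)
CUDA_DIR := /usr/local/cuda

# Make sure the cuDNN headers and libraries are on these paths
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include $(CUDA_DIR)/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib $(CUDA_DIR)/lib64
```

After editing, build Caffe and its Python bindings from `$DSS` with `make all` followed by `make pycaffe`.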
## Training

- Prepare the training dataset and its corresponding data list.
- Download the pre-trained VGG model (VGG-16) and copy it to `$DSS/example/RAS`.
- Change the dataset path in `$DSS/example/RAS/train.prototxt`.
- Run `solve.py` in a shell (or use an IDE such as Eclipse):

        cd $DSS/example/RAS/
        python solve.py
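The exact format of the data list depends on the data layer declared in `train.prototxt`; DSS-style training lists typically put one "image path, ground-truth mask path" pair per line. A minimal sketch for generating such a list (the directory layout, file extensions, and function name here are assumptions):

```python
import os

def write_data_list(image_dir, mask_dir, list_path):
    """Write 'image mask' pairs, one per line, keeping only images with a mask."""
    names = sorted(os.path.splitext(f)[0]
                   for f in os.listdir(image_dir) if f.endswith(".jpg"))
    with open(list_path, "w") as out:
        for name in names:
            mask = os.path.join(mask_dir, name + ".png")
            if os.path.exists(mask):
                out.write("%s %s\n" % (os.path.join(image_dir, name + ".jpg"), mask))
```

For example, pointing `image_dir` at the MSRA-B images and `mask_dir` at the binary ground-truth masks produces a list you can reference from the data layer.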
## Testing

- Change the dataset path in `$DSS/example/RAS-tutorial_save.py`.
- Run `jupyter notebook RAS-tutorial_save.ipynb`.
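After a forward pass, the notebook saves each prediction as an 8-bit grayscale saliency map; the normalization step amounts to something like the following sketch (the function name is an assumption, and `pred` stands in for the network's sigmoid output):

```python
import numpy as np

def to_saliency_map(pred):
    """Min-max normalize a prediction and scale it to an 8-bit saliency map."""
    pred = np.asarray(pred, dtype=np.float64)
    lo, hi = pred.min(), pred.max()
    if hi > lo:
        pred = (pred - lo) / (hi - lo)
    else:
        pred = np.zeros_like(pred)  # constant map -> all zeros
    return (pred * 255).round().astype(np.uint8)
```

The resulting array can then be written out as a grayscale image with any image library.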
## Evaluation

We use the code of [1] for evaluation.
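For reference, two metrics reported by that evaluation code are MAE and the F-measure (F-beta with beta^2 = 0.3, the standard choice in salient object detection). A minimal numpy sketch of both, using a single fixed threshold as a simplification (function names are assumptions):

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a saliency map and a mask, both in [0, 1]."""
    return float(np.mean(np.abs(pred - gt)))

def f_measure(pred, gt, threshold=0.5, beta2=0.3):
    """F-beta score (beta^2 = 0.3) of the thresholded map against a binary mask."""
    binary = pred >= threshold
    positive = gt > 0.5
    tp = np.logical_and(binary, positive).sum()
    precision = tp / max(binary.sum(), 1)
    recall = tp / max(positive.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)
```

The full evaluation code additionally sweeps thresholds to report the maximum F-measure over a precision-recall curve.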
## Pre-trained RAS model

The pre-trained RAS model on MSRA-B is available at Baidu drive (code: h7qj) and Google drive. Note that this released model is newly trained and differs slightly from the one reported in our paper.
## Saliency Map

- ECCV 2018: the saliency maps on 7 datasets are available at Baidu drive (code: zin5) and Google drive.
- TIP 2020: the saliency maps on 6 datasets are available at Google drive.
## Reference

[1] Hou, Q., Cheng, M.M., Hu, X., Borji, A., Tu, Z., Torr, P.: Deeply supervised salient object detection with short connections. In: CVPR (2017) 5300-5309.