R<sup>3</sup>Net: Recurrent Residual Refinement Network for Saliency Detection
by Zijun Deng, Xiaowei Hu, Lei Zhu, Xuemiao Xu, Jing Qin, Guoqiang Han, and Pheng-Ann Heng [paper link]
This implementation is written by Zijun Deng at the South China University of Technology.
Citation
@inproceedings{deng18r,
  author    = {Deng, Zijun and Hu, Xiaowei and Zhu, Lei and Xu, Xuemiao and Qin, Jing and Han, Guoqiang and Heng, Pheng-Ann},
  title     = {R$^{3}${N}et: Recurrent Residual Refinement Network for Saliency Detection},
  booktitle = {IJCAI},
  year      = {2018}
}
Saliency Map
The results of salient object detection on five datasets (ECSSD, HKU-IS, PASCAL-S, SOD, DUT-OMRON) can be found at Google Drive.
Trained Model
You can download the trained model reported in our paper from Google Drive.
Requirements
- Python 2.7
- PyTorch 0.4.0
- torchvision
- numpy
- Cython
- pydensecrf (see here for installation; an environment check sketch follows this list)
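To confirm that the installed packages match the versions listed above, a quick check like the one below can be run first. This is only a minimal sketch and is not part of the repository:

```python
# Minimal environment check for the requirements listed above (not part of the repo).
from __future__ import print_function

import sys
import numpy
import torch
import torchvision
import pydensecrf.densecrf  # provided by the pydensecrf package

print('Python      :', sys.version.split()[0])   # expected: 2.7.x
print('PyTorch     :', torch.__version__)        # expected: 0.4.0
print('torchvision :', torchvision.__version__)
print('numpy       :', numpy.__version__)
print('CUDA found  :', torch.cuda.is_available())
```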
Training
- Set the path of the pretrained ResNeXt model in resnext/config.py
- Set the path of the MSRA10K dataset in config.py (a sketch of both path entries is shown after this list)
- Run by
python train.py
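Both configuration files mentioned above only store filesystem paths. The sketch below is purely illustrative; the actual variable names are defined in config.py and resnext/config.py, so adapt them to whatever those files expect:

```python
# Hypothetical example of the path entries -- check the real variable names in the repo.

# config.py
msra10k_path = '/home/user/datasets/MSRA10K'  # root folder of the MSRA10K training set

# resnext/config.py
resnext_pretrained_path = '/home/user/models/resnext_101_32x4d.pth'  # pretrained ResNeXt weights
```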
The pretrained ResNeXt model is ported from the official Torch version using the converter provided by clcarwin. You can also directly download the model I ported.
The training hyper-parameters are gathered at the beginning of train.py; you can conveniently change them as needed.
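For orientation, the hyper-parameter block at the top of train.py is a plain Python dictionary. The sketch below only illustrates the kind of entries you can expect to find there; the key names and values are placeholders, not the settings used in the paper:

```python
# Illustrative sketch only -- consult train.py for the real keys and default values.
args = {
    'iter_num': 6000,          # total number of training iterations
    'train_batch_size': 10,    # images per mini-batch
    'lr': 1e-3,                # initial learning rate
    'lr_decay': 0.9,           # power of the polynomial learning-rate decay
    'weight_decay': 5e-4,      # L2 regularization strength
    'momentum': 0.9,           # SGD momentum
    'snapshot': ''             # checkpoint to resume from (empty = train from scratch)
}
```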
Training a model on a single GTX 1080Ti GPU takes about 70 minutes.
Testing
- Set the paths of the five benchmark datasets in config.py
- Put the trained model in ckpt/R3Net
- Run by
python infer.py
The testing settings are gathered at the beginning of infer.py; you can conveniently change them as needed.
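As with training, these settings form a small Python dictionary at the top of infer.py. The following is only an illustrative sketch with assumed key names; consult infer.py for the real ones:

```python
# Illustrative sketch only -- consult infer.py for the real keys.
args = {
    'snapshot': '6000',     # which checkpoint in ckpt/R3Net to load
    'crf_refine': True,     # refine saliency maps with a dense CRF (pydensecrf)
    'save_results': True    # write the predicted saliency maps to disk
}
```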