AFNet

Code for the CVPR 2019 paper 'Attentive Feedback Network for Boundary-aware Salient Object Detection' by Mengyang Feng, Huchuan Lu and Errui Ding. [google drive] [baidu yun]

Contact: Mengyang Feng, Email: mengyang_feng@mail.dlut.edu.cn

AFNet pipeline

Abstract: Recent deep-learning-based salient object detection methods achieve gratifying performance built upon Fully Convolutional Neural Networks (FCNs). However, most of them suffer from the boundary challenge: state-of-the-art methods employ feature aggregation and can precisely locate salient objects, but they often fail to segment out the entire object with fine boundaries, especially raised narrow stripes. So there is still large room for improvement over FCN-based models. In this paper, we design Attentive Feedback Modules (AFMs) to better explore the structure of objects. A Boundary-Enhanced Loss (BEL) is further employed for learning exquisite boundaries. Our proposed deep model produces satisfying results on object boundaries and achieves state-of-the-art performance on five widely tested salient object detection benchmarks. The network runs in a fully convolutional fashion at 26 FPS and does not need any post-processing.

Pre-computed saliency maps: [google drive] [baidu yun] (Fetch Code: jjhs)

Usage

  1. Clone this repo onto your computer:

     git clone https://github.com/ArcherFMY/AFNet.git

  2. Cd into AFNet/caffe and follow the official instructions to build Caffe. We provide our configuration file my-Makefile.config in the folder AFNet/caffe.

     The code has been tested successfully on Ubuntu 14.04 with CUDA 8.0 and OpenCV 3.1.0.

  3. Make caffe & matcaffe:

     make all -j
     make matcaffe -j

  4. Download the pretrained caffemodel from [google drive] or [baidu yun] (Fetch Code: sifm) and extract the .zip file under the root directory AFNet/.

  5. Put the test images in AFNet/test-Image/ and run test_AFNet.m to get the saliency maps. The results will be saved in AFNet/results/AFNet/.

Performance Preview

Quantitative comparisons (Table 2)

Qualitative comparisons (Fig. 5)

The scores are computed using this evaluation toolbox.
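For reference, two of the standard metrics such toolboxes report are the mean absolute error (MAE) and the F-measure. Below is a minimal NumPy sketch of both, not the toolbox's actual code: it assumes saliency maps and ground-truth masks normalized to [0, 1], and uses the adaptive threshold (twice the mean saliency) and beta^2 = 0.3 that are conventional in salient object detection evaluation.

```python
import numpy as np

def mae(sal, gt):
    # Mean Absolute Error between a saliency map and its ground truth,
    # both arrays scaled to [0, 1].
    return np.abs(sal.astype(np.float64) - gt.astype(np.float64)).mean()

def f_measure(sal, gt, beta2=0.3):
    # F-measure with an adaptive threshold (twice the mean saliency,
    # capped at 1.0); beta^2 = 0.3 is the usual choice in salient
    # object detection benchmarks.
    thresh = min(2.0 * sal.mean(), 1.0)
    pred = sal >= thresh
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / (pred.sum() + 1e-8)
    recall = tp / (gt.sum() + 1e-8)
    return (1.0 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
```

In practice these are averaged over all images in a benchmark; the maximum F-measure over a sweep of fixed thresholds is also commonly reported.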

Citation

@InProceedings{Feng_2019_CVPR,
   author = {Feng, Mengyang and Lu, Huchuan and Ding, Errui},
   title = {Attentive Feedback Network for Boundary-aware Salient Object Detection},
   booktitle = {CVPR},
   year = {2019}
}