# PGNet

<p align="center"> <img src="https://github.com/iCVTEAM/PGNet/blob/master/figure/PGNet.png?raw=true" width="85%"> </p>

Pyramid Grafting Network for One-Stage High Resolution Saliency Detection, CVPR 2022 ([arXiv 2204.05041](https://arxiv.org/abs/2204.05041))
## Abstract
Recent salient object detection (SOD) methods based on deep neural networks have achieved remarkable performance. However, most existing SOD models designed for low-resolution input perform poorly on high-resolution images due to the contradiction between sampling depth and receptive field size. To resolve this contradiction, we propose a novel one-stage framework called the Pyramid Grafting Network (PGNet), which uses transformer and CNN backbones to extract features from images of different resolutions independently and then grafts the features from the transformer branch onto the CNN branch. An attention-based Cross-Model Grafting Module (CMGM) is proposed to enable the CNN branch to combine broken detailed information more holistically, guided by different source features during the decoding process. Moreover, we design an Attention Guided Loss (AGL) to explicitly supervise the attention matrix generated by CMGM, helping the network better interact with the attention from different models. We also contribute a new Ultra High-Resolution Saliency Detection dataset, UHRSD, containing 5,920 images at 4K-8K resolution. To our knowledge, it is the largest dataset in both quantity and resolution for the high-resolution SOD task, and it can be used for training and testing in future research. Extensive experiments on UHRSD and widely-used SOD datasets demonstrate that our method achieves superior performance compared to state-of-the-art methods.
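To make the grafting idea concrete, below is a minimal PyTorch sketch of attention-based feature grafting in the spirit of CMGM. It is an illustration only, not the released module: the class name `GraftingBlock`, the single `nn.MultiheadAttention` layer, and the residual fusion are our assumptions.

```python
import torch
import torch.nn as nn

class GraftingBlock(nn.Module):
    """Illustrative cross-model grafting: CNN features query transformer
    features so global context is 'grafted' onto local CNN details.
    This is a sketch, not the official CMGM implementation."""

    def __init__(self, dim, heads=8):
        super().__init__()
        # (L, N, E) layout, compatible with PyTorch 1.7.1
        self.attn = nn.MultiheadAttention(dim, heads)
        self.norm = nn.LayerNorm(dim)

    def forward(self, cnn_feat, trans_feat):
        # cnn_feat, trans_feat: (B, C, H, W) feature maps from the two branches
        b, c, h, w = cnn_feat.shape
        q = cnn_feat.flatten(2).permute(2, 0, 1)     # (HW, B, C) queries from the CNN branch
        kv = trans_feat.flatten(2).permute(2, 0, 1)  # (HW, B, C) keys/values from the transformer branch
        grafted, attn = self.attn(q, kv, kv)         # cross-attention transfers global context
        out = self.norm(grafted + q)                 # residual keeps the CNN's local detail
        return out.permute(1, 2, 0).reshape(b, c, h, w), attn

# Dummy usage on toy feature maps
cmgm = GraftingBlock(dim=128)
f_cnn, f_swin = torch.randn(2, 128, 14, 14), torch.randn(2, 128, 14, 14)
fused, attn_matrix = cmgm(f_cnn, f_swin)
print(fused.shape, attn_matrix.shape)  # (2, 128, 14, 14), (2, 196, 196)
```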
## Ultra High-Resolution Saliency Detection Dataset
<p class="third"> <img src="https://github.com/iCVTEAM/PGNet/blob/master/figure/005960.gif?raw=true" width="30%"> <img src="https://github.com/iCVTEAM/PGNet/blob/master/figure/005937.gif?raw=true" width="30%"> <img src="https://github.com/iCVTEAM/PGNet/blob/master/figure/005871.gif?raw=true" width="30%"> </p>

Visual display of samples in the UHRSD dataset. Best viewed by clicking and zooming in.
To alleviate the lack of high-resolution datasets for SOD, we contribute the Ultra High-Resolution Saliency Detection (UHRSD) dataset, with a total of 5,920 images at 4K (3840×2160) or higher resolution: 4,932 images for training and 988 images for testing. All images were manually selected from websites (e.g., Flickr, Pixabay) with free copyright. The dataset is diverse in image scenes, with a balance of complex and simple salient objects of various sizes. To our knowledge, it is the largest dataset in both quantity and resolution for the high-resolution SOD task, and it can be used for both training and testing in future research.
- UHRSD (Ultra High-Resolution Saliency Detection) Dataset: we provide both the original 4K version and a resized 2K version for convenient download at [Google Drive](https://drive.google.com/drive/folders/1u3K65AaKh78P5qKXTsMjVI1SvBXNAPFk?usp=sharing).
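Once downloaded and unpacked into the layout shown under Usage below, a split can be iterated with a few lines of OpenCV. This is a hedged sketch: the relative root path and the `.png` mask extension are assumptions about the unpacked layout.

```python
import os
import cv2

# Hypothetical walk over one split laid out as <root>/image and <root>/mask
# (see the directory tree in the Usage section); the mask extension is assumed
root = "data/UHRSD+HRSOD"
for name in sorted(os.listdir(os.path.join(root, "image"))):
    img = cv2.imread(os.path.join(root, "image", name))
    mask = cv2.imread(os.path.join(root, "mask", os.path.splitext(name)[0] + ".png"),
                      cv2.IMREAD_GRAYSCALE)
    print(name, img.shape, mask.shape)  # 4K images are (2160, 3840, 3) or larger
```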
## Usage
### Requirements
- Python 3.8
- PyTorch 1.7.1
- OpenCV
- Numpy
- Apex
- Timm
### Directory
The directory should be like this:

```
-- src
-- model (saved models)
-- pre (pretrained models)
-- result (saliency maps)
-- data (train and test datasets)
   |-- DUTS-TR+HR
   |   |-- image
   |   |-- mask
   |-- UHRSD+HRSOD
   |   |-- image
   |   |-- mask
   ...
```
### Train
```
cd src
./train.sh
```
- We implement our method in PyTorch and conduct experiments on 2 NVIDIA 2080Ti GPUs.
- We adopt pre-trained ResNet-18 and Swin-B-224 as backbone networks; their weights are saved in the `pre` folder.
- We train our method under 3 settings: DUTS-TR, DUTS-TR+HRSOD and UHRSD_TR+HRSOD_TR.
- After training, the trained models will be saved in the `model` folder.
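The Attention Guided Loss mentioned in the abstract supervises the attention matrix produced by CMGM during training. Below is a hedged sketch of one plausible form: a reference attention matrix is built from the ground-truth mask (positions with the same saliency label should attend to each other) and the CMGM attention is pulled toward it with binary cross-entropy. The exact construction in the released code may differ.

```python
import torch
import torch.nn.functional as F

def attention_guided_loss(attn, mask, size):
    """Sketch of attention supervision: attn is a (B, HW, HW) attention
    matrix, mask a (B, 1, H, W) ground-truth saliency map, size the (h, w)
    of the attention map. Not the official AGL implementation."""
    m = F.interpolate(mask, size=size, mode="bilinear", align_corners=False)
    y = m.flatten(2).transpose(1, 2)  # (B, hw, 1) downsampled saliency
    # Reference attention: ~1 where two positions share a label, ~0 otherwise
    ref = y @ y.transpose(1, 2) + (1 - y) @ (1 - y).transpose(1, 2)
    return F.binary_cross_entropy(attn.clamp(0, 1), ref.clamp(0, 1))

# Dummy usage with an attention matrix like the one from the grafting sketch above
attn = torch.softmax(torch.randn(2, 196, 196), dim=-1)
gt = torch.rand(2, 1, 224, 224)
print(attention_guided_loss(attn, gt, size=(14, 14)))
```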
### Test
The trained models can be downloaded here: Google Drive
```
cd src
python test.py
```
- After testing, saliency maps will be saved in the `result` folder.
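For a single high-resolution image, inference along the lines of test.py might look like the sketch below. Everything repo-specific here is hypothetical: the `from net import PGNet` import, the checkpoint path, and the 1024×1024 input size are placeholders, not the repo's actual API.

```python
import cv2
import numpy as np
import torch

from net import PGNet  # hypothetical import; use the repo's actual model module

model = PGNet().cuda().eval()
model.load_state_dict(torch.load("../model/pgnet.pth"))  # placeholder checkpoint path

img = cv2.imread("demo.jpg")[:, :, ::-1].astype(np.float32) / 255.0  # BGR -> RGB
x = torch.from_numpy(cv2.resize(img, (1024, 1024)).transpose(2, 0, 1).copy())
with torch.no_grad():
    pred = torch.sigmoid(model(x.unsqueeze(0).cuda())).squeeze().cpu().numpy()
# Resize the saliency map back to the original resolution and save it
cv2.imwrite("demo_saliency.png",
            (cv2.resize(pred, img.shape[1::-1]) * 255).astype(np.uint8))
```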
## Saliency Maps
- Trained on DUTS-TR: Google Drive
- Trained on DUTS-TR+HRSOD: Google Drive
- Trained on UHRSD+HRSOD: Google Drive
## Citation
```
@inproceedings{xie2022pyramid,
  author    = {Xie, Chenxi and Xia, Changqun and Ma, Mingcan and Zhao, Zhirui and Chen, Xiaowu and Li, Jia},
  title     = {Pyramid Grafting Network for One-Stage High Resolution Saliency Detection},
  booktitle = {CVPR},
  year      = {2022}
}
```