# GeleNet
This project provides the code and results for 'Salient Object Detection in Optical Remote Sensing Images Driven by Transformer', IEEE TIP, 2023. The paper is available on IEEE Xplore and arXiv.
# Network Architecture
<div align=center> <img src="https://github.com/MathLee/GeleNet/blob/main/images/GeleNet.png"> </div>

# Requirements
Python 3.8 + PyTorch 1.9.0
# Saliency maps
We provide saliency maps of our GeleNet on three datasets in './GeleNet_saliencymap_PVT.zip' (PVT-v2-b2 backbone) and './GeleNet_saliencymap_SwinT.zip' (Swin Transformer backbone).
We also provide saliency maps of all compared methods (code: 2892) on three datasets.
# Training
- We use data_aug.m for data augmentation (see the Python sketch after this list).
- Download pvt_v2_b2.pth (code: sxiq), and put it in './model/'.
- Modify the dataset paths, then run train_GeleNet.py.

Note: our main model is defined in './model/GeleNet_models.py' (PVT-v2-b2 backbone).
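For readers without MATLAB, below is a minimal Python sketch of a comparable offline augmentation (horizontal flips plus 90/180/270-degree rotations, applied identically to each image and its ground-truth mask). The directory layout, file naming, and exact transform set are assumptions, not a re-implementation of data_aug.m, which remains the authoritative script.

```python
# Hypothetical Python equivalent of an offline flip/rotation augmentation.
# Directory names and GT file naming are assumptions; apply the identical
# transform to each image and its mask so pairs stay aligned.
import os
from PIL import Image

IMG_DIR, GT_DIR = 'train/images', 'train/GT'            # assumed layout
OUT_IMG, OUT_GT = 'train_aug/images', 'train_aug/GT'
os.makedirs(OUT_IMG, exist_ok=True)
os.makedirs(OUT_GT, exist_ok=True)

TRANSFORMS = {
    'flip':   lambda im: im.transpose(Image.FLIP_LEFT_RIGHT),
    'rot90':  lambda im: im.rotate(90, expand=True),
    'rot180': lambda im: im.rotate(180, expand=True),
    'rot270': lambda im: im.rotate(270, expand=True),
}

for name in os.listdir(IMG_DIR):
    stem, ext = os.path.splitext(name)
    img = Image.open(os.path.join(IMG_DIR, name))
    gt = Image.open(os.path.join(GT_DIR, stem + '.png'))  # assumed GT naming
    for tag, t in TRANSFORMS.items():
        t(img).save(os.path.join(OUT_IMG, f'{stem}_{tag}{ext}'))
        t(gt).save(os.path.join(OUT_GT, f'{stem}_{tag}.png'))
```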
# Pre-trained model and testing
- Download the pre-trained models (PVT-v2-b2 backbone) on ORSSD (code: qga2), EORSSD (code: ahm7), and ORSI-4199 (code: 5h3u), and put them in './models/'.
- Modify the paths of the pre-trained models and datasets.
- Run test_GeleNet.py (a generic inference sketch follows this list).
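For reference, a generic PyTorch inference loop is sketched below. The GeleNet constructor, the 352x352 input resolution, the checkpoint file name, and the single-map output are assumptions based on typical SOD repositories, not the confirmed interface; test_GeleNet.py is authoritative.

```python
# Illustrative inference loop; the class name, constructor, 352x352 input
# size, and output format are assumptions -- defer to test_GeleNet.py.
import torch
import numpy as np
from PIL import Image
from torchvision import transforms
from model.GeleNet_models import GeleNet   # module path from this repo

model = GeleNet()
model.load_state_dict(torch.load('./models/GeleNet_ORSSD.pth',  # assumed name
                                 map_location='cpu'))
model.eval()

to_tensor = transforms.Compose([
    transforms.Resize((352, 352)),         # assumed training resolution
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = Image.open('example.jpg').convert('RGB')
with torch.no_grad():
    pred = model(to_tensor(img).unsqueeze(0))
    if isinstance(pred, (list, tuple)):    # many SOD nets return side outputs
        pred = pred[0]
    sal = torch.sigmoid(pred).squeeze().cpu().numpy()
Image.fromarray((sal * 255).astype(np.uint8)).save('example_sal.png')
```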
# Evaluation Tool
You can use the evaluation tool (MATLAB version) to evaluate the above saliency maps.
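For a quick sanity check without MATLAB, mean absolute error (MAE), one of the standard ORSI-SOD metrics, takes only a few lines of Python. The directory names below are placeholders; the linked MATLAB tool remains the reference implementation for the full metric suite (S-measure, F-measure, E-measure, MAE).

```python
# Quick MAE check in Python; directory names are placeholders, and the
# MATLAB tool remains the reference for the full metric suite.
import os
import numpy as np
from PIL import Image

def mae(sal_dir, gt_dir):
    """Mean absolute error between predicted saliency maps and GT masks."""
    errors = []
    for name in os.listdir(gt_dir):
        gt = np.asarray(Image.open(os.path.join(gt_dir, name)).convert('L'),
                        dtype=np.float64) / 255.0
        sal = Image.open(os.path.join(sal_dir, name)).convert('L')
        # Resize prediction to GT resolution; PIL expects (width, height).
        sal = np.asarray(sal.resize(gt.shape[::-1]), dtype=np.float64) / 255.0
        errors.append(np.abs(sal - gt).mean())
    return float(np.mean(errors))

print(mae('saliency_maps/ORSSD', 'datasets/ORSSD/GT'))
```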
# ORSI-SOD_Summary
# Citation

```
@ARTICLE{Li_2023_GeleNet,
  author  = {Gongyang Li and Zhen Bai and Zhi Liu and Xinpeng Zhang and Haibin Ling},
  title   = {Salient Object Detection in Optical Remote Sensing Images Driven by Transformer},
  journal = {IEEE Transactions on Image Processing},
  volume  = {32},
  pages   = {5257-5269},
  year    = {2023},
}
```
If you encounter any problems with the code or want to report bugs, please contact me at lllmiemie@163.com or ligongyang@shu.edu.cn.