
Dual-Stream Feature Collaboration Perception Network for Salient Object Detection in Remote Sensing Images

⭐ The code has been fully released ⭐

⭐ Our article ⭐

πŸ“– Introduction

<span style="font-size: 125%"> As a core technology of artificial intelligence, salient object detection (SOD) is an important approach to improve the analysis efficiency of remote sensing images by intelligently identifying key areas in images. However, existing methods that rely on a single strategy, convolution or Transformer, exhibit certain limitations in complex remote sensing scenarios. Therefore, we developed a Dual-Stream Feature Collaboration Perception Network (DCPNet) to enable the collaborative work and feature complementation of Transformer and CNN. First, we adopted a dual-branch feature extractor with strong local bias and long-range dependence characteristics to perform multi-scale feature extraction from remote sensing images. Then, we presented a Multi-path Complementary-aware Interaction Module (MCIM) to refine and fuse the feature representations of salient targets from the global and local branches, achieving fine-grained fusion and interactive alignment of dual-branch features. Finally, we proposed a Feature Weighting Balance Module (FWBM) to balance global and local features, preventing the model from overemphasizing global information at the expense of local details or from inadequately mining global cues due to excessive focus on local information. Extensive experiments on the EORSSD and ORSSD datasets demonstrated that DCPNet outperformed 19 current state-of-the-art methods. </span> <p align="center"> <img src="Images/Figure 1.png" width="90%"></p>
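The idea behind the FWBM can be illustrated with a small sketch: a gate in (0, 1) mixes the global (Transformer) branch and the local (CNN) branch element-wise, so neither branch dominates. This NumPy toy is purely conceptual; the function and variable names are ours and do not come from the released code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def balance_features(global_feat, local_feat, gate_logits):
    """Conceptual feature-weighting balance: a sigmoid gate blends the
    global and local feature maps element-wise (gate in (0, 1))."""
    gate = sigmoid(gate_logits)
    return gate * global_feat + (1.0 - gate) * local_feat

# Toy features: batch of 1, 4 channels, 8x8 spatial grid
g = np.ones((1, 4, 8, 8))   # stands in for the global branch
l = np.zeros((1, 4, 8, 8))  # stands in for the local branch
fused = balance_features(g, l, np.zeros((1, 4, 8, 8)))  # zero logits -> gate = 0.5
print(fused.mean())  # 0.5: an even blend of both branches
```

In the actual network the gate is learned from the features themselves; the zero-logit case above simply shows the balanced midpoint.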

If our code is helpful to you, please cite:

@Article{electronics13183755,
AUTHOR = {Li, Hongli and Chen, Xuhui and Mei, Liye and Yang, Wei},
TITLE = {Dual-Stream Feature Collaboration Perception Network for Salient Object Detection in Remote Sensing Images},
JOURNAL = {Electronics},
VOLUME = {13},
YEAR = {2024},
NUMBER = {18},
ARTICLE-NUMBER = {3755},
URL = {https://www.mdpi.com/2079-9292/13/18/3755},
ISSN = {2079-9292},
ABSTRACT = {As the core technology of artificial intelligence, salient object detection (SOD) is an important approach to improve the analysis efficiency of remote sensing images by intelligently identifying key areas in images. However, existing methods that rely on a single strategy, convolution or Transformer, exhibit certain limitations in complex remote sensing scenarios. Therefore, we developed a Dual-Stream Feature Collaboration Perception Network (DCPNet) to enable the collaborative work and feature complementation of Transformer and CNN. First, we adopted a dual-branch feature extractor with strong local bias and long-range dependence characteristics to perform multi-scale feature extraction from remote sensing images. Then, we presented a Multi-path Complementary-aware Interaction Module (MCIM) to refine and fuse the feature representations of salient targets from the global and local branches, achieving fine-grained fusion and interactive alignment of dual-branch features. Finally, we proposed a Feature Weighting Balance Module (FWBM) to balance global and local features, preventing the model from overemphasizing global information at the expense of local details or from inadequately mining global cues due to excessive focus on local information. Extensive experiments on the EORSSD and ORSSD datasets demonstrated that DCPNet outperformed the current 19 state-of-the-art methods.},
DOI = {10.3390/electronics13183755}
}

Saliency maps

We provide the saliency maps of our method and the compared methods on the two datasets (ORSSD and EORSSD) here.

Datasets

ORSSD: download here

EORSSD: download here

The structure of the dataset is as follows:

DCPNet
β”œβ”€β”€ EORSSD
β”‚   β”œβ”€β”€ train
β”‚   β”‚   β”œβ”€β”€ images
β”‚   β”‚   β”‚   β”œβ”€β”€ 0001.jpg
β”‚   β”‚   β”‚   β”œβ”€β”€ 0002.jpg
β”‚   β”‚   β”‚   β”œβ”€β”€ .....
β”‚   β”‚   β”œβ”€β”€ labels
β”‚   β”‚   β”‚   β”œβ”€β”€ 0001.png
β”‚   β”‚   β”‚   β”œβ”€β”€ 0002.png
β”‚   β”‚   β”‚   β”œβ”€β”€ .....
β”‚   β”‚   
β”‚   β”œβ”€β”€ test
β”‚   β”‚   β”œβ”€β”€ images
β”‚   β”‚   β”‚   β”œβ”€β”€ 0004.jpg
β”‚   β”‚   β”‚   β”œβ”€β”€ 0005.jpg
β”‚   β”‚   β”‚   β”œβ”€β”€ .....
β”‚   β”‚   β”œβ”€β”€ labels
β”‚   β”‚   β”‚   β”œβ”€β”€ 0004.png
β”‚   β”‚   β”‚   β”œβ”€β”€ 0005.png
β”‚   β”‚   β”‚   β”œβ”€β”€ .....

Train

  1. Download the dataset.

  2. Use data_aug.m to augment the training set.

  3. Download the backbone weights at pretrain and put them in './pretrain/'.

  4. Modify the dataset paths, then run train_MyNet.py.

Test

  1. Download the pre-trained model of our network at weight.
  2. Modify the paths of the pre-trained model and the datasets.
  3. Run test_MyNet.py.

Results

Main results on ORSSD dataset

| Methods | S<sub>Ξ±</sub> | MAE | adp E<sub>ΞΎ</sub> | mean E<sub>ΞΎ</sub> | max E<sub>ΞΎ</sub> | adp F<sub>Ξ²</sub> | mean F<sub>Ξ²</sub> | max F<sub>Ξ²</sub> |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SAMNet | 0.8761 | 0.0217 | 0.8656 | 0.8818 | 0.9478 | 0.6843 | 0.7531 | 0.8137 |
| HVPNet | 0.8610 | 0.0225 | 0.8471 | 0.8717 | 0.9320 | 0.6726 | 0.7396 | 0.7938 |
| DAFNet | 0.9191 | 0.0113 | 0.9360 | 0.9539 | 0.9771 | 0.7876 | 0.8511 | 0.8928 |
| MSCNet | 0.9227 | 0.0129 | 0.9584 | 0.9653 | 0.9754 | 0.8350 | 0.8676 | 0.8927 |
| MJRBM | 0.9204 | 0.0163 | 0.9328 | 0.9415 | 0.9623 | 0.8022 | 0.8566 | 0.8842 |
| PAFR | 0.8938 | 0.0211 | 0.9315 | 0.9268 | 0.9467 | 0.8025 | 0.8275 | 0.8438 |
| CorrNet | 0.9380 | 0.0098 | 0.9721 | 0.9746 | 0.9790 | 0.8875 | 0.9002 | 0.9129 |
| EMFINet | 0.9432 | 0.0095 | 0.9715 | 0.9726 | 0.9813 | 0.8797 | 0.9000 | 0.9155 |
| MCCNet | 0.9437 | 0.0087 | 0.9735 | 0.9758 | 0.9800 | 0.8957 | 0.9054 | 0.9155 |
| ACCoNet | 0.9437 | 0.0088 | 0.9721 | 0.9754 | 0.9796 | 0.8806 | 0.8971 | 0.9149 |
| AESINet | 0.9460 | 0.0086 | 0.9707 | 0.9747 | 0.9828 | 0.8666 | 0.8986 | 0.9183 |
| ERPNet | 0.9254 | 0.0135 | 0.9520 | 0.8566 | 0.9710 | 0.8356 | 0.8745 | 0.8974 |
| ADSTNet | 0.9379 | 0.0086 | 0.9785 | 0.9740 | 0.9807 | 0.8979 | 0.9042 | 0.9124 |
| SFANet | 0.9453 | 0.0070 | 0.9765 | 0.9789 | 0.9830 | 0.8984 | 0.9063 | 0.9192 |
| VST | 0.9365 | 0.0094 | 0.9466 | 0.9621 | 0.9810 | 0.8262 | 0.8817 | 0.9095 |
| ICON | 0.9256 | 0.0116 | 0.9554 | 0.9637 | 0.9704 | 0.8444 | 0.8671 | 0.8939 |
| HFANet | 0.9399 | 0.0092 | 0.9722 | 0.9712 | 0.9770 | 0.8819 | 0.8981 | 0.9112 |
| TLCKDNet | 0.9421 | 0.0082 | 0.9696 | 0.9710 | 0.9794 | 0.8719 | 0.8947 | 0.9114 |
| ASNet | 0.9441 | 0.0081 | 0.9795 | 0.9764 | 0.9803 | 0.8986 | 0.9072 | 0.9172 |
| Ours | 0.9498 | 0.0073 | 0.9809 | 0.9815 | 0.9855 | 0.9040 | 0.9124 | 0.9251 |

Main results on EORSSD dataset

| Methods | S<sub>Ξ±</sub> | MAE | adp E<sub>ΞΎ</sub> | mean E<sub>ΞΎ</sub> | max E<sub>ΞΎ</sub> | adp F<sub>Ξ²</sub> | mean F<sub>Ξ²</sub> | max F<sub>Ξ²</sub> |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SAMNet | 0.8622 | 0.0132 | 0.8284 | 0.8700 | 0.9421 | 0.6114 | 0.7214 | 0.7813 |
| HVPNet | 0.8734 | 0.0110 | 0.8270 | 0.8721 | 0.9482 | 0.6202 | 0.7377 | 0.8036 |
| DAFNet | 0.9166 | 0.0060 | 0.8443 | 0.9290 | 0.9859 | 0.6423 | 0.7842 | 0.8612 |
| MSCNet | 0.9071 | 0.0090 | 0.9329 | 0.9551 | 0.9689 | 0.7553 | 0.8151 | 0.8539 |
| MJRBM | 0.9197 | 0.0099 | 0.8897 | 0.9350 | 0.9646 | 0.7066 | 0.8239 | 0.8656 |
| PAFR | 0.8927 | 0.0119 | 0.8959 | 0.9210 | 0.9490 | 0.7123 | 0.7961 | 0.8260 |
| CorrNet | 0.9289 | 0.0083 | 0.9593 | 0.9646 | 0.9696 | 0.8311 | 0.8620 | 0.8778 |
| EMFINet | 0.9319 | 0.0075 | 0.9500 | 0.9598 | 0.9712 | 0.8036 | 0.8505 | 0.8742 |
| MCCNet | 0.9327 | 0.0066 | 0.9538 | 0.9685 | 0.9755 | 0.8137 | 0.8604 | 0.8904 |
| ACCoNet | 0.9290 | 0.0074 | 0.9450 | 0.9653 | 0.9727 | 0.7969 | 0.8552 | 0.8837 |
| AESINet | 0.9358 | 0.0079 | 0.9462 | 0.9636 | 0.9751 | 0.7923 | 0.8524 | 0.8838 |
| ERPNet | 0.9210 | 0.0089 | 0.9228 | 0.9401 | 0.9603 | 0.7554 | 0.8304 | 0.8632 |
| ADSTNet | 0.9311 | 0.0065 | 0.9681 | 0.9709 | 0.9769 | 0.8532 | 0.8716 | 0.8804 |
| SFANet | 0.9349 | 0.0058 | 0.9669 | 0.9726 | 0.9769 | 0.8492 | 0.8680 | 0.8833 |
| VST | 0.9208 | 0.0067 | 0.8941 | 0.9442 | 0.9743 | 0.7089 | 0.8263 | 0.8716 |
| ICON | 0.9185 | 0.0073 | 0.9497 | 0.9619 | 0.9687 | 0.8065 | 0.8371 | 0.8622 |
| HFANet | 0.9380 | 0.0070 | 0.9644 | 0.9679 | 0.9740 | 0.8365 | 0.8681 | 0.8876 |
| TLCKDNet | 0.9350 | 0.0056 | 0.9514 | 0.9661 | 0.9788 | 0.7969 | 0.8535 | 0.8843 |
| ASNet | 0.9345 | 0.0055 | 0.9748 | 0.9745 | 0.9783 | 0.8672 | 0.8770 | 0.8959 |
| Ours | 0.9408 | 0.0053 | 0.9772 | 0.9773 | 0.9817 | 0.8695 | 0.8812 | 0.8936 |

Visualization of results

<p align="center"> <img src="Images/Figure 4.png" width="95%"></p>

Evaluation Tool

You can use the evaluation tool (MATLAB version) to evaluate the above saliency maps.
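If MATLAB is not available, the MAE column in the tables above can be reproduced with a few lines of NumPy. This is only a simple stand-in for that one metric (the official MATLAB tool remains the reference implementation); it assumes 8-bit grayscale maps scaled to [0, 1] before comparison:

```python
import numpy as np

def mae(pred, gt):
    """Mean Absolute Error between a saliency map and its ground-truth
    mask, both given as uint8 arrays and normalized to [0, 1]."""
    pred = pred.astype(np.float64) / 255.0
    gt = gt.astype(np.float64) / 255.0
    return np.abs(pred - gt).mean()

# Toy check: a perfect prediction scores 0, a fully inverted one scores 1
gt = np.array([[0, 255], [255, 0]], dtype=np.uint8)
print(mae(gt, gt))        # 0.0
print(mae(255 - gt, gt))  # 1.0
```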

ORSI-SOD Summary

A reading list for salient object detection in optical remote sensing images is available here.

Acknowledgements

This code is built on PyTorch.

Contact

If you have any questions, please submit an issue on GitHub or contact me by email (cxh1638843923@gmail.com).