Reducing Semantic Confusion: Scene-aware Aggregation Network for Remote Sensing Cross-modal Retrieval (ICMR'23 Oral)

By Jiancheng Pan, Qing Ma, Cong Bai.

This repo is the official implementation of "Reducing Semantic Confusion: Scene-aware Aggregation Network for Remote Sensing Cross-modal Retrieval" (ICMR'23 Oral).

For more RSITR (remote sensing image-text retrieval) methods, see: https://github.com/jaychempan/Awesome-RSITR

ā„¹ļø Introduction

Remote sensing cross-modal retrieval has recently attracted considerable attention from researchers. However, the unique nature of remote sensing images leads to many semantic confusion zones in the semantic space, which greatly affects retrieval performance. We propose a novel scene-aware aggregation network (SWAN) to reduce semantic confusion by improving scene perception capability. For visual representation, a visual multiscale fusion module (VMSF) is presented to fuse visual features at different scales as the visual backbone, and a scene fine-grained sensing module (SFGS) is proposed to establish associations among salient features at different granularities. The visual information generated by these two modules forms a scene-aware visual aggregation representation. For textual representation, a textual coarse-grained enhancement module (TCGE) is designed to enhance the semantics of the text and to align it with the visual information. Furthermore, since the diversity and differentiation of remote sensing scenes weaken scene understanding, a new metric, scene recall, is proposed to measure scene perception by evaluating scene-level retrieval performance; it also verifies the effectiveness of our approach in reducing semantic confusion. Through performance comparisons, ablation studies, and visualization analysis, we validate the effectiveness and superiority of our approach on two datasets, RSICD and RSITMD.

[Figure: overall pipeline of SWAN]
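
Scene recall evaluates retrieval at the scene level rather than the instance level. The exact computation follows the paper; as a rough, hypothetical illustration of the idea (not the repo's implementation), the snippet below counts a text query as a hit when any of its top-K retrieved images shares the query's scene category:

# Hypothetical sketch of a scene-level recall@K; NOT the official metric code.
# `sims` is a (num_texts, num_images) similarity matrix; `txt_scenes` and
# `img_scenes` hold one scene-category label per text query / image.
import numpy as np

def scene_recall_at_k(sims, txt_scenes, img_scenes, k=5):
    topk = np.argsort(-sims, axis=1)[:, :k]   # indices of the k most similar images
    hits = sum(np.any(img_scenes[row] == txt_scenes[q]) for q, row in enumerate(topk))
    return hits / sims.shape[0]

# Toy usage with random similarities and 3 scene categories.
rng = np.random.default_rng(0)
print(scene_recall_at_k(rng.random((10, 40)),
                        rng.integers(0, 3, 10),
                        rng.integers(0, 3, 40), k=5))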

šŸŽÆ Implementation

Project Files

Notice: download the ResNet-50 weights pre-trained on the AID dataset from [Baidu Disk] and place them in the layers/ directory, as shown in the tree below (a quick sanity check of the downloaded checkpoint is sketched after the tree).

.
ā”œā”€ā”€ checkpoint
ā”œā”€ā”€ data
│   ā”œā”€ā”€ rsicd_precomp
│   ā””ā”€ā”€ rsitmd_precomp
ā”œā”€ā”€ data.py
ā”œā”€ā”€ engine.py
ā”œā”€ā”€ fix_data
│   ā”œā”€ā”€ rsicd_precomp
│   ā””ā”€ā”€ rsitmd_precomp
ā”œā”€ā”€ layers
│   ā”œā”€ā”€ aid_28-rsp-resnet-50-ckpt.pth
│   ā”œā”€ā”€ resnet50-19c8e357.pth
│   ā”œā”€ā”€ resnet.py
│   ā””ā”€ā”€ SWAN.py
ā”œā”€ā”€ main.py
ā”œā”€ā”€ mytools.py
ā”œā”€ā”€ README.md
ā”œā”€ā”€ save_img_text_emb.py
ā”œā”€ā”€ test_ave.py
ā”œā”€ā”€ test_local_feature.py
ā”œā”€ā”€ test_single.py
ā”œā”€ā”€ train.py
ā”œā”€ā”€ utils.py
ā”œā”€ā”€ vocab
│   ā”œā”€ā”€ rsicd_splits_vocab.json
│   ā””ā”€ā”€ rsitmd_splits_vocab.json
ā””ā”€ā”€ vocab.py
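
The AID pre-trained checkpoint in layers/ can be sanity-checked after downloading. This is a convenience snippet, not part of the repo, and it assumes the file is a standard torch.save() artifact:

# Illustrative sanity check for the downloaded checkpoint (not repo code).
import torch

ckpt = torch.load('layers/aid_28-rsp-resnet-50-ckpt.pth', map_location='cpu')
# Print a few top-level keys (or the object type) to confirm the file loads.
print(list(ckpt.keys())[:10] if isinstance(ckpt, dict) else type(ckpt))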

Environments

python==3.8.5
torch==1.11.0
torchvision==0.12.0
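
To confirm your environment matches the versions above, a quick check (convenience snippet, not part of the repo):

# Optional: verify the installed versions against the list above.
import sys
import torch
import torchvision

print('python     :', sys.version.split()[0])    # expected 3.8.5
print('torch      :', torch.__version__)         # expected 1.11.0
print('torchvision:', torchvision.__version__)   # expected 0.12.0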

Train

# RSITMD Dataset
python train.py -g 0 -m SWAN -e SWAN --data_name rsitmd  -p checkpoint/ --epochs 50 -kf 1
# RSICD Dataset
python train.py -g 0 -m SWAN -e SWAN --data_name rsicd  -p checkpoint/ --epochs 50 -kf 1

Test

python test_single.py --resume 'path to model checkpoint'

šŸŒ Datasets

All experiments are based on the RSITMD and RSICD datasets, which can also be downloaded from [Baidu Disk].

šŸ“Š Results

[Figure: retrieval results on RSITMD and RSICD]

šŸ™ Acknowledgement

šŸ“ Citation

If you find this code useful in your work or project, please cite our paper:

@inproceedings{pan2023reducing,
  title={Reducing Semantic Confusion: Scene-aware Aggregation Network for Remote Sensing Cross-modal Retrieval},
  author={Pan, Jiancheng and Ma, Qing and Bai, Cong},
  booktitle={Proceedings of the 2023 ACM International Conference on Multimedia Retrieval},
  pages={398--406},
  year={2023}
}