Home

Awesome

The official code for the paper "Learning to Evaluate Performance of Multi-modal Semantic Localization", TGRS 2022.

Author: Zhiqiang Yuan, Chongyang Li, Zhuoying Pan, et al.

<a href="https://github.com/xiaoyuan1996/retrievalSystem"><img src="https://travis-ci.org/Cadene/block.bootstrap.pytorch.svg?branch=master"/></a> <a href="https://pypi.org/project/mitype/"><img src="https://img.shields.io/pypi/v/mitype.svg"></a>

-------------------------------------------------------------------------------------

Welcome to :+1:<big>Fork and Star</big>:+1: this repo, and we'll let you know when we update.

-------------------------------------------------------------------------------------

We recently released SeLo v2 [link], which improves SeLo in both speed and accuracy.

-------------------------------------------------------------------------------------

Contents

-------------------------------------------------------------------------------------

INTRODUCTION

An official evaluation metric for semantic localization.

<img src="https://github.com/xiaoyuan1996/SemanticLocalizationMetrics/blob/master/figure/compare.jpg" width="700" alt="compare"/>

Fig.1. (a) Results of airplane detection. (b) Results of semantic localization with the query "white planes parked in the open space of the white airport". Compared with tasks such as detection, SeLo achieves semantic-level retrieval with only caption-level annotation during training, which can adapt to higher-level retrieval tasks.

<img src="https://github.com/xiaoyuan1996/SemanticLocalizationMetrics/blob/master/figure/demo.gif" width="700" alt="shown"/>

Fig.2. Visualization of SeLo with the query "the red rails where the grey train is located run through the residential area".

The semantic localization (SeLo) task refers to using cross-modal information such as text to quickly localize RS images at the semantic level [link]. The task implements semantic-level detection using only caption-level supervision. In our opinion, it is meaningful and interesting work, which unifies sub-tasks such as detection and segmentation.


Fig.3. Framework of Semantic Localization. After multi-scale segmentation of large RS images, we perform cross-modal similarity calculation between the query and multiple slices. The calculated regional probabilities are then aggregated by pixel-level averaging, which generates the SeLo map after further noise suppression.

We contribute test sets, evaluation metrics, and baselines for semantic localization, and provide a detailed demo of this evaluation framework. For any questions, please open a GitHub issue. Star and enjoy!
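For intuition, here is a minimal sketch of the pipeline in Fig. 3, assuming hypothetical `encode_image` / `encode_text` functions that return L2-normalized embeddings; the actual implementation lives in `predict/generate_selo.py`:

```python
import numpy as np

def selo_map(image, query, encode_image, encode_text, scales=(256, 512, 768)):
    """Sketch of SeLo map generation: multi-scale slicing, per-slice
    cross-modal similarity, then pixel-level averaging."""
    h, w = image.shape[:2]
    prob_sum = np.zeros((h, w))   # accumulated similarity per pixel
    count = np.zeros((h, w))      # number of slices covering each pixel
    t = encode_text(query)
    for step in scales:
        for y in range(0, h, step):
            for x in range(0, w, step):
                tile = image[y:y + step, x:x + step]
                sim = float(np.dot(encode_image(tile), t))  # cosine similarity
                prob_sum[y:y + step, x:x + step] += sim
                count[y:y + step, x:x + step] += 1
    selo = prob_sum / np.maximum(count, 1)  # pixel-level averaging
    # Further noise suppression (e.g. smoothing) would follow here.
    return (selo - selo.min()) / (selo.max() - selo.min() + 1e-8)
```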

-------------------------------------------------------------------------------------

DATASET AND METRICS

TESTDATA

We contribute a semantic localization testset (SLT) to provide systematic evaluation for the SeLo task. The images in SLT come from Google Earth, and Fig. 4 exhibits several samples from the testset. Every sample includes a large RS image with a size ranging from 3k × 2k to 10k × 10k pixels, a query sentence, and one or more corresponding semantic bounding boxes.

<img src="https://github.com/xiaoyuan1996/SemanticLocalizationMetrics/blob/master/figure/sample.jpg" width="700" alt="sample"/>

Fig.4. Four samples of Semantic Localization Testset. (a) Query: “ships without cargo floating on the black sea are docked in the port”. (b) Query: “a white airplane ready to take off on a grayblack runway”. (c) Query: “some cars are parked in a parking lot surrounded by green woods”. (d) Query: “the green football field is surrounded by a red track”.

TABLE I Quantitative Statistics of Semantic Localization Testset.

| Parameter | Value | Parameter | Value |
| --- | --- | --- | --- |
| Word Number | 160 | Caption Ave Length | 11.2 |
| Sample Number | 59 | Ave Resolution Ratio (m) | 0.3245 |
| Channel Number | 3 | Ave Region Number | 1.75 |
| Image Number | 22 | Ave Attention Ratio | 0.068 |

METRICS

We systematically model and study semantic localization in detail, and propose multiple discriminative evaluation metrics to quantify this task based on salient area proportion, attention shift distance, and discrete attention distance.

<img src="https://github.com/xiaoyuan1996/SemanticLocalizationMetrics/blob/master/figure/indicator.jpg" width="900" alt="indicator"/>

Fig.5. Three proposed evaluation metrics for semantic localization. (a) Rsu aims to calculate the attention ratio of the ground-truth area to the useless area. (b) Ras attempts to quantify the shift distance of the attention from the GT center. (c) Rda evaluates the discreteness of the generated attention from probability divergence distance and candidate attention number.

TABLE II Explanation of the indicators.

| Indicator | Range | Meaning |
| --- | --- | --- |
| Rsu ↑ | [0 ~ 1] | Calculates the salient area proportion |
| Ras ↓ | [0 ~ 1] | Makes the attention center close to the annotation center |
| Rda ↑ | [0 ~ 1] | Makes the attention focus on one point |
| Rmi ↑ | [0 ~ 1] | Calculates the mean indicator of the SeLo task |

<img src="https://github.com/xiaoyuan1996/SemanticLocalizationMetrics/blob/master/figure/indicator_verify.jpg" width="700" alt="indicator_verify"/>

Fig.6. Qualitative analysis of SeLo indicator validity. (a) Query: “eight large white oil storage tanks built on grey concrete floor”. (b) Query: “a white plane parked in a tawny clearing inside the airport”. (c) Query: “lots of white and black planes parked inside the grey and white airport”.
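To make the indicators concrete, here is a simplified sketch of the Rsu idea, assuming a SeLo map normalized to [0, 1] and a binary ground-truth mask; the exact formulation in the paper includes further calibration, so treat this only as an illustration:

```python
import numpy as np

def rsu_sketch(prob_map, gt_mask):
    """Simplified salient area proportion: how much of the total
    attention falls inside the annotated ground-truth region.

    prob_map: HxW SeLo map with values in [0, 1].
    gt_mask:  HxW binary mask of the annotated semantic region(s).
    """
    attn_in = prob_map[gt_mask > 0].sum()  # attention inside the GT area
    attn_all = prob_map.sum() + 1e-8       # total attention in the map
    return float(attn_in / attn_all)
```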

BASELINES

All experiments are carried out on an Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz and a single NVIDIA RTX 3090 GPU.

Comparison of SeLo Performance on Different Trainsets

| Trainset | ↑ Rsu | ↑ Rda | ↓ Ras | ↑ Rmi |
| --- | --- | --- | --- | --- |
| Sydney | 0.5844 | 0.5670 | 0.5026 | 0.5496 |
| UCM | 0.5821 | 0.4715 | 0.5277 | 0.5160 |
| RSITMD | 0.6920 | 0.6667 | 0.3323 | 0.6772 |
| RSICD | 0.6661 | 0.5773 | 0.3875 | 0.6251 |

Comparison of SeLo Performance on Different Scales

| ID | Scale-128 | Scale-256 | Scale-512 | Scale-768 | ↑ Rsu | ↑ Rda | ↓ Ras | ↑ Rmi | Time (m) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| s1 | ✓ | ✓ |  |  | 0.6389 | 0.6488 | 0.2878 | 0.6670 | 33.81 |
| s2 |  |  |  |  | 0.6839 | 0.6030 | 0.3326 | 0.6579 | 14.25 |
| s3 |  |  | ✓ | ✓ | 0.6897 | 0.6371 | 0.3933 | 0.6475 | 11.23 |
| s4 |  |  |  |  | 0.6682 | 0.7072 | 0.2694 | 0.6998 | 34.60 |
| s5 |  | ✓ | ✓ | ✓ | 0.6920 | 0.6667 | 0.3323 | 0.6772 | 16.92 |
| s6 |  |  |  |  | 0.6809 | 0.6884 | 0.3025 | 0.6886 | 36.28 |

Comparison of SeLo Performance on Different Retrieval Models

| Model | ↑ Rsu | ↑ Rda | ↓ Ras | ↑ Rmi | Time (m) |
| --- | --- | --- | --- | --- | --- |
| VSE++ | 0.6364 | 0.5829 | 0.4166 | 0.6045 | 15.61 |
| LW-MCR | 0.6698 | 0.6021 | 0.4335 | 0.6167 | 15.47 |
| SCAN | 0.6421 | 0.6132 | 0.3871 | 0.6247 | 16.32 |
| CAMP | 0.6819 | 0.6314 | 0.3912 | 0.6437 | 18.24 |
| AMFMN | 0.6920 | 0.6667 | 0.3323 | 0.6772 | 16.92 |

Analysis of Time Consumption

Here Cut, Sim, Gnt, and Flt denote the image slicing, similarity calculation, SeLo map generation, and filtering (noise suppression) stages, respectively.

| Scale (128, 256) | Cut | Sim | Gnt | Flt | Total |
| --- | --- | --- | --- | --- | --- |
| Time (m) | 2.85 | 20.60 | 7.40 | 0.73 | 33.81 |
| Rate (%) | 8.42 | 60.94 | 21.88 | 2.16 | - |

| Scale (512, 768) | Cut | Sim | Gnt | Flt | Total |
| --- | --- | --- | --- | --- | --- |
| Time (m) | 0.46 | 1.17 | 6.96 | 0.67 | 11.23 |
| Rate (%) | 4.06 | 10.42 | 61.98 | 5.97 | - |

| Scale (256, 512, 768) | Cut | Sim | Gnt | Flt | Total |
| --- | --- | --- | --- | --- | --- |
| Time (m) | 0.93 | 5.72 | 7.38 | 0.74 | 16.92 |
| Rate (%) | 5.52 | 33.82 | 43.60 | 4.37 | - |

IMPLEMENTATION

ENVIRONMENT

1. Clone our project and install the requirements; make sure the code path contains only English characters:

   $ apt-get install python3
   $ git clone git@github.com:xiaoyuan1996/SemanticLocalizationMetrics.git
   $ cd SemanticLocalizationMetrics
   $ pip install -r requirements.txt

2. Prepare checkpoints and test images.

3. Download the SkipThought files to /data from seq2vec (Password: NIST), or to another path, in which case you should change option['model']['seq2vec']['dir_st'].
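A hypothetical excerpt of the corresponding options yaml (only the seq2vec path is implied above; the surrounding keys may differ in your config):

```yaml
model:
  seq2vec:
    dir_st: /data   # point this at wherever you placed the SkipThought files
```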

4. Check the environment:

    $ cd predict
    $ python model_encoder.py
    
    visual_vector: (512,)
    text_vector: (512,)
    Encoder test successful!
    Calc sim successful!
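The similarity between the two 512-d vectors above is typically a cosine score; a minimal, self-contained sketch (with random stand-in vectors, not the repo's actual encoders):

```python
import numpy as np

# Random stand-ins for the 512-d visual and text embeddings printed above.
visual_vector = np.random.randn(512)
text_vector = np.random.randn(512)

def cosine_sim(a, b):
    """Cosine similarity, the usual cross-modal matching score."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

print("similarity:", cosine_sim(visual_vector, text_vector))
```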

RUN THE DEMO

Run the following commands, and the generated SeLo maps will be saved in cache/.

   $ cd predict
   $ nohup python generate_selo.py --cache_path cache/RSITMD_AMFMN &
   $ tail -f cache/RSITMD_AMFMN/log.txt

    2022-05-05 22:01:58,180 - __main__ - INFO - Processing 31/59: 20.jpg
    2022-05-05 22:01:58,180 - __main__ - INFO - Corresponding text: lots of white and black planes parked inside the grey and white airport.
    
    2022-05-05 22:01:59,518 - __main__ - INFO - img size:10000x10001
    2022-05-05 22:01:59,518 - __main__ - INFO - Start split images with step 256
    2022-05-05 22:02:02,657 - __main__ - INFO - Start split images with step 512
    2022-05-05 22:02:04,077 - __main__ - INFO - Start split images with step 768
    2022-05-05 22:02:04,818 - __main__ - INFO - Image ../test_data/imgs/20.jpg has been split successfully.
    2022-05-05 22:02:04,819 - __main__ - INFO - Start calculate similarities ...
    2022-05-05 22:02:48,182 - __main__ - INFO - Calculate similarities in 43.36335849761963s
    2022-05-05 22:02:48,182 - __main__ - INFO - Start generate heatmap ...
    2022-05-05 22:03:40,673 - __main__ - INFO - Generate finished, start optim ...
    2022-05-05 22:03:45,500 - __main__ - INFO - Generate heatmap in 57.31790471076965s
    2022-05-05 22:03:45,500 - __main__ - INFO - Saving heatmap in cache/heatmap_31.jpg ...
    2022-05-05 22:03:45,501 - __main__ - INFO - Saving heatmap in cache/addmap_31.jpg ...
    2022-05-05 22:03:45,501 - __main__ - INFO - Saving heatmap in cache/probmap_31.jpg ...
    2022-05-05 22:03:48,540 - __main__ - INFO - Saved ok.
    2022-05-05 22:03:59,562 - root - INFO - Eval cache/probmap_31.jpg
    2022-05-05 22:03:59,562 - root - INFO - +++++++ Calc the SLM METRICS +++++++
    2022-05-05 22:03:59,562 - root - INFO - ++++     ↑ Rsu [0 ~ 1]:0.9281   ++++
    2022-05-05 22:03:59,562 - root - INFO - ++++     ↑ Rda [0 ~ 1]:0.4689   ++++
    2022-05-05 22:03:59,562 - root - INFO - ++++     ↓ Ras [0 ~ 1]:0.0633   ++++
    2022-05-05 22:03:59,562 - root - INFO - ++++     ↑ Rmi [0 ~ 1]:0.8163   ++++
    2022-05-05 22:03:59,562 - root - INFO - ++++++++++++++++++++++++++++++++++++
    ...  

   $ ls cache/RSITMD_AMFMN

CUSTOMIZE MODEL

  1. Put the pretrained ckpt file in checkpoints/.
  2. Add your own model to layers/ and the corresponding config yaml to options/.
  3. Change model_init.model_init to your own model.
  4. Add the corresponding class of EncoderSpecModel to model_encoder.py (a hypothetical skeleton follows below).
  5. Run:

   $ python generate_selo.py --yaml_path option/xxx.yaml --cache_path cache/xxx
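A hypothetical skeleton for step 4; the class name comes from the docs above, but the method names and signatures below are placeholders, so mirror the existing encoder classes in model_encoder.py:

```python
import numpy as np

class EncoderSpecModel:
    """Placeholder encoder wrapper; adapt to the interface in model_encoder.py."""

    def __init__(self, model):
        self.model = model  # your pretrained cross-modal model

    def encode_image(self, image) -> np.ndarray:
        """Return an L2-normalized visual embedding, e.g. of shape (512,)."""
        raise NotImplementedError

    def encode_text(self, query: str) -> np.ndarray:
        """Return an L2-normalized text embedding of the same dimension."""
        raise NotImplementedError
```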

EPILOGUE

Our attitude towards the semantic localization task is positive and optimistic: it realizes detection at the semantic level with only caption-level annotation. We sincerely hope that this project will facilitate the development of semantic localization. We welcome researchers to look into this direction, which offers a possible path to refined semantic-level object detection.


Fig.7. Combining SeLo with other tasks. The top of the figure shows the detection results after fusing the SeLo map with the query "two parallel green playgrounds". The bottom of the figure shows the road extraction results after fusing the SeLo map with the query "the red rails where the grey train is located run through the residential area". (a) Source images. (b) Results of specific tasks. (c) Results of specific SeLo maps. (d) Fusion results of specific tasks and SeLo maps.
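A minimal sketch of one plausible fusion step for Fig. 7, assuming a binary task output and a SeLo map on the same pixel grid (the exact fusion used in the figure is not specified here):

```python
import numpy as np

def fuse_with_selo(task_mask, selo_map, threshold=0.5):
    """Suppress task outputs that fall outside the semantically relevant area.

    task_mask: HxW binary output of a specific task (e.g. detection, roads).
    selo_map:  HxW SeLo probability map in [0, 1] for the same image.
    """
    relevant = selo_map >= threshold                   # attended regions only
    return (task_mask.astype(bool) & relevant).astype(np.uint8)
```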

CITATION

Z. Yuan et al., "Learning to Evaluate Performance of Multi-modal Semantic Localization," in IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2022.3207171.

OTHER CITATION

Z. Yuan et al., "Exploring a Fine-Grained Multiscale Method for Cross-Modal Remote Sensing Image Retrieval," in IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2021.3078451.

Z. Yuan et al., "A Lightweight Multi-scale Crossmodal Text-Image Retrieval Method In Remote Sensing," in IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2021.3124252.

Z. Yuan et al., "Remote Sensing Cross-Modal Text-Image Retrieval Based on Global and Local Information," in IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2022.3163706.