EarthMarker: A Visual Prompting Multi-modal Large Language Model for Remote Sensing

Official repository for EarthMarker.

Authors: Wei Zhang*, Miaoxin Cai*, Tong Zhang, Yin Zhuang, and Xuerui Mao

:mega: News

:sparkles: Introduction

We propose EarthMarker, the first visual prompting MLLM for remote sensing. EarthMarker can interpret RS imagery in multi-turn conversations at multiple granularities, including the image, region, and point levels, meeting the fine-grained interpretation needs of RS imagery. EarthMarker handles a variety of RS visual tasks, including scene classification, referring object classification, captioning, and relationship analysis, supporting informed decision-making in real-world applications.

<div align="center"> <img src="VP-example.png"> </div>

:bookmark: Citation

```bibtex
@article{zhang2024earthmarker,
  title={EarthMarker: A Visual Prompting Multi-modal Large Language Model for Remote Sensing},
  author={Zhang, Wei and Cai, Miaoxin and Zhang, Tong and Li, Jun and Zhuang, Yin and Mao, Xuerui},
  journal={arXiv preprint arXiv:2407.13596},
  year={2024}
}
```

:memo: Acknowledgment

This work benefits from LLaMA. Thanks for their wonderful work.