# EarthMarker: A Visual Prompting MLLM for Region-level and Point-level Remote Sensing Imagery Comprehension
Official repository for EarthMarker.
Authors: Wei Zhang*, Miaoxin Cai*, Tong Zhang, Yin Zhuang, and Xuerui Mao
\* These authors contributed equally to this work.
## :mega: News
- The dataset, model, code, and demo are coming soon! :rocket:
- [2024.07.19]: The EarthMarker paper is released on [arXiv](https://arxiv.org/abs/2407.13596). :fire::fire:
## :sparkles: Introduction
EarthMarker is the first visual prompting MLLM proposed for remote sensing (RS). It can interpret RS imagery in multi-turn conversations at different granularities, including the image, region, and point levels, catering to the fine-grained interpretation needs of RS imagery. EarthMarker is capable of various RS visual tasks, including scene classification, referring object classification, captioning, and relationship analysis, which helps make informed decisions in real-world applications.
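The official code and visual prompting interface are not yet released, so the sketch below is only a rough illustration of the general idea: a region-level or point-level visual prompt can be formed by drawing a box or point marker onto the RS image before pairing it with a text query. The file names, function, and query wording are illustrative assumptions, not the EarthMarker API.

```python
# Hypothetical sketch (the official EarthMarker code is not yet released):
# overlay a box or point visual prompt on a remote sensing image, then pair
# the prompted image with a region-level text query for a multimodal model.
from PIL import Image, ImageDraw

def draw_visual_prompt(image_path, box=None, point=None, color="red"):
    """Overlay a bounding box and/or a point marker on an RS image."""
    image = Image.open(image_path).convert("RGB")
    drawer = ImageDraw.Draw(image)
    if box is not None:            # box = (x1, y1, x2, y2) in pixel coordinates
        drawer.rectangle(box, outline=color, width=3)
    if point is not None:          # point = (x, y) in pixel coordinates
        x, y = point
        r = 5
        drawer.ellipse((x - r, y - r, x + r, y + r), fill=color)
    return image

# Example: mark a region of interest and ask a region-level question.
# "rs_scene.png" and the query format are placeholders for illustration only.
prompted = draw_visual_prompt("rs_scene.png", box=(120, 80, 260, 200))
prompted.save("rs_scene_prompted.png")
query = "What category does the object inside the red box belong to?"
# The prompted image and `query` would then be fed to the multimodal model.
```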
<div align="center">
  <img src="VP-example.png">
</div>

## :bookmark: Citation
```bibtex
@article{zhang2024earthmarker,
  title={EarthMarker: A Visual Prompt Learning Framework for Region-level and Point-level Remote Sensing Imagery Comprehension},
  author={Zhang, Wei and Cai, Miaoxin and Zhang, Tong and Zhuang, Yin and Mao, Xuerui},
  journal={arXiv preprint arXiv:2407.13596},
  year={2024}
}
```
## :memo: Acknowledgment
This work benefits from [LLaMA](https://github.com/meta-llama/llama). Thanks for their wonderful work.