<h1 align="center"> LHRS-Bot: Empowering Remote Sensing with VGI-Enhanced Large Multimodal Language Model </h1> <h5 align="center"><em>Dilxat Muhtar*, Zhenshi Li*, Feng Gu, Xueliang Zhang, and Pengfeng Xiao</em> <br/>(*Equal Contribution)</h5> <figure> <div align="center"> <img src=https://pumpkintypora.oss-cn-shanghai.aliyuncs.com/lhrsbot.png width="20%"> </div> </figure> <p align="center"> <a href="#news">News</a> | <a href="#introduction">Introduction</a> | <a href="#Preparation">Preparation</a> | <a href="#Training">Training</a> | <a href="#Demo">Demo</a> | <a href="#acknowledgement">Acknowledgement</a> | <a href="#statement">Statement</a> </p>

## News

## Introduction

We are excited to introduce LHRS-Bot, a multimodal large language model (MLLM) that leverages globally available volunteered geographic information (VGI) and remote sensing (RS) images. LHRS-Bot demonstrates a deep understanding of RS imagery and is capable of sophisticated reasoning within the RS domain. In this repository, we will release our code, training framework, model weights, and dataset!

<figure> <div align="center"> <img src=assets/performance.png width="50%"> </div> <div align="center"> <img src=assets/vis_example.png width="100%"> </div> </figure>

## Preparation

### Installation

  1. Clone this repository.

    git clone git@github.com:NJU-LHRS/LHRS-Bot.git
    cd LHRS-Bot
    
  2. Create a new virtual environment.

    conda create -n lhrs python=3.10
    conda activate lhrs
    
  3. Install dependencies and our package (an optional sanity check is sketched after these steps).

    pip install -e .
    

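As a quick, optional sanity check, you can confirm that the new environment sees PyTorch and a GPU before moving on. This is a minimal sketch, assuming PyTorch is installed as a dependency of `pip install -e .`:

    # Optional sanity check (assumes PyTorch is pulled in by `pip install -e .`)
    import torch

    print(f"PyTorch version: {torch.__version__}")
    print(f"CUDA available: {torch.cuda.is_available()}")
    if torch.cuda.is_available():
        # Report the first visible GPU so you know which device will be used
        print(f"GPU: {torch.cuda.get_device_name(0)}")
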
### Checkpoints

## Training

## Demo

## Acknowledgement

## Statement