<div align="center">

RemoteCLIP🛰️: A Vision Language Foundation Model for Remote Sensing

Fan Liu (刘凡)✉ * <img src="assets/hhu_logo.png" alt="Logo" width="15">,     Delong Chen (陈德龙)✉ * <img src="assets/hkust_logo.png" alt="Logo" width="10">,     Zhangqingyun Guan (管张青云) <img src="assets/hhu_logo.png" alt="Logo" width="15">

Xiaocong Zhou (周晓聪) <img src="assets/hhu_logo.png" alt="Logo" width="15">,     Jiale Zhu (朱佳乐) <img src="assets/hhu_logo.png" alt="Logo" width="15">,    

Qiaolin Ye (业巧林) <img src="assets/nfu_logo.png" alt="Logo" width="15">,     Liyong Fu (符利勇) <img src="assets/caf_logo.jpg" alt="Logo" width="15">,     Jun Zhou (周峻) <img src="assets/griffith_logo.png" alt="Logo" width="15">

<img src="assets/hhu_logo_text.png" alt="Logo" width="100">         <img src="assets/hkust_logo_text.png" alt="Logo" width="100">         <img src="assets/nfu_logo_text.jpg" alt="Logo" width="50">         <img src="assets/caf_logo.jpg" alt="Logo" width="40">         <img src="assets/griffith_logo_text.png" alt="Logo" width="90">

* Equal Contribution

</div>

News

Introduction

Welcome to the official repository of our paper "RemoteCLIP: A Vision Language Foundation Model for Remote Sensing"!

General-purpose foundation models have become increasingly important in the field of artificial intelligence. While self-supervised learning (SSL) and Masked Image Modeling (MIM) have led to promising results in building such foundation models for remote sensing, these models primarily learn low-level features, require annotated data for fine-tuning, and, lacking language understanding, cannot be applied to retrieval or zero-shot tasks.

In response to these limitations, we propose RemoteCLIP, the first vision-language foundation model for remote sensing that aims to learn robust visual features with rich semantics, as well as aligned text embeddings for seamless downstream application. To address the scarcity of pre-training data, we scale up the data by converting heterogeneous annotations with Box-to-Caption (B2C) and Mask-to-Box (M2B) conversion and by further incorporating UAV imagery, resulting in a 12× larger pre-training dataset.
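To make the idea concrete, the snippet below is a minimal, illustrative sketch of a Box-to-Caption style rule that turns object detection annotations into a caption by describing object counts. The caption template, field names, and helper function are our own assumptions for illustration, not the exact rules used in the paper.

```python
from collections import Counter

def box_to_caption(annotations):
    """Toy Box-to-Caption (B2C) rule: summarize detection boxes as a caption.

    `annotations` is assumed to be a list of dicts with a "category" field,
    e.g. [{"category": "airplane", "bbox": [x, y, w, h]}, ...].
    The caption wording below is purely illustrative.
    """
    counts = Counter(ann["category"] for ann in annotations)
    phrases = [
        f"{n} {cls}s" if n > 1 else f"one {cls}"
        for cls, n in counts.items()
    ]
    return "There are " + " and ".join(phrases) + " in the scene."

# Example: two airplanes and one vehicle
print(box_to_caption([
    {"category": "airplane", "bbox": [10, 20, 50, 40]},
    {"category": "airplane", "bbox": [80, 25, 48, 42]},
    {"category": "vehicle", "bbox": [5, 90, 20, 12]},
]))
```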

RemoteCLIP can be applied to a variety of downstream tasks, including zero-shot image classification, linear probing, k-NN classification, few-shot classification, image-text retrieval, and object counting. Evaluations on 16 datasets, including a newly introduced RemoteCount benchmark to test the object counting ability, show that RemoteCLIP consistently outperforms baseline foundation models across different model scales.

Impressively, RemoteCLIP outperforms the previous SoTA by 9.14% mean recall on the RSITMD dataset and by 8.92% on the RSICD dataset. For zero-shot classification, our RemoteCLIP outperforms the CLIP baseline by up to 6.39% average accuracy on 12 downstream datasets.

Load RemoteCLIP

RemoteCLIP is trained with the ITRA codebase, and we have converted the pretrained checkpoints to an OpenCLIP-compatible format and uploaded them to [this Huggingface Repo], so the model is more convenient to access!
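A minimal loading sketch with OpenCLIP is shown below. The checkpoint path/filename is a placeholder for whatever you download from the Huggingface repo, and the model-name string must match the architecture of that checkpoint.

```python
import torch
import open_clip

# Any architecture for which a RemoteCLIP checkpoint exists, e.g. "RN50",
# "ViT-B-32", or "ViT-L-14"; it must match the checkpoint you downloaded.
model_name = "ViT-B-32"

# Build a standard OpenCLIP model and its preprocessing transforms.
model, _, preprocess = open_clip.create_model_and_transforms(model_name)
tokenizer = open_clip.get_tokenizer(model_name)

# Placeholder path: point this at the RemoteCLIP checkpoint downloaded
# from the Huggingface repo linked above.
checkpoint = torch.load("path/to/RemoteCLIP-ViT-B-32.pt", map_location="cpu")
model.load_state_dict(checkpoint)
model.eval()
```

Once loaded, `model.encode_image` and `model.encode_text` produce the embeddings used in the retrieval evaluation below.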

Retrieval Evaluation

To perform cross-modal retrieval with RemoteCLIP, we extract image and text representations on the test split, perform L2 normalization, and retrieve the most similar samples based on the dot-product similarity measure. We report the retrieval recall at top-1 (R@1), top-5 (R@5), and top-10 (R@10), as well as the mean recall of these values.

We have prepared a retrieval.py script to replicate the retrieval evaluation of RemoteCLIP on the RSITMD, RSICD, and UCM datasets.
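As a rough sketch of what such an evaluation does (not the contents of retrieval.py itself), the snippet below encodes images and captions, L2-normalizes the embeddings, ranks candidates by dot-product similarity, and computes text-to-image R@K. Dataset loading, preprocessing, and the image-to-text direction are omitted, and the function and argument names are our own.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def recall_at_k(model, tokenizer, images, captions, caption_to_image, ks=(1, 5, 10)):
    """Text-to-image retrieval recall.

    images:           batch of preprocessed image tensors, shape (N, 3, H, W)
    captions:         list of caption strings
    caption_to_image: caption_to_image[i] is the index of the ground-truth
                      image for captions[i]
    """
    # Encode and L2-normalize both modalities.
    image_feats = F.normalize(model.encode_image(images), dim=-1)
    text_feats = F.normalize(model.encode_text(tokenizer(captions)), dim=-1)

    # Dot-product similarity between every caption and every image.
    sims = text_feats @ image_feats.T               # (num_captions, num_images)
    ranks = sims.argsort(dim=-1, descending=True)   # best match first

    target = torch.as_tensor(caption_to_image).unsqueeze(-1)
    recalls = {}
    for k in ks:
        hits = (ranks[:, :k] == target).any(dim=-1).float()
        recalls[f"R@{k}"] = hits.mean().item() * 100
    recalls["mean recall"] = sum(recalls.values()) / len(ks)
    return recalls
```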

Acknowledgments

Citation

If you find this work useful, please cite our paper as:

@article{remoteclip,
  author       = {Fan Liu and
                  Delong Chen and
                  Zhangqingyun Guan and
                  Xiaocong Zhou and
                  Jiale Zhu and
                  Qiaolin Ye and
                  Liyong Fu and
                  Jun Zhou},
  title        = {RemoteCLIP: {A} Vision Language Foundation Model for Remote Sensing},
  journal      = {{IEEE} Transactions on Geoscience and Remote Sensing},
  volume       = {62},
  pages        = {1--16},
  year         = {2024},
  url          = {https://doi.org/10.1109/TGRS.2024.3390838},
  doi          = {10.1109/TGRS.2024.3390838},
}