PointSAM: Pointly-Supervised Segment Anything Model for Remote Sensing Images

<p align="center"> <img src="https://i.imgur.com/waxVImv.png" alt="PointSAM"> </p>

[Paper](https://arxiv.org/abs/2409.13401)


📢 Latest Updates


🎨 Overview


🎮 Getting Started

1. Install Environment

# create and activate the conda environment
conda create --name pointsam python=3.10
conda activate pointsam

# install PyTorch with CUDA 11.8 wheels
pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu118

# clone the repo and install its dependencies
git clone https://github.com/Lans1ng/PointSAM.git
cd PointSAM
pip install -r requirements.txt

# install SAM 2 in editable mode
cd segment_anything_2
pip install -e .
cd ..
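
To quickly confirm the environment is usable, the short Python check below prints the installed PyTorch version, whether CUDA is visible, and whether SAM 2 imports. This is only a convenience sketch, not part of the official setup; the import name `sam2` is assumed from the package installed in segment_anything_2 above.

# Optional environment check (Python)
import torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())

try:
    # installed above via `pip install -e .` in segment_anything_2;
    # the import name `sam2` is an assumption based on the upstream SAM 2 package
    import sam2
    print("sam2 import OK")
except ImportError as err:
    print("sam2 not importable:", err)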

2. Prepare Dataset

WHU Building Dataset

HRSID Dataset

NWPU VHR-10 Dataset

For convenience, all the JSON annotations are included in this repo; you only need to download the corresponding images. Organize the dataset as follows (a sanity-check script is given after the tree):

data
├── WHU
│    ├── annotations
│    │   ├── WHU_building_train.json
│    │   ├── WHU_building_test.json
│    │   └── WHU_building_val.json
│    └── images
│        ├── train
│        │    ├── image
│        │    └── label
│        ├── val
│        │    ├── image
│        │    └── label
│        └── test
│             ├── image
│             └── label
├── HRSID
│    ├── Annotations
│    │   ├── all
│    │   ├── inshore
│    │   │      ├── inshore_test.json
│    │   │      └── inshore_train.json
│    │   └── offshore
│    └── Images
└── NWPU
     ├── Annotations
     │   ├── NWPU_instnaces_train.json
     │   └── NWPU_instnaces_val.json
     └── Images
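
To verify that the annotation files are in place, a small sanity-check script such as the one below can be run from the repo root. It assumes the JSON files follow a COCO-style schema with `images` and `annotations` keys (suggested by the file names, but an assumption) and uses a few of the paths from the tree above as examples.

import json
from pathlib import Path

# A few annotation files from the tree above; extend or adjust as needed.
ANNOTATION_FILES = [
    "data/WHU/annotations/WHU_building_train.json",
    "data/HRSID/Annotations/inshore/inshore_train.json",
    "data/NWPU/Annotations/NWPU_instnaces_train.json",
]

for ann in ANNOTATION_FILES:
    path = Path(ann)
    if not path.exists():
        print(f"[missing] {ann}")
        continue
    with path.open() as f:
        data = json.load(f)
    # COCO-style files keep image and instance lists under these keys
    # (an assumption here; adjust if the schema differs)
    print(f"[ok] {ann}: {len(data.get('images', []))} images, "
          f"{len(data.get('annotations', []))} annotations")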

💡 Acknowledgement

šŸ–Šļø Citation

If you find this project useful in your research, please consider citing:

@article{liu2024pointsam,
  title={PointSAM: Pointly-Supervised Segment Anything Model for Remote Sensing Images},
  author={Liu, Nanqing and Xu, Xun and Su, Yongyi and Zhang, Haojie and Li, Heng-Chao},
  journal={arXiv preprint arXiv:2409.13401},
  year={2024}
}