SonarSAM
This work introduces the Segment Anything Model (SAM) to sonar images. We conduct a comprehensive investigation of fine-tuning methods for SAM, including LoRA and visual prompt tuning, and provide a unified framework that integrates these fine-tuning methods to facilitate comparison. If this project is helpful to your research, please consider citing our paper.
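LoRA freezes the pretrained weights and learns only a low-rank update on top of them. A minimal numpy sketch of the idea (hypothetical dimensions; this is an illustration of the technique, not the repo's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 16, 16, 4  # hypothetical layer dimensions; r is the LoRA rank

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized
scaling = 1.0 / r

def lora_forward(x):
    # y = W x + scaling * (B A) x; only A and B are updated during fine-tuning
    return W @ x + scaling * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)

# With B initialized to zero, the LoRA branch contributes nothing at the start,
# so the adapted layer reproduces the pretrained output exactly.
assert np.allclose(y, W @ x)
```

Because the update has rank r, only `r * (d_in + d_out)` parameters are trained instead of the full `d_in * d_out`, which is what makes LoRA attractive for adapting a large backbone like SAM's image encoder.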
@article{wang2023sonarsam,
  title={When SAM Meets Sonar Images},
  author={Wang, Lin and Ye, Xiufen and Zhu, Liqiang and Wu, Weijie and Zhang, Jianguo and Xing, Huiming and Hu, Chao},
  journal={arXiv preprint arXiv:2306.14109},
  year={2023}
}
Update
- 2023-06-30 Support LoRA fine-tuning of the Mobile SAM backbone.
- 2023-06-29 Support full fine-tuning of the Mobile SAM backbone.
Dataset
This work uses the Marine Debris dataset, which is available at Forward-Looking Sonar Marine Debris Datasets.
Training
- Using box prompts
python train_SAM_box.py --config ./configs/sam_box.yaml
- Semantic segmentation
python train_SAM.py --config ./configs/sam.yaml
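For box-prompt training, the prompts are typically derived from the ground-truth masks by taking their bounding boxes in SAM's `[x0, y0, x1, y1]` format. A minimal numpy sketch of that conversion (our assumption about the pipeline, not necessarily the repo's exact logic; `jitter` is a hypothetical augmentation parameter):

```python
import numpy as np

def mask_to_box(mask, jitter=0):
    """Convert a binary mask to an [x0, y0, x1, y1] box, SAM's box-prompt format."""
    ys, xs = np.nonzero(mask)
    x0, x1 = xs.min() - jitter, xs.max() + jitter
    y0, y1 = ys.min() - jitter, ys.max() + jitter
    return np.array([x0, y0, x1, y1])

mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 3:7] = True          # object occupies rows 2-4, cols 3-6
print(mask_to_box(mask))       # → [3 2 6 4]
```

In practice a small random jitter is often added to the box corners during training so the model does not overfit to perfectly tight boxes.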
License
The model is licensed under the Apache 2.0 license.
Acknowledgment
This project builds on the following awesome projects:
- Segment Anything Model: SAM
- Prompt layer & Custom segmentation head: LearnablePromptSAM
- LoRA: SAMed