SAMUS

This repo is the official implementation of:
SAMUS: Adapting Segment Anything Model for Clinically-Friendly and Generalizable Ultrasound Image Segmentation.
(Details of SAMUS can be found in the models directory of this repo or in the paper.)

Highlights

🏆 Low GPU requirements. (a single RTX 3090 Ti with 24 GB of GPU memory is enough)
🏆 Large ultrasound dataset. (about 30K images and 69K masks covering 6 categories)
🏆 Excellent performance, with especially strong generalization ability.

Installation

Following Segment Anything, SAMUS uses python=3.8.16, pytorch=1.8.0, and torchvision=0.9.0.

  1. Clone the repository.
    git clone https://github.com/xianlin7/SAMUS.git
    cd SAMUS
    
  2. Create a virtual environment for SAMUS and activate the environment.
    conda create -n SAMUS python=3.8
    conda activate SAMUS
    
  3. Install PyTorch and TorchVision. (you can follow the official PyTorch instructions; a version-matched command is shown after this list)
  4. Install the remaining dependencies.
    pip install -r requirements.txt
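
For step 3, one version-matched install command, assuming CUDA 11.1 (taken from the official PyTorch previous-versions page; adjust cudatoolkit to your driver):

    conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=11.1 -c pytorch -c conda-forge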

Checkpoints

We use the vit_b checkpoint of SAM.
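
For reference, the vit_b checkpoint (sam_vit_b_01ec64.pth) is available from the official Segment Anything release:

    wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth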

Data

Training

Once you have the data ready, you can start training the model.

cd "/home/...  .../SAMUS/"
python train.py --modelname SAMUS --task <your dataset config name>
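
On a multi-GPU machine, training can be pinned to a single card with the standard CUDA_VISIBLE_DEVICES environment variable (illustrative; the train.py arguments are unchanged):

    CUDA_VISIBLE_DEVICES=0 python train.py --modelname SAMUS --task <your dataset config name>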

Testing

Do not forget to set load_path in ./utils/config.py to your trained SAMUS checkpoint before testing.
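
A minimal sketch of that edit (the surrounding structure of ./utils/config.py is assumed, and the path below is hypothetical):

    # in ./utils/config.py: point load_path at your trained SAMUS weights
    load_path = "./checkpoints/SAMUS/SAMUS_best.pth"  # hypothetical path; replace with your own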

    python test.py --modelname SAMUS --task <your dataset config name>

Citation

If our SAMUS is helpful to you, please consider citing:

@misc{lin2023samus,
      title={SAMUS: Adapting Segment Anything Model for Clinically-Friendly and Generalizable Ultrasound Image Segmentation}, 
      author={Xian Lin and Yangyang Xiang and Li Zhang and Xin Yang and Zengqiang Yan and Li Yu},
      year={2023},
      eprint={2309.06824},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}