
# U-SAM

<p align="left"> <img src="figures/framework.png" width="100%" height="100%"> </p>

This repo holds the PyTorch implementation of U-SAM:

**Tuning Vision Foundation Models for Rectal Cancer Segmentation from CT Scans: Development and Validation of U-SAM**

## Model

## Datasets

The following pictures are demonstrations of the CARE dataset.

<img src="figures/CARE.png" alt="CARE" style="zoom: 67%;" />


We conducted our experiments on CARE and WORD. Here we provide links for public access to these datasets:

- CARE: [paper] [dataset]
- WORD: [paper] [dataset]

## Get Started

### Main Requirements

### Pre-trained Weights

We utilized SAM ViT-B in our model; the pre-trained weights should be placed in the `weight` folder.

The pre-trained weights are available here, or you can download them directly via the following link.
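
If you prefer the command line, the sketch below fetches the official SAM ViT-B checkpoint from the public Segment Anything release into the `weight` folder; the exact filename the training script expects is an assumption and may need adjusting.

```bash
# Download the official SAM ViT-B checkpoint into the weight folder.
# The target filename is assumed; rename it if the code expects something else.
mkdir -p weight
wget -O weight/sam_vit_b_01ec64.pth \
  https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth
```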

### Training

Train 100 epochs on CARE with a single GPU:

```bash
python u-sam.py --epochs 100 --batch_size 24 --dataset rectum
```

Train 100 epochs on CARE with multiple GPUs (via DDP, e.g., on 8 GPUs):

```bash
CUDA_LAUNCH_BLOCKING=1 PYTHONUNBUFFERED=1 CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python -m torch.distributed.launch \
    --master_port 29666 \
    --nproc_per_node=8 \
    --use_env u-sam.py \
    --num_workers 4 \
    --epochs 100 \
    --batch_size 24 \
    --dataset rectum
```
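
`torch.distributed.launch` is deprecated in recent PyTorch releases. If your installation ships `torchrun`, an equivalent launch (a sketch, assuming the script reads its rank from environment variables, as `--use_env` implies) would be:

```bash
# torchrun replaces torch.distributed.launch and always passes the rank
# through environment variables, so --use_env is no longer needed.
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
torchrun --master_port 29666 --nproc_per_node=8 u-sam.py \
    --num_workers 4 --epochs 100 --batch_size 24 --dataset rectum
```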

For convenience, you can use our default bash file:

```bash
bash train_sam.sh
```
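
`train_sam.sh` presumably just wraps the multi-GPU launch above; a minimal sketch of such a script is shown below (an assumption, the shipped file may use different defaults).

```bash
#!/usr/bin/env bash
# Minimal sketch of a launch script wrapping the DDP command above;
# the defaults in the actual train_sam.sh may differ.
export PYTHONUNBUFFERED=1
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7

python -m torch.distributed.launch \
    --master_port 29666 \
    --nproc_per_node=8 \
    --use_env u-sam.py \
    --num_workers 4 \
    --epochs 100 \
    --batch_size 24 \
    --dataset rectum
```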

### Evaluation

Evaluate on CARE with a single GPU:

```bash
python u-sam.py --dataset rectum --eval --resume chkpt/best.pth
```

The model checkpoint for evaluation should be specified via `--resume`.
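
The same entry point should also cover the WORD experiments; the sketch below assumes the WORD dataset is selected with `--dataset word`, which is a guessed flag value and may differ in the actual code.

```bash
# --dataset word is an assumed value; check the dataset options in u-sam.py.
python u-sam.py --dataset word --eval --resume chkpt/best.pth
```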

## Feedback and Contact

For further questions, please feel free to contact Hantao Zhang.

## Acknowledgement

Our code is based on Segment Anything and SAMed. We thank the authors for releasing their code.

## Citation

If this code is helpful for your study, please cite:

```bibtex
@article{zhang2023care,
  title={CARE: A Large Scale CT Image Dataset and Clinical Applicable Benchmark Model for Rectal Cancer Segmentation},
  author={Zhang, Hantao and Guo, Weidong and Qiu, Chenyang and Wan, Shouhong and Zou, Bingbing and Wang, Wanqin and Jin, Peiquan},
  journal={arXiv preprint arXiv:2308.08283},
  year={2023}
}
```