# Swin-LiteMedSAM

Submission to the CVPR 2024 challenge SEGMENT ANYTHING IN MEDICAL IMAGES ON LAPTOP (team name: lkeb).

<p align="center"> <img src="asset/Swin-LiteMedSAM.png" width="800"/> </p>

## Install
- Create a virtual environment and activate it

  ```bash
  conda create -n swin_litemedsam python=3.10 -y
  conda activate swin_litemedsam
  ```

- Install PyTorch 2.x
- Enter the Swin_LiteMedSAM folder and install the package

  ```bash
  cd Swin_LiteMedSAM
  pip install -e .
  ```
## Model

- Download the model checkpoint and place it in `workdir`
- Download the docker image and load it for quick inference

  ```bash
  docker load -i lkeb.tar.gz
  ```
## Usage

### Data

Download the training npz data from the challenge website. The training data covers 11 modalities: CT, MRI, PET, X-ray, ultrasound, mammography, OCT, endoscopy, fundus, dermoscopy, and microscopy.
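As a quick sanity check after downloading, you can inspect a case with NumPy. The sketch below builds a tiny synthetic case in the assumed challenge layout (the key names `imgs` and `gts` and the file name are assumptions; check them against a real downloaded file):

```python
import numpy as np

# Create a tiny synthetic case in the assumed challenge format
# ('imgs' = image, 'gts' = label mask; verify against a real case).
np.savez_compressed(
    "demo_case.npz",
    imgs=np.zeros((256, 256, 3), dtype=np.uint8),
    gts=np.zeros((256, 256), dtype=np.uint8),
)

with np.load("demo_case.npz") as case:
    print("arrays:", sorted(case.keys()))
    print("image shape:", case["imgs"].shape)
    print("mask shape:", case["gts"].shape)
```

Swap `demo_case.npz` for one of the real training files to confirm the arrays it contains before starting training.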
### Train

- Distill the encoder

  ```bash
  cd distill
  python train_distill.py
  ```

- Train only the decoder

  ```bash
  python train.py
  ```

- Train the encoder and decoder together

  ```bash
  python train.py -freeze False
  ```
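The distillation step trains the lightweight Swin encoder to reproduce the teacher encoder's image embeddings; the actual objective lives in `train_distill.py`. As a rough illustration only, here is a minimal NumPy sketch of feature-level distillation with a mean-squared-error objective (the MSE choice and the embedding shape are assumptions, not the repository's exact loss):

```python
import numpy as np

def distill_loss(student_feat: np.ndarray, teacher_feat: np.ndarray) -> float:
    """Mean-squared error between student and teacher image embeddings."""
    return float(np.mean((student_feat - teacher_feat) ** 2))

# Toy embeddings standing in for encoder outputs (B, C, H, W are assumed).
rng = np.random.default_rng(0)
teacher = rng.standard_normal((1, 256, 64, 64))
student = teacher + 0.1 * rng.standard_normal(teacher.shape)

loss = distill_loss(student, teacher)
print(f"distillation loss: {loss:.4f}")
```

Minimizing this loss over the training images pushes the small encoder's features toward the teacher's, after which the decoder can be trained (or fine-tuned jointly) as in the commands above.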
### Docker Infer

- Load the docker image

  ```bash
  docker load -i lkeb.tar.gz
  ```

- Run docker inference on the cases in `$PWD/imgs`

  ```bash
  docker container run -m 8G --name lkeb --rm -v $PWD/imgs/:/workspace/inputs/ -v $PWD/lkeb/:/workspace/outputs/ lkeb:latest /bin/bash -c "sh predict.sh"
  ```

- If you run into a "Permission denied" error, grant access to the mounted folders:

  ```bash
  chmod -R 777 ./*
  ```
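After the container finishes, the predicted masks land in `$PWD/lkeb` (the folder mounted as `/workspace/outputs/` above). The sketch below shows one way to check them; it writes a dummy prediction so it runs stand-alone, and the `.npz` format with a `segs` key is an assumption based on the challenge's output convention, so adjust it if your files differ:

```python
import glob
import os
import numpy as np

# Dummy prediction so the snippet runs stand-alone; in practice the
# docker run above fills $PWD/lkeb with the real predicted masks.
os.makedirs("lkeb", exist_ok=True)
np.savez_compressed("lkeb/demo_pred.npz", segs=np.zeros((256, 256), dtype=np.uint8))

for path in sorted(glob.glob("lkeb/*.npz")):
    with np.load(path) as pred:
        segs = pred["segs"]  # 'segs' is the assumed output key
        print(path, segs.shape, "labels:", np.unique(segs))
```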