<p align="center">MedficientSAM: A Robust Medical Segmentation Model with Optimized Inference Pipeline for Limited Clinical Settings</p>

<p align="center"> <img height="250" alt="screen" src="assets/architecture.png"> <img height="250" alt="screen" src="assets/qualitative.png"> </p>

## Getting Started

```bash
git clone --recursive https://github.com/hieplpvip/medficientsam.git
cd medficientsam
conda env create -f environment.yaml -n medficientsam
conda activate medficientsam
```

## Environment and Requirements

| System | Ubuntu 22.04.5 LTS |
| --- | --- |
| CPU | AMD EPYC 7742 64-Core Processor |
| RAM | 256GB |
| GPU (number and type) | One NVIDIA A100 40G |
| CUDA version | 12.0 |
| Programming language | Python 3.10 |
| Deep learning framework | torch 2.2.2, torchvision 0.17.2 |

## Results

Accuracy metrics are evaluated on the public validation set of CVPR 2024 Segment Anything In Medical Images On Laptop Challenge. The computational metrics are obtained on an Intel(R) Core(TM) i9-10900K.

| Model | Resolution | Params | FLOPs | DSC | NSD | 2D Run Time | 3D Run Time | 2D Memory Usage | 3D Memory Usage |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MedficientSAM-L0 | 512x512 | 34.79M | 36.80G | 85.85% | 87.05% | 0.9s | 7.4s | 448MB | 687MB |
| MedficientSAM-L1 | 512x512 | 47.65M | 51.05G | 86.42% | 87.95% | 1.0s | 9.0s | 553MB | 793MB |
| MedficientSAM-L2 | 512x512 | 61.33M | 70.71G | 86.08% | 87.53% | 1.1s | 11.1s | 663MB | 903MB |
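For reference, the DSC values above are Dice Similarity Coefficients between predicted and ground-truth binary masks. A minimal sketch of that metric (NSD additionally requires a surface-distance tolerance, e.g. via the `surface-distance` package, and is omitted here):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice Similarity Coefficient (DSC) between two binary masks.

    Returns 1.0 when both masks are empty (a common convention;
    the challenge's official evaluation script is authoritative).
    """
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / denom
```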

## Reproducibility

The Docker images can be found here.

```bash
docker load -i seno.tar.gz
docker container run -m 8G --name seno --rm -v $PWD/test_input/:/workspace/inputs/ -v $PWD/test_output/:/workspace/outputs/ seno:latest /bin/bash -c "sh predict.sh"
```

To measure the running time (including Docker starting time), see https://github.com/bowang-lab/MedSAM/blob/LiteMedSAM/CVPR24_time_eval.py
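The linked script times the full `docker container run` invocation, including container start-up. A simplified stand-in for that idea (not the challenge's official protocol) looks like:

```python
import subprocess
import time

def timed_run(cmd):
    """Run a command and return (elapsed_seconds, returncode).

    For the challenge, `cmd` would be the full `docker container run ...`
    command line, so Docker start-up time is included in the measurement.
    """
    start = time.perf_counter()
    proc = subprocess.run(cmd)
    return time.perf_counter() - start, proc.returncode
```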

## Data Preparation

You need to participate in the challenge to access the dataset. After downloading it, copy `.env.example` to `.env` and set `CVPR2024_MEDSAM_DATA_DIR` to the correct path.

The directory structure should look like this:

```
CVPR24-MedSAMLaptopData
├── train_npz
│   ├── CT
│   ├── Dermoscopy
│   ├── Endoscopy
│   ├── Fundus
│   ├── Mammography
│   ├── Microscopy
│   ├── MR
│   ├── OCT
│   ├── PET
│   ├── US
│   └── XRay
└── validation-box
    └── imgs
```
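A quick sanity check of the layout above, counting the `.npz` files under each modality folder (a sketch; it only assumes the directory structure shown):

```python
from pathlib import Path

def count_npz_per_modality(root):
    """Count .npz files per modality folder under <root>/train_npz.

    `root` is the directory pointed to by CVPR2024_MEDSAM_DATA_DIR.
    """
    train = Path(root) / "train_npz"
    return {
        d.name: sum(1 for _ in d.rglob("*.npz"))
        for d in sorted(train.iterdir())
        if d.is_dir()
    }
```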

## Training

See `train_scripts`.

## Inference

You can download the weights here.

To run MedficientSAM-L1 on the validation set:

```bash
python src/infer.py experiment=infer_finetuned_l1
```

See `configs/experiment/infer_*` for running other model variants.

## Build Docker image

### Export model

```bash
python src/export_onnx.py experiment=export_finetuned_l0_cpp output_dir=weights/finetuned-l1-augmented/e2_cpp
```

### Build

```bash
docker build -f Dockerfile.cpp -t seno.fat .
slim build --target seno.fat --tag seno --http-probe=false --include-workdir --mount $PWD/test_input/:/workspace/inputs/ --mount $PWD/test_output/:/workspace/outputs/ --exec "sh predict.sh"
docker save seno | gzip -c > seno.tar.gz
```

## References

<!-- ## Citation If MedficientSAM is useful or relevant to your research, please kindly recognize our contributions by citing our paper: ``` TBU ``` -->