SAM & SAM 2 for Medical Image Segmentation.

If you find this project helpful, please cite the corresponding surveys:

@article{SAM4MIS,
  title={Segment Anything Model for Medical Image Segmentation: Current Applications and Future Directions},
  author={Zhang, Yichi and Shen, Zhenrong and Jiao, Rushi},
  journal={Computers in Biology and Medicine},
  volume={171},
  pages={108238},
  year={2024}
}

@article{SAM2-MIS,
  title={Unleashing the Potential of SAM2 for Biomedical Images and Videos: A Survey},
  author={Zhang, Yichi and Shen, Zhenrong},
  journal={arXiv preprint arXiv:2408.12889},
  year={2024}
}

Table of Contents

- About Segment Anything Model (SAM)
- Literature Reviews of SAM 2 Adaptations for Medical Image Segmentation
- Literature Reviews of Foundation Models / SAM for Medical Image Segmentation
- Large-Scale Datasets for Developing Medical Foundation Models
- CVPR2024 Workshop: Segment Anything in Medical Images on Laptop

About Segment Anything Model (SAM) <div id="introduction"></div>

Segment Anything Model (SAM) uses a vision transformer-based image encoder to extract image features and compute an image embedding, and a prompt encoder to embed prompts and incorporate user interactions. The information extracted by the two encoders is then combined by a lightweight mask decoder, which generates segmentation results from the image embedding, prompt embedding, and output token. For more details, please refer to the original SAM paper.
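As a concrete illustration of this prompt-based workflow, the minimal sketch below uses the official `segment-anything` package to load a ViT-B checkpoint and predict a mask from a single point prompt. The checkpoint path, image file, and point coordinates are placeholder assumptions, not part of this repository.

```python
# Minimal sketch of SAM's promptable segmentation workflow
# (assumes `pip install segment-anything` and a downloaded ViT-B checkpoint).
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load the image encoder, prompt encoder, and mask decoder with pretrained weights.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # placeholder path
predictor = SamPredictor(sam)

# The image embedding is computed once per image by the image encoder...
image = cv2.cvtColor(cv2.imread("ct_slice.png"), cv2.COLOR_BGR2RGB)  # placeholder image
predictor.set_image(image)

# ...and reused for every prompt. Here a single foreground click prompts the mask decoder.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[256, 256]]),  # (x, y) of a user click, placeholder
    point_labels=np.array([1]),           # 1 = foreground, 0 = background
    multimask_output=True,                # return several candidate masks
)
best_mask = masks[np.argmax(scores)]      # pick the mask with the highest predicted IoU
```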


A brief chronology of Segment Anything Model (SAM) and its variants for medical image segmentation in 2023.

Literature Reviews of SAM 2 Adaptations for Medical Image Segmentation. <div id="sam24mis"></div>

| Date | Authors | Title | Code |
| ---- | ------- | ----- | ---- |
| 202408 | M. Mansoori et al. | Self-Prompting Polyp Segmentation in Colonoscopy using Hybrid Yolo-SAM 2 Model (paper) | Code |
| 202408 | X. Chen et al. | SAM-OCTA2: Layer Sequence OCTA Segmentation with Fine-tuned Segment Anything Model 2 (paper) | Code |
| 202408 | L. Zhao et al. | Retrieval-augmented Few-shot Medical Image Segmentation with Foundation Models (paper) | None |
| 202408 | Z. Yildiz et al. | SAM & SAM 2 in 3D Slicer: SegmentWithSAM Extension for Annotating Medical Images (paper) | Code |
| 202408 | Y. He et al. | A Short Review and Evaluation of SAM2's Performance in 3D CT Image Segmentation (paper) | Code |
| 202408 | X. Xiong et al. | SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation (paper) | Code |
| 202408 | H. Liu et al. | Surgical SAM 2: Real-time Segment Anything in Surgical Video by Efficient Frame Pruning (paper) | Code |
| 202408 | Y. Yamagishi et al. | Zero-shot 3D Segmentation of Abdominal Organs in CT Scans Using Segment Anything Model 2: Adapting Video Tracking Capabilities for 3D Medical Imaging (paper) | None |
| 202408 | M. Mansoori et al. | Polyp SAM 2: Advancing Zero shot Polyp Segmentation in Colorectal Cancer Detection (paper) | Code |
| 202408 | AS. Yu et al. | Novel adaptation of video segmentation to 3D MRI: efficient zero-shot knee segmentation with SAM2 (paper) | None |
| 202408 | J. Yu et al. | SAM 2 in Robotic Surgery: An Empirical Evaluation for Robustness and Generalization in Surgical Video Segmentation (paper) | None |
| 202408 | T. Chen et al. | SAM2-Adapter: Evaluating & Adapting Segment Anything 2 in Downstream Tasks: Camouflage, Shadow, Medical Image Segmentation, and More (paper) | None |
| 202408 | S. Sengupta et al. | Is SAM 2 Better than SAM in Medical Image Segmentation? (paper) | None |
| 202408 | Y. Shen et al. | Performance and Non-adversarial Robustness of the Segment Anything Model 2 in Surgical Video Segmentation (paper) | None |
| 202408 | M. Zhang et al. | SAM2-PATH: A better segment anything model for semantic segmentation in digital pathology (paper) | Code |
| 202408 | J. Ma et al. | Segment Anything in Medical Images and Videos: Benchmark and Deployment (paper) | Code |
| 202408 | Z. Yan et al. | Biomedical SAM 2: Segment Anything in Biomedical Images and Videos (paper) | None |
| 202408 | C. Shen et al. | Interactive 3D Medical Image Segmentation with SAM 2 (paper) | Code |
| 202408 | A. Lou et al. | Zero-Shot Surgical Tool Segmentation in Monocular Video Using Segment Anything Model 2 (paper) | Code |
| 202408 | J. Zhu et al. | Medical SAM 2: Segment medical images as video via Segment Anything Model 2 (paper) | Code |
| 202408 | H. Dong et al. | Segment anything model 2: an application to 2D and 3D medical images (paper) | None |

Literature Reviews of Foundation Models / SAM for Medical Image Segmentation. <div id="sam4mis"></div>

| Date | Authors | Title | Code |
| ---- | ------- | ----- | ---- |
| 202409 | H. Wang et al. | Tri-Plane Mamba: Efficiently Adapting Segment Anything Model for 3D Medical Images (paper) | Code |
| 202409 | AS. Wahd et al. | Sam2Rad: A Segmentation Model for Medical Images with Learnable Prompts (paper) | Code |
| 202409 | Y. Liu et al. | When 3D Partial Points Meets SAM: Tooth Point Cloud Segmentation with Sparse Labels (paper) | Code |
| 202409 | X. Zheng et al. | Curriculum Prompting Foundation Models for Medical Image Segmentation (paper) | Code |
| 202408 | S. Kato et al. | Generalized SAM: Efficient Fine-Tuning of SAM for Variable Input Image Sizes (paper) | Code |
| 202407 | C. Zhou et al. | SAM-SP: Self-Prompting Makes SAM Great Again (paper) | None |
| 202408 | S. Yang et al. | SAM-UNet: Enhancing Zero-Shot Segmentation of SAM for Universal Medical Images (paper) | Code |
| 202408 | J. Wei et al. | SAM-FNet: SAM-Guided Fusion Network for Laryngo-Pharyngeal Tumor Detection (paper) | Code |
| 202408 | X. Wei et al. | PromptSAM+: Malware Detection based on Prompt Segment Anything Model (paper) | Code |
| 202407 | J. Cai et al. | PESAM: Privacy-Enhanced Segment Anything Model for Medical Image Segmentation (paper) | None |
| 202407 | M. Asokan et al. | A Federated Learning-Friendly Approach for Parameter-Efficient Fine-Tuning of SAM in 3D Segmentation (paper) | Code |
| 202407 | SN. Gowda et al. | CC-SAM: SAM with Cross-feature Attention and Context for Ultrasound Image Segmentation (paper) | None |
| 202407 | X. Huo et al. | Dr-SAM: U-Shape Structure Segment Anything Model for Generalizable Medical Image Segmentation (paper) | None |
| 202407 | H. Fang et al. | SAM-MIL: A Spatial Contextual Aware Multiple Instance Learning Approach for Whole Slide Image Classification (paper) | None |
| 202407 | Q. Xu et al. | ESP-MedSAM: Efficient Self-Prompting SAM for Universal Domain-Generalized Medical Image Segmentation (paper) | Code |
| 202407 | X. Zhao et al. | SAM-Driven Weakly Supervised Nodule Segmentation with Uncertainty-Aware Cross Teaching (paper) | None |
| 202407 | Q. Xu et al. | ProtoSAM: One Shot Medical Image Segmentation With Foundational Models (paper) | Code |
| 202407 | A. Murali et al. | CycleSAM: One-Shot Surgical Scene Segmentation using Cycle-Consistent Feature Matching to Prompt SAM (paper) | None |
| 202407 | T. Song et al. | TinySAM-Med3D: A Lightweight Segment Anything Model for Volumetric Medical Imaging with Mixture of Experts (paper) | None |
| 202407 | Y. Gao et al. | MBA-Net: SAM-driven Bidirectional Aggregation Network for Ovarian Tumor Segmentation (paper) | None |
| 202407 | J. Miao et al. | Cross Prompting Consistency with Segment Anything Model for Semi-supervised Medical Image Segmentation (paper) | Code |
| 202407 | G. Wang et al. | SAM-Med3D-MoE: Towards a Non-Forgetting Segment Anything Model via Mixture of Experts for 3D Medical Image Segmentation (paper) | None |
| 202407 | Z. Zhang et al. | Quantification of cardiac capillarization in basement-membrane-immunostained myocardial slices using Segment Anything Model (paper) | None |
| 202407 | H. Li et al. | ASPS: Augmented Segment Anything Model for Polyp Segmentation (paper) | Code |
| 202406 | Y. Xie et al. | SimTxtSeg: Weakly-Supervised Medical Image Segmentation with Simple Text Cues (paper) | None |
| 202406 | X. Deng et al. | MemSAM: Taming Segment Anything Model for Echocardiography Video Segmentation (paper) | Code |
| 202406 | Yunhe Gao | Training Like a Medical Resident: Context-Prior Learning Toward Universal Medical Image Segmentation (paper) | Code |
| 202406 | C.D Albelda et al. | How SAM Perceives Different mp-MRI Brain Tumor Domains? (paper) | Code |
| 202406 | T. Huang et al. | Improving Segment Anything on the Fly: Auxiliary Online Learning and Adaptive Fusion for Medical Image Segmentation (paper) | Code |
| 202406 | B. Towle et al. | SimSAM: Zero-shot Medical Image Segmentation via Simulated Interaction (paper) | Code |
| 202405 | Y. Gu et al. | LeSAM: Adapt Segment Anything Model for medical lesion segmentation (paper) | None |
| 202405 | J. Leng et al. | Development of UroSAM: A Machine Learning Model to Automatically Identify Kidney Stone Composition from Endoscopic Video (paper) | None |
| 202405 | MM. Rahman et al. | PP-SAM: Perturbed Prompts for Robust Adaptation of Segment Anything Model for Polyp Segmentation (paper) | Code |
| 202405 | X. Zhang et al. | A Foundation Model for Brain Lesion Segmentation with Mixture of Modality Experts (paper) | Code |
| 202405 | TJ. Chan et al. | SAM3D: Zero-Shot Semi-Automatic Segmentation in 3D Medical Images with the Segment Anything Model (paper) | None |
| 202405 | HL. Zedda et al. | SAMMI: Segment Anything Model for Malaria Identification (paper) | None |
| 202404 | H. Zhou et al. | AGSAM: Agent-Guided Segment Anything Model for Automatic Segmentation in Few-Shot Scenarios (paper) | None |
| 202404 | V. Zohranyan et al. | Dr-SAM: An End-to-End Framework for Vascular Segmentation, Diameter Estimation, and Anomaly Detection on Angiography Images (paper) | Code |
| 202404 | Z. Tu et al. | Ultrasound SAM Adapter: Adapting SAM for Breast Lesion Segmentation in Ultrasound Images (paper) | Code |
| 202404 | Y. Sheng et al. | Surgical-DeSAM: Decoupling SAM for Instrument Segmentation in Robotic Surgery (paper) | None |
| 202404 | J. Yu et al. | Adapting SAM for Surgical Instrument Tracking and Segmentation in Endoscopic Submucosal Dissection Videos (paper) | None |
| 202404 | H. Gu et al. | How to build the best medical image segmentation algorithm using foundation models: a comprehensive empirical study with Segment Anything Model (paper) | Code |
| 202404 | W. Abebe et al. | SAM-I-Am: Semantic Boosting for Zero-shot Atomic-Scale Electron Micrograph Segmentation (paper) | None |
| 202404 | S. Aleem et al. | Test-Time Adaptation with SaLIP: A Cascade of SAM and CLIP for Zero-shot Medical Image Segmentation (paper) | Code |
| 202404 | Z. Su et al. | Adapting SAM to histopathology images for tumor bud segmentation in colorectal cancer (paper) | None |
| 202404 | Y. Ding et al. | Barely-supervised Brain Tumor Segmentation via Employing Segment Anything Model (paper) | None |
| 202404 | Y. Zhu et al. | SAM-Att: A Prompt-free SAM-related Model with an Attention Module for Automatic Segmentation of the Left Ventricle in Echocardiography (paper) | None |
| 202404 | Y. Liu et al. | Universal 3D CT lesion segmentation using SAM with RECIST annotation (paper) | None |
| 202403 | Z. Cheng et al. | Unleashing the Potential of SAM for Medical Adaptation via Hierarchical Decoding (paper) | Code |
| 202403 | Y. Liu et al. | Segment Any Medical Model Extended (paper) | None |
| 202403 | P. Kulkarni et al. | Anytime, Anywhere, Anyone: Investigating the Feasibility of Segment Anything Model for Crowd-Sourcing Medical Image Annotations (paper) | None |
| 202403 | H. Guo et al. | Towards a Comprehensive, Efficient and Promptable Anatomic Structure Segmentation Model using 3D Whole-body CT Scans (paper) | None |
| 202403 | S. Li et al. | Concatenate, Fine-tuning, Re-training: A SAM-enabled Framework for Semi-supervised 3D Medical Image Segmentation (paper) | Code |
| 202403 | M. Jiang et al. | Uncertainty-Aware Adapter: Adapting Segment Anything Model (SAM) for Ambiguous Medical Image Segmentation (paper) | None |
| 202403 | Z. Chen et al. | Cardiac Magnetic Resonance 2D+T Short- and Long-axis Segmentation via Spatio-temporal SAM Adaptation (paper) | None |
| 202403 | Y. Shen et al. | FastSAM3D: An Efficient Segment Anything Model for 3D Volumetric Medical Images (paper) | Code |
| 202403 | H. Liu et al. | WSI-SAM: Multi-resolution Segment Anything Model (SAM) for histopathology whole-slide images (paper) | Code |
| 202403 | YX. Teoh et al. | Segmentation of Knee Bones for Osteoarthritis Assessment: A Comparative Analysis of Supervised, Few-Shot, and Zero-Shot Learning Approaches (paper) | None |
| 202403 | Y. Wang et al. | SAMDA: Leveraging SAM on Few-Shot Domain Adaptation for Electronic Microscopy Segmentation (paper) | None |
| 202403 | Y. Liu et al. | FedFMS: Exploring Federated Foundation Models for Medical Image Segmentation (paper) | Code |
| 202403 | C. Zhao et al. | Part-aware Personalized Segment Anything Model for Patient-Specific Segmentation (paper) | None |
| 202403 | J. Wang et al. | ProMISe: Promptable Medical Image Segmentation using SAM (paper) | None |
| 202402 | L. Zhang et al. | BLO-SAM: Bi-Level Optimization Based Finetuning of the Segment Anything Model for Overfitting-Preventing Semantic Segmentation (paper) | Code |
| 202402 | KJ. Oguine et al. | From Generalization to Precision: Exploring SAM for Tool Segmentation in Surgical Environments (paper) | None |
| 202402 | J. Ren et al. | Segment anything model for head and neck tumor segmentation with CT, PET and MRI multi-modality images (paper) | None |
| 202402 | Z. Chen et al. | UN-SAM: Universal Prompt-Free Segmentation for Generalized Nuclei Images (paper) | Code |
| 202402 | H. Wu et al. | Tumor segmentation on whole slide images: training or prompting? (paper) | None |
| 202402 | P. Farmanifard et al. | Iris-SAM: Iris Segmentation Using a Foundational Model (paper) | None |
| 202402 | A. Guo et al. | ClickSAM: Fine-tuning Segment Anything Model using click prompts for ultrasound image segmentation (paper) | None |
| 202401 | J. Wan et al. | TriSAM: Tri-Plane SAM for zero-shot cortical blood vessel segmentation in VEM images (paper) | None |
| 202401 | S. Na et al. | Segment Any Cell: A SAM-based Auto-prompting Fine-tuning Framework for Nuclei Segmentation (paper) | None |
| 202401 | H. Gu et al. | SegmentAnyBone: A Universal Model that Segments Any Bone at Any Location on MRI (paper) | Code |
| 202401 | S. Li et al. | ClipSAM: CLIP and SAM Collaboration for Zero-Shot Anomaly Segmentation (paper) | Code |
| 202401 | JD. Gutiérrez et al. | No More Training: SAM's Zero-Shot Transfer Capabilities for Cost-Efficient Medical Image Segmentation (paper) | None |
| 202401 | H. Wang et al. | Leveraging SAM for Single-Source Domain Generalization in Medical Image Segmentation (paper) | Code |
| 202401 | Z. Feng et al. | Swinsam: Fine-Grained Polyp Segmentation in Colonoscopy Images Via Segment Anything Model Integrated with a Swin Transformer Decoder (paper) | None |
| 202312 | Z. Zhao et al. | One Model to Rule them All: Towards Universal Segmentation for Medical Images with Text Prompts (paper) | Code |
| 202312 | W. Yue et al. | Part to Whole: Collaborative Prompting for Surgical Instrument Segmentation (paper) | Code |
| 202312 | ZM. Colbert et al. | Repurposing Traditional U-Net Predictions for Sparse SAM Prompting in Medical Image Segmentation (paper) | None |
| 202312 | W. Xie et al. | SAM Fewshot Finetuning for Anatomical Segmentation in Medical Images (paper) | None |
| 202312 | JG. Almeida et al. | Testing the Segment Anything Model on radiology data (paper) | None |
| 202312 | M. Barakat et al. | Towards SAMBA: Segment Anything Model for Brain Tumor Segmentation in Sub-Sharan African Populations (paper) | None |
| 202312 | Y. Zhang et al. | SQA-SAM: Segmentation Quality Assessment for Medical Images Utilizing the Segment Anything Model (paper) | Code |
| 202312 | S. Chen et al. | ASLseg: Adapting SAM in the Loop for Semi-supervised Liver Tumor Segmentation (paper) | None |
| 202312 | HE. Wong et al. | ScribblePrompt: Fast and Flexible Interactive Segmentation for Any Medical Image (paper) | Code |
| 202312 | Y. Zhang et al. | SemiSAM: Exploring SAM for Enhancing Semi-Supervised Medical Image Segmentation with Extremely Limited Annotations (paper) | None |
| 202312 | Y. Zhao et al. | Segment Anything Model-guided Collaborative Learning Network for Scribble-supervised Polyp Segmentation (paper) | None |
| 202311 | N. Li et al. | Segment Anything Model for Semi-Supervised Medical Image Segmentation via Selecting Reliable Pseudo-Labels (paper) | None |
| 202311 | X. Wei et al. | I-MedSAM: Implicit Medical Image Segmentation with Segment Anything (paper) | None |
| 202311 | Z. Shui et al. | Unleashing the Power of Prompt-driven Nucleus Instance Segmentation (paper) | Code |
| 202311 | M. Li and G. Yang et al. | Where to Begin? From Random to Foundation Model Instructed Initialization in Federated Learning for Medical Image Segmentation (paper) | None |
| 202311 | AK. Tyagi et al. | Guided Prompting in SAM for Weakly Supervised Cell Segmentation in Histopathological Images (paper) | Code |
| 202311 | Y. Du et al. | SegVol: Universal and Interactive Volumetric Medical Image Segmentation (paper) | Code |
| 202311 | DM. Nguyen et al. | On the Out of Distribution Robustness of Foundation Models in Medical Image Segmentation (paper) | None |
| 202311 | U. Israel et al. | A Foundation Model for Cell Segmentation (paper) | Code |
| 202311 | Q. Quan et al. | Slide-SAM: Medical SAM Meets Sliding Window (paper) | None |
| 202311 | Y. Zhang et al. | Segment Anything Model with Uncertainty Rectification for Auto-Prompting Medical Image Segmentation (paper) | Code |
| 202311 | Y. Wang et al. | SAMIHS: Adaptation of Segment Anything Model for Intracranial Hemorrhage Segmentation (paper) | Code |
| 202311 | H. Jiang et al. | GlanceSeg: Real-time microangioma lesion segmentation with gaze map-guided foundation model for early detection of diabetic retinopathy (paper) | None |
| 202311 | Y. Xu et al. | EviPrompt: A Training-Free Evidential Prompt Generation Method for Segment Anything Model in Medical Images (paper) | None |
| 202311 | DL. Ferreira and R. Arnaout | Are foundation models efficient for medical image segmentation? (paper) | Code |
| 202310 | H. Li et al. | Promise: Prompt-driven 3D Medical Image Segmentation Using Pretrained Image Foundation Models (paper) | Code |
| 202310 | D. Anand et al. | One-shot Localization and Segmentation of Medical Images with Foundation Models (paper) | None |
| 202310 | H. Wang et al. | SAM-Med3D (paper) | Code |
| 202310 | SK. Kim et al. | Evaluation and improvement of Segment Anything Model for interactive histopathology image segmentation (paper) | Code |
| 202310 | X. Chen et al. | SAM-OCTA: Prompting Segment-Anything for OCTA Image Segmentation (paper) | Code |
| 202310 | M. Peivandi et al. | Empirical Evaluation of the Segment Anything Model (SAM) for Brain Tumor Segmentation (paper) | None |
| 202310 | H. Ravishankar et al. | SonoSAM - Segment Anything on Ultrasound Images (paper) | None |
| 202310 | A. Ranem et al. | Exploring SAM Ablations for Enhancing Medical Segmentation in Radiology and Pathology (paper) | None |
| 202310 | S. Pandey et al. | Comprehensive Multimodal Segmentation in Medical Imaging: Combining YOLOv8 with SAM and HQ-SAM Models (paper) | None |
| 202309 | Y. Li et al. | nnSAM: Plug-and-play Segment Anything Model Improves nnUNet Performance (paper) | Code |
| 202309 | Y. Zhao et al. | MFS Enhanced SAM: Achieving Superior Performance in Bimodal Few-shot Segmentation (paper) | Code |
| 202309 | C. Wang et al. | SAM-OCTA: A Fine-Tuning Strategy for Applying Foundation Model to OCTA Image Segmentation Tasks (paper) | Code |
| 202309 | Y. Zhang et al. | 3D-U-SAM Network For Few-shot Tooth Segmentation in CBCT Images (paper) | None |
| 202309 | CJ. Chao et al. | Comparative Eminence: Foundation versus Domain-Specific Model for Cardiac Ultrasound Segmentation (paper) | None |
| 202309 | H. Ning et al. | An Accurate and Efficient Neural Network for OCTA Vessel Segmentation and a New Dataset (paper) | Code |
| 202309 | C. Chen et al. | MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation (paper) | Code |
| 202309 | P. Zhang and Y. Wang | Segment Anything Model for Brain Tumor Segmentation (paper) | None |
| 202309 | B. Fazekas et al. | Adapting Segment Anything Model (SAM) for Retinal OCT (paper) | None |
| 202309 | X. Lin et al. | SAMUS: Adapting Segment Anything Model for Clinically-Friendly and Generalizable Ultrasound Image Segmentation (paper) | Code |
| 202309 | X. Xing et al. | SegmentAnything helps microscopy images based automatic and quantitative organoid detection and analysis (paper) | Code |
| 202309 | NT. Bui et al. | SAM3D: Segment Anything Model in Volumetric Medical Images (paper) | Code |
| 202308 | Y. Zhang et al. | Self-Sampling Meta SAM: Enhancing Few-shot Medical Image Segmentation with Meta-Learning (paper) | None |
| 202308 | J. Cheng et al. | SAM-Med2D (paper) | Code |
| 202308 | C. Li et al. | Auto-Prompting SAM for Mobile Friendly 3D Medical Image Segmentation (paper) | None |
| 202308 | W. Feng et al. | Cheap Lunch for Medical Image Segmentation by Fine-tuning SAM on Few Exemplars (paper) | None |
| 202308 | Y. Zhang et al. | SamDSK: Combining Segment Anything Model with Domain-Specific Knowledge for Semi-Supervised Learning in Medical Image Segmentation (paper) | None |
| 202308 | A. Lou et al. | SAMSNeRF: Segment Anything Model (SAM) Guides Dynamic Surgical Scene Reconstruction by Neural Radiance Field (NeRF) (paper) | Code |
| 202308 | A. Archit et al. | Segment Anything for Microscopy (paper) | Code |
| 202308 | X. Yao et al. | False Negative/Positive Control for SAM on Noisy Medical Images (paper) | Code |
| 202308 | B. Fazekas et al. | SAMedOCT: Adapting Segment Anything Model (SAM) for Retinal OCT (paper) | None |
| 202308 | W. Yue et al. | SurgicalSAM: Efficient Class Promptable Surgical Instrument Segmentation (paper) | Code |
| 202308 | H. Zhang et al. | CARE: A Large Scale CT Image Dataset and Clinical Applicable Benchmark Model for Rectal Cancer Segmentation (paper) | Code |
| 202308 | Q. Wu et al. | Self-Prompting Large Vision Models for Few-Shot Medical Image Segmentation (paper) | Code |
| 202308 | A. Wang et al. | SAM Meets Robotic Surgery: An Empirical Study on Generalization, Robustness and Adaptation (paper) | None |
| 202308 | D. Shin et al. | CEmb-SAM: Segment Anything Model with Condition Embedding for Joint Learning from Heterogeneous Datasets (paper) | None |
| 202308 | R. Biswas | Polyp-SAM++: Can A Text Guided SAM Perform Better for Polyp Segmentation? (paper) | Code |
| 202308 | S. Cao et al. | TongueSAM: An Universal Tongue Segmentation Model Based on SAM with Zero-Shot (paper) | Code |
| 202308 | X. Li et al. | Leverage Weakly Annotation to Pixel-wise Annotation via Zero-shot Segment Anything Model for Molecular-empowered Learning (paper) | None |
| 202308 | JN. Paranjape et al. | AdaptiveSAM: Towards Efficient Tuning of SAM for Surgical Scene Segmentation (paper) | Code |
| 202308 | Z. Huang et al. | Push the Boundary of SAM: A Pseudo-label Correction Framework for Medical Segmentation (paper) | None |
| 202307 | J. Zhang et al. | SAM-Path: A Segment Anything Model for Semantic Segmentation in Digital Pathology (paper) | Code |
| 202307 | MS. Hossain et al. | Robust HER2 Grading of Breast Cancer Patients using Zero-shot Segment Anything Model (SAM) (paper) | None |
| 202307 | C. Wang et al. | SAM^Med^: A medical image annotation framework based on large vision model (paper) | None |
| 202307 | G. Deng et al. | SAM-U: Multi-box prompts triggered uncertainty estimation for reliable SAM in medical image (paper) | None |
| 202307 | H. Kim et al. | Empirical Analysis of a Segmentation Foundation Model in Prostate Imaging (paper) | None |
| 202307 | X. Shi et al. | Cross-modality Attention Adapter: A Glioma Segmentation Fine-tuning Method for SAM Using Multimodal Brain MR Images (paper) | None |
| 202307 | C. Cui et al. | All-in-SAM: from Weak Annotation to Pixel-wise Nuclei Segmentation with Prompt-based Finetuning (paper) | None |
| 202306 | E. Kellener et al. | Utilizing Segment Anything Model for Assessing Localization of Grad-CAM in Medical Imaging (paper) | None |
| 202306 | F. Hörst et al. | CellViT: Vision Transformers for Precise Cell Segmentation and Classification (paper) | Code |
| 202306 | W. Lei et al. | MedLSAM: Localize and Segment Anything Model for 3D Medical Images (paper) | Code |
| 202306 | X. Hu et al. | How to Efficiently Adapt Large Segmentation Model (SAM) to Medical Images (paper) | Code |
| 202306 | S. Gong et al. | 3DSAM-adapter: Holistic Adaptation of SAM from 2D to 3D for Promptable Medical Image Segmentation (paper) | Code |
| 202306 | DMH. Nguyen et al. | LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching (paper) | Code |
| 202306 | S. Chai et al. | Ladder Fine-tuning approach for SAM integrating complementary network (paper) | Code |
| 202306 | L. Zhang et al. | Segment Anything Model (SAM) for Radiation Oncology (paper) | None |
| 202306 | G. Ning et al. | The potential of 'Segment Anything' (SAM) for universal intelligent ultrasound image guidance (paper) | None |
| 202306 | C. Shen et al. | Temporally-Extended Prompts Optimization for SAM in Interactive Medical Image Segmentation (paper) | None |
| 202306 | T. Shaharabany et al. | AutoSAM: Adapting SAM to Medical Images by Overloading the Prompt Encoder (paper) | None |
| 202306 | Y. Gao et al. | DeSAM: Decoupling Segment Anything Model for Generalizable Medical Image Segmentation (paper) | Code |
| 202305 | D. Lee et al. | IAMSAM: Image-based Analysis of Molecular signatures using the Segment-Anything Model (paper) | Code |
| 202305 | M. Hu et al. | BreastSAM: A Study of Segment Anything Model for Breast Tumor Detection in Ultrasound Images (paper) | None |
| 202305 | J. Wu | PromptUNet: Toward Interactive Medical Image Segmentation (paper) | Code |
| 202305 | Y. Li et al. | Polyp-SAM: Transfer SAM for Polyp Segmentation (paper) | Code |
| 202305 | C. Mattjie et al. | Exploring the Zero-Shot Capabilities of the Segment Anything Model (SAM) in 2D Medical Imaging: A Comprehensive Evaluation and Practical Guideline (paper) | None |
| 202305 | D. Cheng et al. | SAM on Medical Images: A Comprehensive Study on Three Prompt Modes (paper) | None |
| 202304 | A. Wang et al. | SAM Meets Robotic Surgery: An Empirical Study in Robustness Perspective (paper) | None |
| 202304 | Y. Huang et al. | Segment Anything Model for Medical Images? (paper) | None |
| 202304 | M. Hu et al. | SkinSAM: Empowering Skin Cancer Segmentation with Segment Anything Model (paper) | None |
| 202304 | B. Wang et al. | GazeSAM: What You See is What You Segment (paper) | Code |
| 202304 | K. Zhang and D. Liu | Customized Segment Anything Model for Medical Image Segmentation (paper) | Code |
| 202304 | Z. Qiu et al. | Learnable Ophthalmology SAM (paper) | Code |
| 202304 | P. Shi et al. | Generalist Vision Foundation Models for Medical Imaging: A Case Study of Segment Anything Model on Zero-Shot Medical Segmentation (paper) | None |
| 202304 | J. Wu et al. | Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation (paper) | Code |
| 202304 | J. Ma and B. Wang | Segment Anything in Medical Images (paper) | Code |
| 202304 | Y. Zhang et al. | Input Augmentation with SAM: Boosting Medical Image Segmentation with Segmentation Foundation Model (paper) | None |
| 202304 | MA. Mazurowski et al. | Segment Anything Model for Medical Image Analysis: an Experimental Study (paper) | Code |
| 202304 | S. He et al. | Accuracy of Segment-Anything Model (SAM) in medical image segmentation tasks (paper) | None |
| 202304 | T. Chen et al. | SAM Fails to Segment Anything? – SAM-Adapter: Adapting SAM in Underperformed Scenes: Camouflage, Shadow, Medical Image Segmentation, and More (paper) | Code |
| 202304 | C. Hu and X. Li | When SAM Meets Medical Images: An Investigation of Segment Anything Model (SAM) on Multi-phase Liver Tumor Segmentation (paper) | None |
| 202304 | F. Putz et al. | The “Segment Anything” foundation model achieves favorable brain tumor autosegmentation accuracy on MRI to support radiotherapy treatment planning (paper) | None |
| 202304 | T. Zhou et al. | Can SAM Segment Polyps? (paper) | Code |
| 202304 | Y. Liu et al. | SAMM (Segment Any Medical Model): A 3D Slicer Integration to SAM (paper) | Code |
| 202304 | S. Roy et al. | SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model (paper) | None |
| 202304 | S. Mohapatra et al. | SAM vs BET: A Comparative Study for Brain Extraction and Segmentation of Magnetic Resonance Images using Deep Learning (paper) | None |
| 202304 | R. Deng et al. | Segment Anything Model (SAM) for Digital Pathology: Assess Zero-shot Segmentation on Whole Slide Imaging (paper) | None |

Large-Scale Datasets for Developing Medical Foundation Models. <div id="dataset"></div>

| Date | Authors | Title | Dataset |
| ---- | ------- | ----- | ------- |
| 202404 | F. Bai et al. | M3D: Advancing 3D Medical Image Analysis with Multi-Modal Large Language Models (paper) | Link |
| 202311 | J. Ye et al. | SA-Med2D-20M Dataset: Segment Anything in 2D Medical Imaging with 20 Million masks (paper) | Link |

CVPR2024 Workshop: Segment Anything in Medical Images on Laptop. <div id="cvpr24"></div>

(Challenge Website) (Papers)

The field of medical image segmentation is currently experiencing a paradigm shift, moving from specialized models designed for individual tasks to foundation models capable of managing a multitude of segmentation scenarios. This challenge seeks universal promptable medical image segmentation models that are deployable on laptops or other edge devices without reliance on GPUs.