🐍MedMamba: Vision Mamba for Medical Image Classification🐍

This is the official code repository for "MedMamba: Vision Mamba for Medical Image Classification" (arXiv:2403.03849).

🎇 Good News 🎇

The pre-trained weights of MedMamba, as well as our private datasets, are now available for download (password: 2024). If you encounter any difficulties during the download process, please do not hesitate to contact me. 💖

📝Work Summary📝

Since the era of deep learning began, convolutional neural networks (CNNs) and vision transformers (ViTs) have been extensively studied and widely used in medical image classification tasks. Unfortunately, the limited ability of CNNs to model long-range dependencies constrains their classification performance. ViTs, in contrast, are hampered by the quadratic computational complexity of their self-attention mechanism, which makes them difficult to deploy in real-world settings with limited computational resources. Recent studies have shown that state space models (SSMs), represented by Mamba, can effectively model long-range dependencies while maintaining linear computational complexity. Inspired by this, we propose MedMamba, the first vision Mamba for generalized medical image classification. Concretely, we introduce a novel hybrid basic block named SS-Conv-SSM, which integrates convolutional layers for extracting local features with the ability of SSMs to capture long-range dependencies, aiming to model medical images from different imaging modalities efficiently. By employing a grouped convolution strategy and a channel-shuffle operation, MedMamba achieves fewer model parameters and a lower computational burden without sacrificing accuracy. To demonstrate the potential of MedMamba, we conducted extensive experiments on 16 datasets covering ten imaging modalities and 411,007 images. Experimental results show that MedMamba achieves competitive performance in classifying various medical images compared with state-of-the-art methods. Our work aims to establish a new baseline for medical image classification and provide valuable insights for developing more powerful SSM-based artificial intelligence algorithms and application systems in the medical field.
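
To make the block structure concrete, here is a minimal PyTorch sketch of the channel-split / dual-branch / channel-shuffle pattern described above. It is not the repository's implementation: the `SS2DStandIn` module is a hypothetical placeholder for the real 2D-selective-scan (SSM) branch, the layer sizes are illustrative, and the residual connection is an assumption.

```python
import torch
import torch.nn as nn

class SS2DStandIn(nn.Module):
    """Hypothetical stand-in for the SSM branch (2D selective scan).
    The real MedMamba branch models long-range dependencies with a state
    space model; this single token-wise linear layer only keeps the sketch
    runnable and shape-compatible."""
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):              # x: (B, H, W, C)
        return self.proj(self.norm(x))

def channel_shuffle(x, groups=2):      # x: (B, H, W, C), channels-last
    b, h, w, c = x.shape
    x = x.reshape(b, h, w, groups, c // groups).transpose(3, 4)
    return x.reshape(b, h, w, c)

class SSConvSSMSketch(nn.Module):
    """Split channels in two: one half goes through a conv branch (local
    features), the other through the SSM stand-in (long-range), then the
    halves are concatenated and channel-shuffled so they exchange
    information across stacked blocks."""
    def __init__(self, dim):
        super().__init__()
        half = dim // 2
        self.conv_branch = nn.Sequential(
            nn.BatchNorm2d(half),
            nn.Conv2d(half, half, kernel_size=3, padding=1, groups=half),  # grouped (depthwise) conv
            nn.Conv2d(half, half, kernel_size=1),
            nn.SiLU(),
        )
        self.ssm_branch = SS2DStandIn(half)

    def forward(self, x):              # x: (B, H, W, C)
        left, right = x.chunk(2, dim=-1)
        left = self.conv_branch(left.permute(0, 3, 1, 2)).permute(0, 2, 3, 1)
        right = self.ssm_branch(right)
        out = torch.cat([left, right], dim=-1) + x   # residual connection (assumed)
        return channel_shuffle(out, groups=2)

if __name__ == "__main__":
    y = SSConvSSMSketch(dim=96)(torch.randn(1, 56, 56, 96))
    print(y.shape)  # torch.Size([1, 56, 56, 96])
```

The channel shuffle after concatenation is what lets the conv half and the SSM half mix across stacked blocks, which is the role the grouped-convolution-plus-shuffle strategy plays in keeping the parameter count and compute low.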

📌Installation📌

📜Other requirements📜:
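
As a minimal sketch (not the repository's official requirement list), the snippet below only checks that a CUDA-enabled PyTorch build is present along with the Mamba-related packages (`mamba_ssm`, `causal_conv1d`) that Mamba-style vision models commonly depend on; treat these package names as assumptions rather than confirmed requirements.

```python
# Environment sanity check (sketch; the package list is an assumption,
# not the repository's official requirements).
import importlib.util
import torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())

for pkg in ("torchvision", "mamba_ssm", "causal_conv1d"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'found' if found else 'missing'}")
```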

🔥The classification performance of MedMamba🔥

Since MedMamba is suitable for most medical images, you can try applying it to more advanced tasks (such as multi-label classification, medical image segmentation, and medical object detection). In addition, we are testing MedMamba with different parameter sizes. A sketch of how the reported metrics can be computed follows the table below.

| Dataset | Task | Precision (%) | Sensitivity (%) | Specificity (%) | F1-score (%) | Overall Accuracy (%) | AUC | Model Weight |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PAD-UFES-20 | Multi-Class (6) | 38.4 | 36.9 | 89.9 | 35.8 | 58.8 | 0.808 | weights |
| Cervical-US | Multi-Class (4) | 81.2 | 76.2 | 94.9 | 78.0 | 86.2 | 0.952 | weights |
| Fetal-Planes-DB | Multi-Class (6) | 92.2 | 93.9 | 98.7 | 93.0 | 94.0 | 0.993 | weights |
| CPN X-ray | Multi-Class (3) | 97.2 | 97.2 | 98.5 | 97.2 | 97.1 | 0.995 | weights |
| Kvasir | Multi-Class (8) | 78.7 | 78.8 | 97.0 | 78.6 | 78.8 | 0.973 | weights |
| Otoscopy2024 | Multi-Class (9) | 86.0 | 84.4 | 98.6 | 85.2 | 89.5 | 0.989 | weights |
| PathMNIST | Multi-Class (9) | 94.0 | 94.7 | 99.4 | 94.2 | 95.3 | 0.997 | weights |
| DermaMNIST | Multi-Class (7) | 67.3 | 50.1 | 93.6 | 51.6 | 77.9 | 0.917 | weights |
| OCTMNIST | Multi-Class (4) | 92.8 | 91.8 | 97.3 | 91.8 | 91.8 | 0.992 | weights |
| PneumoniaMNIST | Multi-Class (2) | 92.1 | 87.0 | 87.0 | 88.6 | 89.9 | 0.965 | weights |
| RetinaMNIST | Multi-Class (5) | 35.9 | 37.7 | 87.5 | 36.1 | 54.3 | 0.747 | weights |
| BreastMNIST | Multi-Class (2) | 91.6 | 72.6 | 72.6 | 76.6 | 85.3 | 0.825 | weights |
| BloodMNIST | Multi-Class (8) | 97.7 | 97.7 | 99.7 | 97.7 | 97.8 | 0.999 | weights |
| OrganAMNIST | Multi-Class (11) | 94.4 | 93.3 | 99.5 | 93.8 | 94.6 | 0.998 | weights |
| OrganCMNIST | Multi-Class (11) | 92.2 | 91.6 | 99.3 | 91.7 | 92.7 | 0.997 | weights |
| OrganSMNIST | Multi-Class (11) | 78.0 | 77.4 | 98.2 | 76.3 | 81.9 | 0.982 | weights |
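
As a minimal sketch (not the repository's evaluation script), the following shows one common way to compute macro-averaged precision, sensitivity, specificity, F1-score, overall accuracy, and macro AUC for a multi-class classifier with scikit-learn and NumPy; the arrays `y_true` and `y_score` below are hypothetical placeholders.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

def classification_metrics(y_true, y_score):
    """y_true: (N,) integer labels; y_score: (N, C) class probabilities."""
    y_pred = y_score.argmax(axis=1)
    n_classes = y_score.shape[1]

    # Specificity is not built into scikit-learn; compute it per class as
    # TN / (TN + FP) and macro-average it, mirroring the other metrics.
    specificities = []
    for c in range(n_classes):
        tn = np.sum((y_true != c) & (y_pred != c))
        fp = np.sum((y_true != c) & (y_pred == c))
        specificities.append(tn / (tn + fp + 1e-12))

    return {
        "precision":        100 * precision_score(y_true, y_pred, average="macro"),
        "sensitivity":      100 * recall_score(y_true, y_pred, average="macro"),
        "specificity":      100 * float(np.mean(specificities)),
        "f1_score":         100 * f1_score(y_true, y_pred, average="macro"),
        "overall_accuracy": 100 * accuracy_score(y_true, y_pred),
        "auc":              roc_auc_score(y_true, y_score, multi_class="ovr", average="macro"),
    }

# Hypothetical usage with random predictions for a 4-class problem
rng = np.random.default_rng(0)
y_true = rng.integers(0, 4, size=200)
y_score = rng.dirichlet(np.ones(4), size=200)
print(classification_metrics(y_true, y_score))
```

Macro averaging treats every class equally, which matters for imbalanced sets such as PAD-UFES-20 and RetinaMNIST; the averaging convention behind the numbers in the table is not stated here, so treat this as one plausible choice rather than the definitive protocol.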

💞Citation💞

If you find this repository useful, please consider citing the following reference. We would greatly appreciate it.

@article{yue2024medmamba,
  title={MedMamba: Vision Mamba for Medical Image Classification},
  author={Yue, Yubiao and Li, Zhenzhang},
  journal={arXiv preprint arXiv:2403.03849},
  year={2024}
}

✨Acknowledgments✨

We thank the authors of VMamba, Swin-UNet, and VM-UNet for their open-source code.

📊Datasets📊

Kvasir

The data was collected using endoscopic equipment at Vestre Viken Health Trust (VV) in Norway. VV consists of four hospitals and provides health care to 470,000 people. One of these hospitals (Bærum Hospital) has a large gastroenterology department, from which the training data have been collected and will continue to be provided, making the dataset larger in the future. Furthermore, the images are carefully annotated by one or more medical experts from VV and the Cancer Registry of Norway (CRN). The CRN provides new knowledge about cancer through research on cancer. It is part of the South-Eastern Norway Regional Health Authority and is organized as an independent institution under Oslo University Hospital Trust. The CRN is responsible for the national cancer screening programmes, whose goal is to prevent cancer deaths by discovering cancers or pre-cancerous lesions as early as possible. Kvasir Dataset URL

Cervical lymph node lesion ultrasound images (Cervical-US)

CLNLUS is a private dataset containing 3392 cervical lymph node ultrasound images. Specifically, these images were obtained from 480 patients in the Ultrasound Department of the Second Affiliated Hospital of Guangzhou Medical University. The entire dataset is divided into four categories by clinical experts based on pathological biopsy results: normal lymph nodes (normal), benign lymph nodes (benign), malignant primary lymph nodes (primary), and malignant metastatic lymph nodes (metastatic). The numbers of normal, benign, primary, and metastatic images are 1217, 601, 236, and 1338, respectively.
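
Given the pronounced class imbalance above (236 primary images versus 1338 metastatic), one common mitigation is to weight the loss by inverse class frequency. A minimal sketch, using only the class counts quoted above; this is not part of the released code.

```python
import torch
import torch.nn as nn

# Class counts quoted above for Cervical-US: normal, benign, primary, metastatic
counts = torch.tensor([1217.0, 601.0, 236.0, 1338.0])

# Inverse-frequency weights, normalized so their mean is 1.0
weights = counts.sum() / (len(counts) * counts)
print(weights)  # the rare "primary" class receives the largest weight

# The weights can then be passed to a standard cross-entropy loss
criterion = nn.CrossEntropyLoss(weight=weights)
```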

FETAL_PLANES_DB: Common maternal-fetal ultrasound images (Fetal-Planes-DB)

A large dataset of routinely acquired maternal-fetal screening ultrasound images collected from two different hospitals by several operators and ultrasound machines. All images were manually labeled by an expert maternal-fetal clinician. Images are divided into six classes: four of the most widely used fetal anatomical planes (Abdomen, Brain, Femur, and Thorax), the mother's cervix (widely used for prematurity screening), and a general category covering any other, less common image plane. Fetal brain images are further categorized into the three most common fetal brain planes (Trans-thalamic, Trans-cerebellum, Trans-ventricular) to assess fine-grained categorization performance. Based on the dataset's metadata, we categorize it into six classes with the following image counts: Fetal abdomen (711 images), Fetal brain (3092 images), Fetal femur (1040 images), Fetal thorax (1718 images), Maternal cervix (1626 images), and Other (4213 images). Dataset URL

Covid19-Pneumonia-Normal Chest X-Ray Images (CPN X-ray)

Shastri et al. collected a large number of publicly available and domain-recognized X-ray images from the Internet, resulting in the CPN-CX dataset. CPN-CX is divided into three categories, namely COVID, NORMAL, and PNEUMONIA. All images are preprocessed and resized to 256x256 in PNG format. The dataset helps researchers and the medical community detect and classify COVID-19 and pneumonia from chest X-ray images using deep learning. Dataset URL
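
Since the CPN-CX images ship as 256x256 PNGs, a typical preprocessing pipeline for a classifier might look like the torchvision sketch below; the 224x224 input size and the ImageNet normalization statistics are assumptions, not values specified by the dataset authors.

```python
from torchvision import transforms

# Sketch of a preprocessing pipeline for the 256x256 CPN-CX PNGs.
# The 224x224 crop and ImageNet normalization statistics are assumptions.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

eval_transform = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```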

Large-scale otoscopy dataset (Otoscopy2024)

This dataset is a supplement to our previous work. In previous publications, we collected 20542 endoscopic images of ear infections. On this basis, we added a further 2039 images from medical institutions, and we name the resulting set of 22581 ear endoscopic images Otoscopy2024. Otoscopy2024 is a large dataset specifically designed for ear disease classification, consisting of nine categories: Cholesteatoma of middle ear (548 images), Chronic suppurative otitis media (4021 images), External auditory canal bleeding (451 images), Impacted cerumen (6058 images), Normal eardrum (4685 images), Otomycosis external (2507 images), Secretory otitis media (2720 images), Tympanic membrane calcification (1152 images), and Acute otitis media (439 images).
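
If the nine categories above are arranged one folder per class on disk (a layout this sketch assumes; the released data may be organized differently), they can be loaded directly with torchvision's ImageFolder:

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Assumes a hypothetical directory layout with one sub-folder per class, e.g.
#   Otoscopy2024/train/Acute_otitis_media/*.jpg
#   Otoscopy2024/train/Impacted_cerumen/*.jpg
#   ...
# In practice, combine this with a preprocessing pipeline like the one
# sketched for CPN-CX above.
train_set = datasets.ImageFolder("Otoscopy2024/train",
                                 transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

print(train_set.classes)  # the nine category names inferred from folder names
```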