IMIS-Benchmark

This repository hosts the code and resources for the paper "Interactive Medical Image Segmentation: A Benchmark Dataset and Baseline".

[Homepage] [Paper] [Demo] [Model] [Data]

We collected 110 medical image datasets from various sources and, through a rigorous and standardized data processing pipeline, generated the IMed-361M dataset, which contains over 361 million masks. Using this dataset, we developed the IMIS baseline network.

<p align="center"> <img width="1000" alt="image" src="https://github.com/uni-medical/IMIS-Bench/blob/main/assets/fig1.png"> </p>

πŸ‘‰ IMIS Benchmark Dataset: IMed-361M

The IMed-361M dataset is the largest publicly available multimodal interactive medical image segmentation dataset, featuring 6.4 million images, 361 million masks (an average of 56 per image), 14 imaging modalities, and 204 segmentation targets. It ensures diversity across six anatomical groups, fine-grained annotations (most masks cover less than 2% of the image area), and broad applicability, with 83% of images at resolutions between 256×256 and 1024×1024. IMed-361M offers 14.4 times more masks than MedTrinity-25M, significantly surpassing other datasets in scale and mask quantity.

<p align="center"><img width="800" alt="image" src="https://github.com/uni-medical/IMIS-Bench/blob/main/assets/fig2.png"></p>

πŸ‘‰ IMIS Network

We train the IMIS network by simulating continuous interactive segmentation: over a sequence of steps, the network receives new prompts and refines its previous prediction, mimicking a real click-by-click session.

<p align="center"><img width="800" alt="image" src="https://github.com/uni-medical/IMIS-Bench/blob/main/assets/fig4.png"></p>

πŸ‘‰ Installation

```bash
git clone https://github.com/uni-medical/IMIS-Bench.git
```

πŸ‘‰ Environment Setup

The recommended operating environment is as follows:

| Package | Version | Package | Version |
| --- | --- | --- | --- |
| CUDA | 11.8 | timm | 0.9.16 |
| Huggingface-Hub | 0.23.4 | transformers | 4.39.3 |
| nibabel | 5.2.1 | monai | 0.9.1 |
| Python | 3.8.19 | opencv-python | 4.10.0 |
| PyTorch | 2.2.1 | torchvision | 0.17.2 |
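One way to assemble a matching environment (version pins follow the table; exact builds, such as the opencv-python suffix, may vary by platform):

```bash
conda create -n imis python=3.8 -y
conda activate imis
# PyTorch 2.2.1 + torchvision 0.17.2 built against CUDA 11.8
pip install torch==2.2.1 torchvision==0.17.2 --index-url https://download.pytorch.org/whl/cu118
pip install timm==0.9.16 transformers==4.39.3 huggingface-hub==0.23.4 \
            nibabel==5.2.1 monai==0.9.1 opencv-python==4.10.0.84
```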

πŸ‘‰ Datasets

IMed-361M was created by preprocessing a combination of private and publicly available medical image segmentation datasets. The full dataset will be made available on HuggingFace; for details on the source datasets, please refer to our paper. To help you get started quickly, we provide a small sample from IMed-361M under IMIS-Bench/dataset, organized as shown below (a loading sketch follows the tree).

```
dataset
├── BTCV
│   ├── image
│   │   ├── xxx.png
│   │   ├── ....
│   │   └── xxx.png
│   ├── label
│   │   ├── xxx.npz
│   │   ├── ....
│   │   └── xxx.npz
│   ├── imask
│   │   ├── xxx.npy
│   │   ├── ....
│   │   └── xxx.npy
│   └── dataset.json
```

πŸ‘‰ Model Checkpoints

We host our model checkpoints on Baidu Netdisk: https://pan.baidu.com/s/1eCuHs3qhd1lyVGqUOdaeFw?pwd=r1pg (password: r1pg).

Please download the checkpoints and place them under ckpt/.
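Loading then works the usual PyTorch way; the filename below is only a placeholder for whatever file the download contains:

```python
import torch

# "IMISNet.pth" is a placeholder filename, not the actual checkpoint name
state_dict = torch.load("ckpt/IMISNet.pth", map_location="cpu")
print(list(state_dict)[:5])  # peek at the first few keys as a sanity check
```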

πŸ‘‰ Train IMIS-Net

To train the IMIS-Net, run:

```bash
cd IMIS-Bench
python train.py
```

πŸ‘‰ Evaluate IMIS-Net

To evaluate the IMIS-Net, run:

```bash
python test.py
```

πŸ‘‰ Citation

Please cite our paper if you use the code, model, or data.

```bibtex
@article{cheng2024interactivemedicalimagesegmentation,
  title={Interactive Medical Image Segmentation: A Benchmark Dataset and Baseline},
  author={Junlong Cheng and Bin Fu and Jin Ye and Guoan Wang and Tianbin Li and Haoyu Wang and Ruoyu Li and He Yao and Junren Chen and JingWen Li and Yanzhou Su and Min Zhu and Junjun He},
  year={2024},
  eprint={2411.12814},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2411.12814},
}
```