<div align="center">

Subscribe to our mailing list: https://groups.google.com/u/2/g/bodymaps

</div>

We developed a suite of pre-trained 3D models, named SuPreM, that combines the strengths of large-scale datasets and per-voxel annotations and transfers well across a range of 3D medical imaging tasks.

Paper

<b>AbdomenAtlas: A Large-Scale, Detailed-Annotated, & Multi-Center Dataset for Efficient Transfer Learning and Open Algorithmic Benchmarking</b> <br/> Wenxuan Li, Chongyu Qu, Xiaoxi Chen, Pedro R. A. S. Bassi, Yijia Shi, Yuxiang Lai, Qian Yu, Huimin Xue, Yixiong Chen, Xiaorui Lin, Yutong Tang, Yining Cao, Haoqi Han, Zheyuan Zhang, Jiawei Liu, Tiezheng Zhang, Yujiu Ma, Jincheng Wang, Guang Zhang, Alan Yuille, Zongwei Zhou* <br/> Johns Hopkins University <br/> Medical Image Analysis, 2024 <br/> <a href='https://www.zongweiz.com/dataset'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='https://www.cs.jhu.edu/~alanlab/Pubs24/li2024abdomenatlas.pdf'><img src='https://img.shields.io/badge/Paper-PDF-purple'></a>

<b>How Well Do Supervised 3D Models Transfer to Medical Imaging Tasks?</b> <br/> Wenxuan Li, Alan Yuille, and Zongwei Zhou<sup>*</sup> <br/> Johns Hopkins University <br/> International Conference on Learning Representations (ICLR) 2024 (oral; top 1.2%) <br/> <a href='https://www.zongweiz.com/dataset'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='https://www.cs.jhu.edu/~alanlab/Pubs23/li2023suprem.pdf'><img src='https://img.shields.io/badge/Paper-PDF-purple'></a> <a href='document/promotion_slides.pdf'><img src='https://img.shields.io/badge/Slides-PDF-orange'></a> <a href='document/dom_wse_poster.pdf'><img src='https://img.shields.io/badge/Poster-PDF-blue'></a> YouTube <a href='https://www.cs.jhu.edu/news/ai-and-radiologists-unite-to-map-the-abdomen/'><img src='https://img.shields.io/badge/WSE-News-yellow'></a>

<b>Transitioning to Fully-Supervised Pre-Training with Large-Scale Radiology ImageNet for Improved AI Transferability in Three-Dimensional Medical Segmentation</b> <br/> Wenxuan Li<sup>1</sup>, Junfei Xiao<sup>1</sup>, Jie Liu<sup>2</sup>, Yucheng Tang<sup>3</sup>, Alan Yuille<sup>1</sup>, and Zongwei Zhou<sup>1,*</sup> <br/> <sup>1</sup>Johns Hopkins University <br/> <sup>2</sup>City University of Hong Kong <br/> <sup>3</sup>NVIDIA <br/> Radiological Society of North America (RSNA) 2023 <br/> <a href='document/rsna_abstract.pdf'><img src='https://img.shields.io/badge/Abstract-PDF-purple'></a> <a href='document/rsna_slides.pdf'><img src='https://img.shields.io/badge/Slides-PDF-orange'></a>

★ We maintain a document of Frequently Asked Questions.

★ We maintain a paper list for Awesome Medical SAM.

★ We maintain a paper list for Awesome Medical Pre-Training.

★ We maintain a paper list for Awesome Medical Segmentation Backbones.

An Extensive Dataset: AbdomenAtlas 1.1

The release of AbdomenAtlas 1.0 can be found at https://huggingface.co/datasets/AbdomenAtlas/AbdomenAtlas1.0Mini

AbdomenAtlas 1.1 is an extensive dataset of 9,262 CT volumes with per-voxel annotation of 25 organs and pseudo annotations for seven types of tumors, enabling us to finally perform supervised pre-training of AI models at scale. Based on AbdomenAtlas 1.1, we also provide a suite of pre-trained models comprising several widely recognized AI models.

<p align="center"><img width="100%" src="document/fig_benchmark.png" /></p>

A preliminary benchmark shows that supervised pre-training is the preferred choice over self-supervised pre-training in terms of both performance and efficiency.

We anticipate that the release of large, annotated datasets (AbdomenAtlas 1.1) and the suite of pre-trained models (SuPreM) will bolster collaborative endeavors in establishing Foundation Datasets and Foundation Models for the broader applications of 3D volumetric medical image analysis.

The AbdomenAtlas 1.1 dataset is organized as follows:

AbdomenAtlas1.1
    ├── BDMAP_00000001
    │   ├── ct.nii.gz
    │   └── segmentations
    │       ├── aorta.nii.gz
    │       ├── gall_bladder.nii.gz
    │       ├── kidney_left.nii.gz
    │       ├── kidney_right.nii.gz
    │       ├── liver.nii.gz
    │       ├── pancreas.nii.gz
    │       ├── postcava.nii.gz
    │       ├── spleen.nii.gz
    │       ├── stomach.nii.gz
    │       └── ...
    ├── BDMAP_00000002
    │   ├── ct.nii.gz
    │   └── segmentations
    │       ├── aorta.nii.gz
    │       ├── gall_bladder.nii.gz
    │       ├── kidney_left.nii.gz
    │       ├── kidney_right.nii.gz
    │       ├── liver.nii.gz
    │       ├── pancreas.nii.gz
    │       ├── postcava.nii.gz
    │       ├── spleen.nii.gz
    │       ├── stomach.nii.gz
    │       └── ...
    ├── BDMAP_00000003
    │   ├── ct.nii.gz
    │   └── segmentations
    │       ├── aorta.nii.gz
    │       ├── gall_bladder.nii.gz
    │       ├── kidney_left.nii.gz
    │       ├── kidney_right.nii.gz
    │       ├── liver.nii.gz
    │       ├── pancreas.nii.gz
    │       ├── postcava.nii.gz
    │       ├── spleen.nii.gz
    │       ├── stomach.nii.gz
    │       └── ...
    ...
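As a minimal sketch of how a single case could be read with this layout, assuming the dataset sits under `./AbdomenAtlas1.1` and `nibabel` is installed (the case ID and organ below are examples only):

```python
import os
import nibabel as nib
import numpy as np

# Hypothetical paths for illustration; adjust to where AbdomenAtlas 1.1 is stored.
root = "./AbdomenAtlas1.1"
case_id = "BDMAP_00000001"

# Load the CT volume (stored as a NIfTI file).
ct = nib.load(os.path.join(root, case_id, "ct.nii.gz"))
ct_array = ct.get_fdata()                      # 3D array of Hounsfield units
print("CT shape:", ct_array.shape, "spacing:", ct.header.get_zooms())

# Load one per-voxel organ mask; every organ is a separate binary NIfTI file.
liver = nib.load(os.path.join(root, case_id, "segmentations", "liver.nii.gz"))
liver_mask = liver.get_fdata() > 0             # boolean foreground mask
print("liver voxels:", int(np.count_nonzero(liver_mask)))
```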

A Suite of Pre-trained Models: SuPreM

The following is a list of supported model backbones in our collection. Click a backbone family to expand its table, download a specific backbone and its pre-trained weights (see the name and download columns), and save the weights into ./pretrained_weights/. More backbones will be added over time. If you would like us to pre-train a backbone on AbdomenAtlas 1.1 (9,262 annotated CT volumes), please suggest it in this channel.

<details> <summary style="margin-left: 25px;">Swin UNETR</summary> <div style="margin-left: 25px;">

| name | params | pre-trained data | resources | download |
|------|--------|------------------|-----------|----------|
| Tang et al. | 62.19M | 5050 CT | GitHub stars | weights |
| Jose Valanaras et al. | 62.19M | 50000 CT/MRI | GitHub stars | weights |
| Universal Model | 62.19M | 2100 CT | GitHub stars | weights |
| SuPreM | 62.19M | 2100 CT | ours :star2: | weights |

</div> </details> <details> <summary style="margin-left: 25px;">U-Net</summary> <div style="margin-left: 25px;">

| name | params | pre-trained data | resources | download |
|------|--------|------------------|-----------|----------|
| Models Genesis | 19.08M | 623 CT | GitHub stars | weights |
| UniMiSS | tiny | 5022 CT&MRI | GitHub stars | weights |
|  | small | 5022 CT&MRI |  | weights |
| Med3D | 85.75M | 1638 CT | GitHub stars | weights |
| DoDNet | 17.29M | 920 CT | GitHub stars | weights |
| Universal Model | 19.08M | 2100 CT | GitHub stars | weights |
| SuPreM | 19.08M | 2100 CT | ours :star2: | weights |

</div> </details> <details> <summary style="margin-left: 25px;">SegResNet</summary> <div style="margin-left: 25px;">

| name | params | pre-trained data | resources | download |
|------|--------|------------------|-----------|----------|
| SuPreM | 4.70M | 2100 CT | ours :star2: | weights |

</div> </details>
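As a rough sketch of how one of these checkpoints might be loaded, assuming PyTorch and MONAI are installed; the checkpoint file name, the number of output classes, and the checkpoint's internal key layout below are assumptions, so inspect the downloaded file and adjust accordingly:

```python
import torch
from monai.networks.nets import SwinUNETR

# Build the Swin UNETR backbone; feature_size=48 corresponds to the ~62M-parameter
# configuration listed above. Depending on the MONAI version, img_size may be
# required, deprecated, or removed.
model = SwinUNETR(img_size=(96, 96, 96), in_channels=1, out_channels=25, feature_size=48)

# Hypothetical file name; use whatever the download link actually provides.
ckpt = torch.load("./pretrained_weights/suprem_swinunetr.pth", map_location="cpu")

# Checkpoints are often wrapped (e.g. under a "state_dict" key) and may carry
# "module." prefixes from DataParallel; unwrap defensively.
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
state_dict = {k.replace("module.", ""): v for k, v in state_dict.items()}

# strict=False tolerates keys that do not match (e.g. a different output head).
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
```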

Examples of predicting organ masks on unseen CT volumes using our SuPreM: README
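Continuing the loading sketch above, inference on a new CT could look roughly like the following; the preprocessing (resampling, intensity normalization, axis orientation) must match what the linked README prescribes and is omitted here:

```python
import numpy as np
import nibabel as nib
import torch
from monai.inferers import sliding_window_inference

# Assumes `model` was built and loaded as in the sketch above.
model.eval()
ct = nib.load("./AbdomenAtlas1.1/BDMAP_00000001/ct.nii.gz")
volume = torch.from_numpy(ct.get_fdata()).float()[None, None]  # (1, 1, D, H, W)

with torch.no_grad():
    logits = sliding_window_inference(volume, roi_size=(96, 96, 96),
                                      sw_batch_size=1, predictor=model,
                                      overlap=0.5)

# Per-voxel class map, saved back into the CT's coordinate frame.
pred = torch.argmax(logits, dim=1)[0].numpy().astype(np.uint8)
nib.save(nib.Nifti1Image(pred, ct.affine), "prediction.nii.gz")
```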

Examples of fine-tuning our SuPreM on other downstream medical tasks are provided in this repository.

| task | dataset | document |
|------|---------|----------|
| organ, muscle, vertebrae, cardiac, rib segmentation | TotalSegmentator | README |
| pancreas tumor detection | JHH | README |
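As a minimal fine-tuning sketch under the same assumptions as above (MONAI's Swin UNETR, a hypothetical checkpoint file, and an encoder attribute name `swinViT` that should be verified against your MONAI version), with a made-up 5-class downstream task:

```python
import torch
from monai.losses import DiceCELoss
from monai.networks.nets import SwinUNETR

# Rebuild the backbone with a task-specific number of classes; mismatched head
# weights are simply skipped below because of strict=False.
num_classes = 5  # hypothetical downstream label count
model = SwinUNETR(img_size=(96, 96, 96), in_channels=1,
                  out_channels=num_classes, feature_size=48)

ckpt = torch.load("./pretrained_weights/suprem_swinunetr.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
model.load_state_dict({k.replace("module.", ""): v for k, v in state_dict.items()},
                      strict=False)

# Common recipe: small learning rate for the pre-trained encoder, larger for the
# freshly initialized decoder/head.
encoder_params = list(model.swinViT.parameters())
encoder_ids = {id(p) for p in encoder_params}
other_params = [p for p in model.parameters() if id(p) not in encoder_ids]
optimizer = torch.optim.AdamW(
    [{"params": encoder_params, "lr": 1e-5},
     {"params": other_params, "lr": 1e-4}],
    weight_decay=1e-5,
)
loss_fn = DiceCELoss(to_onehot_y=True, softmax=True)
# Standard training loop (forward, loss_fn(logits, labels), backward, step) omitted.
```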

If you want to re-pre-train SuPreM on AbdomenAtlas 1.1 (not recommended), please refer to our instructions.

Estimated cost:

★ Or simply make a request here: https://github.com/MrGiovanni/SuPreM/issues/1

Citation

@article{li2024abdomenatlas,
  title={AbdomenAtlas: A large-scale, detailed-annotated, \& multi-center dataset for efficient transfer learning and open algorithmic benchmarking},
  author={Li, Wenxuan and Qu, Chongyu and Chen, Xiaoxi and Bassi, Pedro RAS and Shi, Yijia and Lai, Yuxiang and Yu, Qian and Xue, Huimin and Chen, Yixiong and Lin, Xiaorui and others},
  journal={Medical Image Analysis},
  pages={103285},
  year={2024},
  publisher={Elsevier},
  url={https://github.com/MrGiovanni/AbdomenAtlas}
}

@inproceedings{li2024well,
  title={How Well Do Supervised Models Transfer to 3D Image Segmentation?},
  author={Li, Wenxuan and Yuille, Alan and Zhou, Zongwei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024}
}

@article{qu2023abdomenatlas,
  title={Abdomenatlas-8k: Annotating 8,000 CT volumes for multi-organ segmentation in three weeks},
  author={Qu, Chongyu and Zhang, Tiezheng and Qiao, Hualin and Tang, Yucheng and Yuille, Alan L and Zhou, Zongwei and others},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  year={2023}
}

Acknowledgement

This work was supported by the Lustgarten Foundation for Pancreatic Cancer Research and the McGovern Foundation. The codebase is modified from NVIDIA MONAI. Paper content is covered by patents pending.