## News
- 🔥 The pseudo-labels with manual refinement can be found in AbdomenAtlas 1.0.
- 🔥 We collect recent medical universal models in AWESOME MEDICAL UNIVERSAL MODEL.
- 😎 We provide documents for common questions for code and common questions for paper.
# CLIP-Driven Universal Model
<img src="teaser_fig.png" width="480" height="345" alt="" align=center />

## Paper
This repository provides the official implementation of the Universal Model.
<b>CLIP-Driven Universal Model for Organ Segmentation and Tumor Detection</b> <br/> ${\color{red} {\textbf{Rank First in Medical Segmentation Decathlon (MSD) Competition}}}$ (see leaderboard) <br/> Jie Liu<sup>1</sup>, Yixiao Zhang<sup>2</sup>, Jie-Neng Chen<sup>2</sup>, Junfei Xiao<sup>2</sup>, Yongyi Lu<sup>2</sup>, <br/> Yixuan Yuan<sup>1</sup>, Alan Yuille<sup>2</sup>, Yucheng Tang<sup>3</sup>, Zongwei Zhou<sup>2</sup> <br/> <sup>1 </sup>City University of Hong Kong, <sup>2 </sup>Johns Hopkins University, <sup>3 </sup>NVIDIA <br/> ICCV, 2023 <br/> paper | code | slides | poster | talk | blog
<b>Large Language-Image Model for Multi-Organ Segmentation and Cancer Detection from Computed Tomography</b> <br/> Jie Liu<sup>1</sup>, Yixiao Zhang<sup>2</sup>, Jie-Neng Chen<sup>2</sup>, Junfei Xiao<sup>2</sup>, Yongyi Lu<sup>2</sup>, <br/> Yixuan Yuan<sup>1</sup>, Alan Yuille<sup>2</sup>, Yucheng Tang<sup>3</sup>, Zongwei Zhou<sup>2</sup> <br/> <sup>1 </sup>City University of Hong Kong, <sup>2 </sup>Johns Hopkins University, <sup>3 </sup>NVIDIA <br/> RSNA, 2023 <br/> abstract | code | slides
## Model
Architecture | Param | Download |
---|---|---|
U-Net | 19.08M | link |
Swin UNETR | 62.19M | link |
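Before plugging a downloaded checkpoint into the scripts below, it can be useful to inspect what it contains with plain PyTorch. A minimal sketch, where the file name is a placeholder for whichever checkpoint you downloaded from the table above:

```python
import torch

# Load on CPU so no GPU is needed just to inspect the file.
ckpt = torch.load("clip_driven_universal_swin_unetr.pth", map_location="cpu")

# Checkpoints are often nested dicts (e.g., {"net": state_dict, ...});
# print the top-level keys to see how this one is organized.
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])
```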
## Dataset
- 01 Multi-Atlas Labeling Beyond the Cranial Vault - Workshop and Challenge (BTCV)
- 02 Pancreas-CT TCIA (the labels we used for Datasets 01 and 02 are available here)
- 03 Combined Healthy Abdominal Organ Segmentation (CHAOS)
- 04 Liver Tumor Segmentation Challenge (LiTS)
- 05 Kidney and Kidney Tumor Segmentation (KiTS)
- 06 Liver segmentation (3D-IRCADb)
- 07 WORD: A large scale dataset, benchmark and clinical applicable study for abdominal organ segmentation from CT image
- 08 AbdomenCT-1K
- 09 Multi-Modality Abdominal Multi-Organ Segmentation Challenge (AMOS)
- 10 Decathlon (Liver, Lung, Pancreas, HepaticVessel, Spleen, Colon)
- 11 CT volumes with multiple organ segmentations (CT-ORG)
- 12 AbdomenCT 12organ
The post-processed labels (post_label) can be downloaded via this link.
## Direct Inference on Your Own CT Scans
- Put all of your CT scans (files ending in `.nii.gz`) in a single directory, e.g. `/home/data/ct/`, as sketched below.
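A possible layout (the case names are placeholders):

```
/home/data/ct/
├── case_001.nii.gz
├── case_002.nii.gz
└── case_003.nii.gz
```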
- Run the following commands.
```bash
conda create -n universalmodel python=3.7
conda activate universalmodel
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
## please modify according to the CUDA version on your server
git clone https://github.com/ljwztc/CLIP-Driven-Universal-Model.git
cd CLIP-Driven-Universal-Model
pip install 'monai[all]'
pip install -r requirements.txt
cd pretrained_weights/
wget -O clip_driven_universal_swin_unetr.pth https://huggingface.co/ljwztc/CLIP-Driven-Universal-Model/resolve/main/clip_driven_universal_swin_unetr.pth?download=true
cd ../
python pred_pseudo.py --data_root_path PATH_TO_IMG_DIR --result_save_path PATH_TO_result_DIR --resume ./pretrained_weights/clip_driven_universal_swin_unetr.pth
## For example: python pred_pseudo.py --data_root_path /home/data/ct/ --result_save_path /home/data/result --resume ./pretrained_weights/clip_driven_universal_swin_unetr.pth
```
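Once inference finishes, you can sanity-check a predicted mask with `nibabel`. This is a minimal sketch: the output file name is a placeholder, and the assumption that `pred_pseudo.py` writes label volumes named after the input scans into `--result_save_path` should be checked against the script itself.

```python
import nibabel as nib
import numpy as np

# Placeholder path: point this at one of the files written to --result_save_path.
pred = nib.load("/home/data/result/case_001.nii.gz").get_fdata().astype(np.int64)

# Report which template indices appear in the prediction
# (see the "Current Template" table below, e.g. 1 = Spleen, 6 = Liver).
labels, counts = np.unique(pred, return_counts=True)
for label, count in zip(labels, counts):
    print(f"label {label}: {count} voxels")
```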
## 0. Preliminary
```bash
python3 -m venv universal
source universal/bin/activate
git clone https://github.com/ljwztc/CLIP-Driven-Universal-Model.git
cd CLIP-Driven-Universal-Model
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
pip install 'monai[all]'
pip install -r requirements.txt
cd pretrained_weights/
wget https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/swin_unetr.base_5000ep_f48_lr2e-4_pretrained.pt
wget https://www.dropbox.com/s/lh5kuyjxwjsxjpl/Genesis_Chest_CT.pt
cd ../
```
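Before moving on, it is worth confirming that the pinned PyTorch build can see your GPU and that MONAI imports cleanly, for example:

```python
import torch
import monai

print("torch:", torch.__version__)   # expect 1.11.0+cu113
print("monai:", monai.__version__)
print("CUDA available:", torch.cuda.is_available())
```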
## Dataset Pre-Process
- Download the datasets via the links above and arrange them according to `dataset/dataset_list/PAOT.txt`.
- Modify `ORGAN_DATASET_DIR` and `NUM_WORKER` in `label_transfer.py`.
- Run:
```bash
python -W ignore label_transfer.py
```
## Current Template
Index | Organ | Index | Organ |
---|---|---|---|
1 | Spleen | 17 | Left Lung |
2 | Right Kidney | 18 | Colon |
3 | Left Kidney | 19 | Intestine |
4 | Gall Bladder | 20 | Rectum |
5 | Esophagus | 21 | Bladder |
6 | Liver | 22 | Prostate |
7 | Stomach | 23 | Left Head of Femur |
8 | Aorta | 24 | Right Head of Femur |
9 | Postcava | 25 | Celiac Trunk |
10 | Portal Vein and Splenic Vein | 26 | Kidney Tumor |
11 | Pancreas | 27 | Liver Tumor |
12 | Right Adrenal Gland | 28 | Pancreas Tumor |
13 | Left Adrenal Gland | 29 | Hepatic Vessel Tumor |
14 | Duodenum | 30 | Lung Tumor |
15 | Hepatic Vessel | 31 | Colon Tumor |
16 | Right Lung | 32 | Kidney Cyst |
## How to Extend to a New Dataset with New Organs
- Assign the next unused index to the new organ (e.g., 33 for vermiform appendix).
- Check whether the dataset contains organs that are not split into left and right (e.g., kidney, lung). The `RL_Splitd` function in `label_transfer.py` handles this case.
- Add a transfer list for the new dataset to `TEMPLATE` (line 58 in `label_transfer.py`). For example, if the new dataset labels Intestine as 1 and vermiform appendix as 2, the transfer list is `[19, 33]` (see the sketch after this list).
- Run `label_transfer.py` to generate the new post-processed labels.
For more details, please take a look at the common questions.
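To make the transfer list concrete, here is a minimal sketch of the remapping idea, not the actual implementation in `label_transfer.py`: position `i` of the transfer list gives the universal-template index assigned to the dataset's local label `i + 1`.

```python
import numpy as np

def remap_to_template(label_volume, transfer_list):
    """Map a dataset's local label indices (1, 2, ...) onto the
    universal template indices listed in transfer_list."""
    out = np.zeros_like(label_volume)
    for local_idx, template_idx in enumerate(transfer_list, start=1):
        out[label_volume == local_idx] = template_idx
    return out

# Example from above: Intestine labeled 1, vermiform appendix labeled 2,
# so the transfer list is [19, 33].
local = np.array([[0, 1], [2, 1]])
print(remap_to_template(local, [19, 33]))  # [[ 0 19] [33 19]]
```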
## 1. Training
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -W ignore -m torch.distributed.launch --nproc_per_node=8 --master_port=1234 train.py --dist True --data_root_path /mnt/zzhou82/PublicAbdominalData/ --num_workers 12 --num_samples 4 --cache_dataset --cache_rate 0.6 --uniform_sample
```
## 2. Validation
```bash
CUDA_VISIBLE_DEVICES=0 python -W ignore validation.py --data_root_path /mnt/zzhou82/PublicAbdominalData/ --start_epoch 10 --end_epoch 40 --epoch_interval 10 --cache_dataset --cache_rate 0.6
```
## 3. Evaluation
```bash
CUDA_VISIBLE_DEVICES=0 python -W ignore test.py --resume ./out/epoch_61.pth --data_root_path /mnt/zzhou82/PublicAbdominalData/ --store_result --cache_dataset --cache_rate 0.6
```
## Todo
- Code release
- Dataset link
- Support different backbones (SwinUNETR, Unet, DiNTS, Unet++)
- Model release
- Pseudo-label release
- Tutorials for Inference
## Acknowledgement
A lot of code is modified from MONAI. This work was supported by the Lustgarten Foundation for Pancreatic Cancer Research and partially by the Patrick J. McGovern Foundation Award. We appreciate the effort of the MONAI Team to provide open-source code for the community.
## Citation
If you find this repository useful, please consider citing this paper:
```bibtex
@article{liu2023clip,
  title={CLIP-Driven Universal Model for Organ Segmentation and Tumor Detection},
  author={Liu, Jie and Zhang, Yixiao and Chen, Jie-Neng and Xiao, Junfei and Lu, Yongyi and Landman, Bennett A and Yuan, Yixuan and Yuille, Alan and Tang, Yucheng and Zhou, Zongwei},
  journal={arXiv preprint arXiv:2301.00785},
  year={2023}
}
```