# UniSeg-code
This is the official PyTorch implementation of our MICCAI 2023 paper "UniSeg: A Prompt-driven Universal Segmentation Model as well as A Strong Representation Learner". In this paper, we propose a Prompt-Driven Universal Segmentation model (UniSeg) to segment multiple organs, tumors, and vertebrae on 3D medical images with diverse modalities and domains.
<div align="center">
  <img width="100%" alt="UniSeg illustration" src="github/Overview.png">
</div>

## News
- 2023.07.17: We have updated the code to better support new multi-task segmentation. You only need to modify `self.task`, `self.task_class`, and `self.total_task_num` in `UniSeg_Trainer` (see the sketch below this list).
- 2023.07.19: We have provided the configuration file for predicting new data. In addition, we have updated the new-data prediction code to restrict the output categories to the specified tasks.
- 2023.10.13: 🎉🎉🎉 Our UniSeg achieved second place on both tasks of MICCAI SegRap 2023 by simply fine-tuning on the dataset.
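For reference, adding a new task mainly means extending those three attributes. Below is a minimal, hypothetical sketch: the attribute names come from `UniSeg_Trainer`, but the container types, task names, and class counts here are illustrative assumptions rather than the repository's actual values.

```python
class UniSeg_Trainer:  # heavily abridged; only the task bookkeeping is sketched
    def __init__(self):
        # Hypothetical values for illustration only -- check the real
        # UniSeg_Trainer for the actual structure before editing.
        self.task = {"liver": 0, "kidney": 1, "my_new_task": 2}        # task name -> task id
        self.task_class = {"liver": 3, "kidney": 3, "my_new_task": 2}  # label count per task
        self.total_task_num = 3                                        # keep consistent with self.task
```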
## Requirements
CUDA 11.5<br />
Python 3.8<br />
PyTorch 1.11.0<br />
cuDNN 8.3.2.44
## Usage
### Installation
- Clone this repo:
  ```bash
  git clone https://github.com/yeerwen/UniSeg.git
  cd UniSeg
  ```
### Data Preparation
- Download the MOTS dataset.
- Download the VerSe20 dataset.
- Download the Prostate dataset.
- Download the BraTS21 dataset.
- Download the AutoPET2022 dataset.
### Pre-processing
- Step 1:
  - Install nnunet by `pip install nnunet`.
  - Set the paths, for example:
    ```bash
    export nnUNet_raw_data_base="/data/userdisk0/ywye/nnUNet_raw"
    export nnUNet_preprocessed="/erwen_SSD/1T/nnUNet_preprocessed"
    export RESULTS_FOLDER="/data/userdisk0/ywye/nnUNet_trained_models"
    ```
- Step 2:
  - `cd Upstream`
  - Note that the output paths of the pre-processed datasets should be under the `$nnUNet_raw_data_base/nnUNet_raw_data/` directory.
  - Run `python prepare_Kidney_Dataset.py` to normalize the volume names of the Kidney dataset.
  - Run `python Convert_MOTS_to_nnUNet_dataset.py` to pre-process the MOTS dataset.
  - Run `python Convert_VerSe20_to_nnUNet_dataset.py` to pre-process the VerSe20 dataset and generate `splits_final.pkl`.
  - Run `python Convert_Prostate_to_nnUNet_dataset.py` to pre-process the Prostate dataset and generate `splits_final.pkl`.
  - Run `python Convert_BraTS21_to_nnUNet_dataset.py` to pre-process the BraTS21 dataset and generate `splits_final.pkl`.
  - Run `python Convert_AutoPET_to_nnUNet_dataset.py` to pre-process the AutoPET2022 dataset and generate `splits_final.pkl`.
- Step 3:
  - Copy `Upstream/nnunet` to replace the `nnunet` package installed by `pip install nnunet` (its location is usually `anaconda3/envs/your_env/lib/python3.8/site-packages/nnunet`).
  - Run `nnUNet_plan_and_preprocess -t 91 --verify_dataset_integrity --planner3d MOTSPlanner3D`.
  - Run `nnUNet_plan_and_preprocess -t 37 --verify_dataset_integrity --planner3d VerSe20Planner3D`.
  - Run `nnUNet_plan_and_preprocess -t 20 --verify_dataset_integrity --planner3d ProstatePlanner3D`.
  - Run `nnUNet_plan_and_preprocess -t 21 --verify_dataset_integrity --planner3d BraTS21Planner3D`.
  - Run `nnUNet_plan_and_preprocess -t 11 --verify_dataset_integrity --planner3d AutoPETPlanner3D`.
  - Move the `splits_final.pkl` of each dataset to the directory of its pre-processed dataset, e.g., `***/nnUNet_preprocessed/Task091_MOTS/splits_final.pkl`. Note that, to follow DoDNet, we provide the `splits_final.pkl` of the MOTS dataset in `Upstream/MOTS_data_split/splits_final.pkl`. A quick way to sanity-check a split file is sketched after this list.
  - Run `python merge_each_sub_dataet.py` to form a new dataset.
  - To make sure that we use the same data split, we provide the final data split in `Upstream/splits_final_11_tasks.pkl`.
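Before moving a `splits_final.pkl`, it can help to inspect what it contains. A minimal sketch, assuming the usual nnU-Net v1 layout of one `{'train': ..., 'val': ...}` entry per fold; the file path is whatever your conversion script produced:

```python
import pickle

# Inspect a generated split file; assumes the common nnU-Net v1 format of a
# list with one dict per fold, each holding 'train' and 'val' case identifiers.
with open("splits_final.pkl", "rb") as f:
    splits = pickle.load(f)

for fold, split in enumerate(splits):
    print(f"fold {fold}: {len(split['train'])} train / {len(split['val'])} val cases")
```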
### Training and Test
- Move `Upstream/run_ssl.sh` and `Upstream/UniSeg_Metrics_test.py` to `***/nnUNet_trained_models/`.
- `cd ***/nnUNet_trained_models/`
- Run `sh run_ssl.sh` for training (GPU memory cost: ~10 GB, time cost: ~210 s per epoch).
### Pretrained weights
- The upstream trained model is available in UniSeg_11_Tasks.
- The `plans.pkl` file.
### Downstream Tasks
- `cd Downstream`
- Download the BTCV dataset.
- Download the VS dataset.
- Run `python Convert_BTCV_to_nnUNet_dataset.py` to pre-process the BTCV dataset and generate `splits_final.pkl`.
- Run `python Convert_VSseg_to_nnUNet_dataset.py` to pre-process the VS dataset and generate `splits_final.pkl`.
- Update the address of the pre-trained model in the `Downstream/nnunet/training/network_training/UniSeg_Trainer_DS.py` file (line 97); a hypothetical sketch of this edit follows this list.
- Copy `Downstream/nnunet` to replace the `nnunet` package installed by `pip install nnunet` (its location is usually `anaconda3/envs/your_env/lib/python3.8/site-packages/nnunet`).
- Run `nnUNet_plan_and_preprocess -t 60 --verify_dataset_integrity`.
- Run `nnUNet_plan_and_preprocess -t 61 --verify_dataset_integrity`.
- Move the `splits_final.pkl` of the two datasets to the directories of their pre-processed datasets.
- To make sure that we use the same data splits for the downstream datasets, we provide the final data splits in `Downstream/splits_final_BTCV.pkl` and `Downstream/splits_final_VS.pkl`.
- Training and Test:
  - For the BTCV dataset: `CUDA_VISIBLE_DEVICES=0 nnUNet_n_proc_DA=32 nnUNet_train 3d_fullres UniSeg_Trainer_DS 60 0`
  - For the VS dataset: `CUDA_VISIBLE_DEVICES=0 nnUNet_n_proc_DA=32 nnUNet_train 3d_fullres UniSeg_Trainer_DS 61 0`
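For reference, the edit at line 97 of `UniSeg_Trainer_DS.py` simply points the downstream trainer at the downloaded upstream checkpoint. A hypothetical illustration; the actual variable name and surrounding code in the file may differ:

```python
# Hypothetical sketch of the edit around line 97 of
# Downstream/nnunet/training/network_training/UniSeg_Trainer_DS.py:
# point the trainer at wherever you saved the UniSeg_11_Tasks checkpoint.
pretrained_model_path = "/path/to/UniSeg_11_Tasks/model_final_checkpoint.model"
```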
### Prediction on New Data
- Download the upstream trained model and configuration file.
- Move them to `./nnUNet_trained_models/UniSeg_Trainer/3d_fullres/Task097_11task/UniSeg_Trainer__DoDNetPlans/fold_0/` and rename them to `model_final_checkpoint.model` and `model_final_checkpoint.model.pkl`, respectively.
- `cd Upstream`
- Copy `Upstream/nnunet` to replace the `nnunet` package installed by `pip install nnunet`.
- Run:
  ```bash
  CUDA_VISIBLE_DEVICES=1 nnUNet_n_proc_DA=32 nnUNet_predict -i /data/userdisk0/ywye/nnUNet_raw/nnUNet_raw_data/Test/Image/ -o /data/userdisk0/ywye/nnUNet_raw/nnUNet_raw_data/Test/Predict/10/ -t 97 -m 3d_fullres -tr UniSeg_Trainer -f 0 -task_id 7 -exp_name UniSeg_Trainer -num_image 1 -modality CT -spacing 3.0,1.5,1.5
  ```
  - `-i`: path of the input image(s); input images are named `name_0000.nii.gz` (`name_0001.nii.gz`, ...).
  - `-o`: path of the output mask(s).
  - `-task_id`: selected segmentation task; `-1` means predicting all segmentation tasks under a specific modality.
    - 0: liver and liver tumor segmentation
    - 1: kidney and kidney tumor segmentation
    - 2: hepatic vessel and hepatic tumor segmentation
    - 3: pancreas and pancreas tumor segmentation
    - 4: colon tumor segmentation
    - 5: lung tumor segmentation
    - 6: spleen segmentation
    - 7: vertebrae segmentation
    - 8: prostate segmentation
    - 9: brain tumor segmentation (edema, non-enhancing, and enhancing)
    - 10: whole-body tumor segmentation
  - `-num_image`: channel number of the input image(s).
  - `-modality`: "CT", "MR" (prostate), "MR,MR,MR,MR" (brain tumors), or "CT,PET" (whole-body tumors).
  - `-spacing`: spacing of the resampled image(s).
## To do
- Dataset Links
- Pre-processing Code
- Upstream Code Release
- Upstream Trained Model
- Downstream Code Release
- Inference of Upstream Trained Model on New Data
## Citation
If this code is helpful for your study, please cite:
```bibtex
@inproceedings{ye2023uniseg,
  title={UniSeg: A Prompt-driven Universal Segmentation Model as well as A Strong Representation Learner},
  author={Ye, Yiwen and Xie, Yutong and Zhang, Jianpeng and Chen, Ziyang and Xia, Yong},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={508--518},
  year={2023},
  organization={Springer}
}
```
## Acknowledgements
The whole framework is based on nnUNet v1.
## Contact
Yiwen Ye (ywye@mail.nwpu.edu.cn)