Source code of VSRC

Introduction

The source code includes training and inference procedures for a semi-supervised medical image segmentation method with Voxel Stability and Reliability Constraints (VSRC).

Semi-supervised learning has become an effective solution for medical image segmentation because annotations are costly and tedious to acquire. Methods based on the teacher-student model use consistency regularization and uncertainty estimation and have shown good potential in dealing with limited annotated data. Nevertheless, existing teacher-student models are seriously limited by the exponential moving average algorithm, which leads to an optimization trap. Moreover, the classic uncertainty estimation method calculates global uncertainty for images but does not consider local region-level uncertainty, making it unsuitable for medical images with blurry regions. To address these issues, the Voxel Stability and Reliability Constraint (VSRC) model is proposed. Specifically, the Voxel Stability Constraint (VSC) strategy is introduced to optimize parameters and exchange effective knowledge between two independently initialized models, which can break through the performance bottleneck and avoid model collapse. Moreover, a new uncertainty estimation strategy, the Voxel Reliability Constraint (VRC), is proposed to account for uncertainty at the local region level. We further extend our model to auxiliary tasks and propose a task-level consistency regularization with uncertainty estimation. Extensive experiments on two 3D medical image datasets demonstrate that our method outperforms other state-of-the-art semi-supervised medical image segmentation methods under limited supervision.
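
The VRC idea of region-level (rather than global) uncertainty can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the entropy measure, block size, and threshold below are assumptions chosen for illustration. Voxel-wise predictive entropy is averaged over non-overlapping local blocks, and every voxel in a low-entropy block is treated as reliable.

```python
import numpy as np

def voxel_entropy(probs, eps=1e-8):
    """Voxel-wise predictive entropy from softmax probabilities of shape (C, D, H, W)."""
    return -np.sum(probs * np.log(probs + eps), axis=0)

def region_reliability(entropy, block=4, threshold=0.5):
    """Mark voxels as reliable when their local block's mean entropy is low.

    Averages entropy over non-overlapping block^3 regions, thresholds the
    region means, then broadcasts the region decision back to voxels.
    """
    d, h, w = entropy.shape
    # Crop to a multiple of the block size so the reshape below is exact.
    e = entropy[: d - d % block, : h - h % block, : w - w % block]
    e = e.reshape(d // block, block, h // block, block, w // block, block)
    region_mean = e.mean(axis=(1, 3, 5))
    mask = region_mean < threshold
    # Expand each region decision back to its block of voxels.
    return np.repeat(np.repeat(np.repeat(mask, block, 0), block, 1), block, 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(2, 8, 8, 8))  # 2-class toy volume
    probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
    mask = region_reliability(voxel_entropy(probs), block=4)
    print(mask.shape, mask.mean())
```

In a semi-supervised loss, such a mask would typically gate the unsupervised consistency term so that only voxels in reliable regions contribute.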

This method has been submitted to the IEEE Journal of Biomedical and Health Informatics under the title "Semi-Supervised Medical Image Segmentation with Voxel Stability and Reliability Constraints" (JBHI-03096-2022).

Requirements

Note: It is recommended to install Python and the necessary environment via Anaconda.

Directory structure

Usage

Step 1. Create a virtual environment and activate it in Anaconda

conda create -n vsrc python=3.7 -y
conda activate vsrc

Step 2. Install PyTorch and torchvision following the official instructions

conda install pytorch=1.7 torchvision cudatoolkit=10.0 -c pytorch

Step 3. Install other dependencies

pip install -r requirements.txt

Step 4. Prepare datasets

    datasets/
        ├── LA2018
            ├── train
            │    ├── IMAGE_ID_1
            │    │      └── la_mri.h5
            │    ├── IMAGE_ID_2
            │    │      └── la_mri.h5
            │    └── ...
            └── val
                 ├── IMAGE_ID_3
                 │      └── la_mri.h5
                 ├── IMAGE_ID_4
                 │      └── la_mri.h5
                 └── ...
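
To set up this layout, one option is a small helper script like the sketch below (not part of the repository). The `IMAGE_ID_*` names are placeholders taken from the tree above; replace them with your actual sample identifiers, and place each sample's la_mri.h5 file inside its directory.

```python
# Hypothetical helper: create the datasets/<dataset>/{train,val}/<IMAGE_ID>/
# folder skeleton shown above. IMAGE_ID values are placeholders.
from pathlib import Path

def make_layout(root, dataset="LA2018",
                train_ids=("IMAGE_ID_1", "IMAGE_ID_2"),
                val_ids=("IMAGE_ID_3", "IMAGE_ID_4")):
    """Create one directory per sample ID under the train/val splits."""
    for split, ids in (("train", train_ids), ("val", val_ids)):
        for image_id in ids:
            (Path(root) / dataset / split / image_id).mkdir(parents=True, exist_ok=True)

if __name__ == "__main__":
    make_layout("datasets")
```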

Step 5. Train and test by running train_vsrc.py

python train_vsrc.py -d <DATASET_NAME> -p <PATCH_SIZE> --train
python train_vsrc.py -d <DATASET_NAME> -p <PATCH_SIZE> --test --save_viz -tp <TEST_CKPT_PATH> 

DATASET_NAME is a string that specifies which dataset to train on, e.g. LA2018. PATCH_SIZE specifies the patch size used during training, e.g. 112 112 80, which means a patch size of $112\times 112\times 80$. TEST_CKPT_PATH specifies the checkpoint path from which the model is loaded during the test phase.

For example, you can run the following commands to train and test VSRC on the LA2018 dataset:

python train_vsrc.py -d LA2018 -p 112 112 80 --train
python train_vsrc.py -d LA2018 -p 112 112 80 --test --save_viz -tp ./works/DualModel/test_checkpoint/sdf_VNet_LA2018_20.pth

Our pre-trained models for the LA2018 dataset under 20% and 10% supervision are provided in the directory works/DualModel/test_checkpoint.

Acknowledgement