Decomposition-Based Variational Network for Multi-Contrast MRI Super-Resolution and Reconstruction (ICCV2023)

Authors: Pengcheng Lei, Faming Fang, Guixu Zhang and Tieyong Zeng

Abstract

Multi-contrast MRI super-resolution (SR) and reconstruction methods aim to exploit complementary information from a reference image to help reconstruct the target image. Existing deep learning-based methods usually design fusion rules manually to aggregate the multi-contrast images, failing to model their correlations accurately and lacking interpretability. To address these issues, we propose a multi-contrast variational network (MC-VarNet) that explicitly models the relationship between multi-contrast images. Our model is built on an intuitive observation: multi-contrast images share consistent information (edges and structures) but differ in contrast. We therefore build a model that reconstructs the target image while decomposing the reference image into a common component and a unique component; in the feature interaction phase, only the common component is transferred to the target image. We solve the variational model and unfold the iterative solution into a deep network. Hence, the proposed method combines the good interpretability of model-based methods with the powerful representation ability of deep learning-based methods. Experimental results on multi-contrast MRI reconstruction and SR demonstrate the effectiveness of the proposed model. In particular, because we explicitly model the multi-contrast images, our model is more robust to reference images with noise and large inconsistent structures.

Environment

PyTorch >= 1.8

1. Preparing the datasets:

The two publicly available multi-contrast MR image datasets, IXI and BraTS2018, can be downloaded at: [IXI dataset] and [BraTS dataset].
(1) The original data are .nii volumes. Split your dataset into training, validation, and test sets;
(2) Read the .nii data and save the slices as .png images into two separate folders:

python data/read_nii_to_img.py

[T1 folder:]
000001.png,  000002.png,  000003.png,  000004.png ...
[T2 folder:]
000001.png,  000002.png,  000003.png,  000004.png ...
# Note that the images in the T1 and T2 folders must correspond one to one.
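The slicing and normalization step in data/read_nii_to_img.py can be sketched as follows. This is a minimal illustration, not the repo's script: loading the .nii file (e.g. with nibabel's load/get_fdata) is assumed and omitted, and the helper name volume_to_uint8_slices is hypothetical.

```python
import numpy as np

def volume_to_uint8_slices(vol):
    """Normalize a 3D MR volume to [0, 255] and return its axial slices.

    `vol` is the float array of a loaded .nii volume; reading the file
    itself (e.g. nibabel.load(path).get_fdata()) is assumed here.
    """
    vol = vol.astype(np.float64)
    # Global min-max normalization so all slices share one intensity scale.
    vol = (vol - vol.min()) / (vol.max() - vol.min() + 1e-8)
    vol = (vol * 255.0).round().astype(np.uint8)
    # One image per axial slice, to be saved as 000001.png, 000002.png, ...
    return [vol[:, :, i] for i in range(vol.shape[2])]
```

As long as the T1 and T2 volumes are sliced with the same indexing, the two folders stay aligned one to one.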

(3) For reconstruction, the random undersampling masks can be generated by [data/generate_mask_random.py]. The undersampled images are generated automatically in the dataloader.
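A random Cartesian undersampling mask of the kind generated here can be sketched as below. This is an illustrative sketch only; the function name and the parameters accel and center_frac are assumptions, not the repo's actual settings.

```python
import numpy as np

def random_cartesian_mask(h, w, accel=4, center_frac=0.08, seed=0):
    """Boolean (h, w) mask that keeps 1/accel of the phase-encoding lines.

    All lines in the k-space centre are kept (low frequencies carry most
    energy); the remaining lines are chosen uniformly at random.
    """
    rng = np.random.default_rng(seed)
    n_center = int(round(center_frac * w))
    n_total = int(round(w / accel))
    mask_1d = np.zeros(w, dtype=bool)
    c = w // 2
    mask_1d[c - n_center // 2 : c + (n_center + 1) // 2] = True
    remaining = np.flatnonzero(~mask_1d)
    extra = max(n_total - n_center, 0)
    mask_1d[rng.choice(remaining, size=extra, replace=False)] = True
    # Replicate the 1-D line pattern along the frequency-encoding axis.
    return np.repeat(mask_1d[None, :], h, axis=0)
```

Applying the mask to the shifted k-space of an image and transforming back yields the undersampled input, which is what the dataloader does on the fly.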

(4) For SR, there are two ways to generate the LR input: 1. use the center cropping mask, whose generation code can be found in [data/generate_mask_random.py]; 2. directly crop the k-space data, for which the code can also be found in [data/generate_mask_random.py].
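The second option, cropping k-space directly, can be sketched as follows. This is a minimal sketch under assumed conventions (orthogonal FFT centring, magnitude output); the function name and the intensity rescaling are illustrative, not taken from the repo.

```python
import numpy as np

def kspace_center_crop_lr(img, scale=4):
    """Downsample an image by keeping only the central k-space block.

    Cropping the centre of k-space to 1/scale of each dimension and
    transforming back gives a truly band-limited LR image.
    """
    k = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    ch, cw = h // scale, w // scale
    top, left = (h - ch) // 2, (w - cw) // 2
    k_crop = k[top:top + ch, left:left + cw]
    lr = np.fft.ifft2(np.fft.ifftshift(k_crop))
    # The smaller inverse FFT grid inflates intensities by scale**2;
    # divide it back out so LR and HR intensities stay comparable.
    return np.abs(lr) / scale**2
```

In contrast, the center cropping mask keeps the image at full size but zeroes out high frequencies, producing a blurred image rather than a smaller one.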

2. Model training:

Modify the dataset path and training parameters in [configs/modelx4.yaml], then run:

sh train.sh

3. Model testing:

Modify the test configurations in the Python file [test_PSNR.py], then run:

CUDA_VISIBLE_DEVICES=0 python test_PSNR.py
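The PSNR metric that the test script reports can be computed as below. A minimal sketch for reference; the function name and data_range default are illustrative assumptions, not the repo's exact evaluation code.

```python
import numpy as np

def psnr(ref, rec, data_range=255.0):
    """Peak signal-to-noise ratio between a ground-truth slice and its
    reconstruction; higher is better, identical images give +inf."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Note that data_range must match how the images are stored (255 for 8-bit PNGs, 1.0 for images normalized to [0, 1]), or reported PSNR values will be shifted by a constant.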

Acknowledgement

Our code is built on top of BasicSR; thanks to its authors for releasing their code!