HGGDP

Paper: HGGDP: Homotopic Gradients of Generative Density Priors for MR Image Reconstruction

Authors: Cong Quan, Jinjie Zhou, Yuanzheng Zhu, Yang Chen, Shanshan Wang, Dong Liang*, Qiegen Liu*

IEEE Transactions on Medical Imaging, https://ieeexplore.ieee.org/abstract/document/9435335

Date: May 22, 2021
Version: 1.0
The code and the algorithm are for non-commercial use only.
Copyright 2020, Department of Electronic Information Engineering, Nanchang University.

Deep learning, particularly generative models, has recently demonstrated tremendous potential to speed up image reconstruction from reduced measurements. In this work, by taking advantage of denoising score matching, homotopic gradients of generative density priors (HGGDP) are proposed for MRI reconstruction. More precisely, to tackle the low-dimensional manifold and low data density region issues in generative density priors, we estimate the target gradients in a higher-dimensional space. We train a more powerful noise conditional score network by forming a higher-dimensional tensor as the network input during training, and additional artificial noise is injected in the embedding space. At the reconstruction stage, a homotopy method is employed to pursue the density prior so as to boost reconstruction performance. Experimental results demonstrate the remarkable performance of HGGDP in terms of reconstruction accuracy; with only 10% of the k-space data, HGGDP can still generate images of a quality comparable to standard MRI reconstruction from fully sampled data.
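As a rough, hedged illustration of the two ingredients described above (not the repository's actual implementation), the sketch below stacks several independently perturbed copies of an image into a higher-dimensional tensor, mimicking the multi-view noise input, and evaluates a standard denoising score matching loss for a noise-conditional score network; `score_net` and the noise schedule `sigmas` are placeholder names.

```python
import torch

def multi_view_noisy_input(x, sigma, n_views=3):
    """Stack independently perturbed copies of x along the channel axis.

    Sketch of the "multi-view noise" idea: x of shape (B, 1, H, W) becomes a
    higher-dimensional tensor of shape (B, n_views, H, W).
    """
    views = [x + sigma * torch.randn_like(x) for _ in range(n_views)]
    return torch.cat(views, dim=1)

def dsm_loss(score_net, x, sigmas):
    """Denoising score matching loss over a set of noise levels (hedged sketch).

    score_net(x_noisy, labels) is assumed to return an estimate of the score of
    the perturbed data distribution at the noise level indexed by labels.
    """
    labels = torch.randint(len(sigmas), (x.shape[0],), device=x.device)
    sigma = sigmas[labels].view(-1, 1, 1, 1)
    noise = torch.randn_like(x)
    x_noisy = x + sigma * noise
    target = -noise / sigma                      # score of the Gaussian perturbation kernel
    score = score_net(x_noisy, labels)
    per_sample = 0.5 * ((score - target) ** 2).sum(dim=(1, 2, 3))
    return (per_sample * sigma.view(-1) ** 2).mean()   # sigma^2 weighting, as in NCSN
```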

Training

python3 separate_siat.py --exe SIAT_TRAIN --config siat_config.yml --checkpoint <your_save_path>

Test

python3 separate_siat.py --exe SIAT_MULTICHANNEL --config siat_config.yml --model hggdp --test

Compare_MoDL

python3 separate_siat.py --exe SIAT_MULTICHANNEL_MODL --config siat_config.yml --model hggdp --test

Compare_DDP

python3 separate_siat.py --exe SIAT_MULTICHANNEL_DDP --config siat_config.yml --model hggdp --test

To ensure a fair comparison in the MoDL experiments, we used the test data, coil sensitivity maps and undersampling mask shared by Aggarwal et al. Original MoDL code: [Code]
In the DDP comparison, we used the test data, coil sensitivity maps, undersampling patterns and undersampling mask shared by Tezcan et al. Original DDP code: [Code]

Graphical representation

<div align="center"><img src="https://github.com/yqx7150/HGGDP/blob/master/hggdp_rec/sample/fig6.png" width = "400" height = "450"> </div>

Performance of the "multi-view noise" strategy. (a) Training sliced score matching (SSM) loss and validation loss at each iteration. (b) Image quality comparison on the brain dataset at 15% radial sampling: reconstructed images, error maps (red) and zoom-in results (green).
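The sliced score matching (SSM) loss shown in panel (a) can be monitored without access to ground-truth scores. Below is a hedged single-projection sketch; `score_net` is a placeholder interface, and averaging over several random projections is equally possible.

```python
import torch

def sliced_score_matching_loss(score_net, x):
    """Single-projection sliced score matching loss (hedged sketch).

    Estimates E_v[ v^T J_s(x) v + 0.5 * (v^T s(x))^2 ], where s is the model
    score and J_s its Jacobian, using one random projection v per sample.
    """
    x = x.detach().requires_grad_(True)
    v = torch.randn_like(x)
    s = score_net(x)                                  # estimated score, same shape as x
    sv = (s * v).flatten(1).sum(dim=1)                # per-sample v^T s(x)
    grad_sv = torch.autograd.grad(sv.sum(), x, create_graph=True)[0]  # J_s(x)^T v
    term1 = (grad_sv * v).flatten(1).sum(dim=1)       # v^T J_s(x) v
    term2 = 0.5 * sv ** 2
    return (term1 + term2).mean()
```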

<div align="center"><img src="https://github.com/yqx7150/HGGDP/blob/master/hggdp_rec/sample/fig7.png"> </div>

Pipeline of sampling from the high-dimensional noisy data distribution with multi-view noise, and intermediate samples. (a) Conceptual diagram of sampling from the high-dimensional noisy data distribution with multi-view noise. (b) Intermediate samples of annealed Langevin dynamics.
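For reference, a minimal sketch of generic annealed Langevin dynamics over a decreasing noise schedule, as used by noise-conditional score networks. In the HGGDP reconstruction, each update would additionally be interleaved with a k-space data-consistency step, which is omitted here; `step_lr` and `n_steps_each` are placeholder settings.

```python
import torch

@torch.no_grad()
def annealed_langevin_sampling(score_net, x_init, sigmas, n_steps_each=100, step_lr=5e-5):
    """Annealed Langevin dynamics sampling (hedged sketch, no data consistency).

    sigmas is a sequence of noise levels ordered from largest to smallest.
    """
    x = x_init.clone()
    for i, sigma in enumerate(sigmas):
        labels = torch.full((x.shape[0],), i, dtype=torch.long, device=x.device)
        alpha = step_lr * (sigma / sigmas[-1]) ** 2    # anneal the step size with the noise level
        for _ in range(n_steps_each):
            noise = torch.randn_like(x)
            grad = score_net(x, labels)                # estimated score at this noise level
            x = x + 0.5 * alpha * grad + (alpha ** 0.5) * noise
    return x
```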

Reconstruction Results by Various Methods at 85% 2D Random Undersampling.

<div align="center"><img src="https://github.com/yqx7150/HGGDP/blob/master/hggdp_rec/sample/fig11.png"> </div>

Reconstruction comparison on pseudo radial sampling at acceleration factor 6.7. Top: reference and reconstructions by DLMRI, PANO and FDLCP; bottom: reconstructions by NLR-CS, DC-CNN, EDAEPRec and HGGDPRec. Green and red boxes show the zoom-in results and error maps, respectively.

Reconstruction Results by Various Methods at various 1D Cartesian undersampling percentages.

<div align="center"><img src="https://github.com/yqx7150/HGGDP/blob/master/hggdp_rec/sample/compare_DDP.PNG"> </div>

Complex-valued reconstruction results on a brain image at various 1D Cartesian undersampling percentages (R = 2, 3). From left to right: ground truth, the 1D Cartesian undersampling masks, and reconstructions by zero-filling, DDP and HGGDPRec.
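As background for the terms used above, here is a hedged single-coil sketch of zero-filled reconstruction and a k-space data-consistency projection; FFT conventions and array names are illustrative, and the repository's multi-channel code additionally uses coil sensitivity maps.

```python
import numpy as np

def zero_filled_recon(kspace_under, mask):
    """Zero-filled reconstruction: inverse FFT of the masked, undersampled k-space."""
    return np.fft.ifft2(kspace_under * mask, norm="ortho")

def data_consistency(x, kspace_under, mask):
    """Replace the k-space samples of the current estimate x with the measured ones."""
    k = np.fft.fft2(x, norm="ortho")
    k = mask * kspace_under + (1 - mask) * k
    return np.fft.ifft2(k, norm="ortho")
```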

Reconstruction Results by Various Methods with a 6-fold 2D Random Undersampling Mask.

<div align="center"><img src="https://github.com/yqx7150/HGGDP/blob/master/hggdp_rec/sample/Compare_MoDL.png"> </div>

Complex-valued reconstruction results on a brain image at 16.7% 2D random sampling. From left to right: ground truth, the 6-fold 2D random undersampling mask, and reconstructions by zero-filling, MoDL and HGGDPRec.

Table

<div align="center"><img src="https://github.com/yqx7150/HGGDP/blob/master/hggdp_rec/sample/table1.png"> </div> RECONSTRUCTION PSNR, SSIM AND HFEN VALUES OF THREE TEST IMAGES AT VARIOUS SAMPLING TRAJECTORIES AND UNDERSAMPLING PER-CENTAGES.

Checkpoints

We provide pretrained checkpoints. You can download the pretrained models from Baidu Drive; the extraction key is "awn0".

Test Data

The folder './test_data_31' contains 31 complex-valued MRI datasets of size 256x256, acquired with a T2-weighted 3D fast-spin-echo (FSE) sequence on a 3.0T whole-body MR system (SIEMENS MAGNETOM TrioTim).

Other Related Projects

<div align="center"><img src="https://github.com/yqx7150/HKGM/blob/main/PPT/All-MRI.png" > </div>