HGGDP
Paper: HGGDP: Homotopic Gradients of Generative Density Priors for MR Image Reconstruction
Authors: Cong Quan, Jinjie Zhou, Yuanzheng Zhu, Yang Chen, Shanshan Wang, Dong Liang*, Qiegen Liu*
IEEE Transactions on Medical Imaging, https://ieeexplore.ieee.org/abstract/document/9435335
Date: May 22, 2021
Version: 1.0
The code and the algorithm are for non-commercial use only.
Copyright 2020, Department of Electronic Information Engineering, Nanchang University.
Deep learning, particularly generative models, has recently demonstrated tremendous potential to speed up image reconstruction from reduced measurements. In this work, by taking advantage of denoising score matching, homotopic gradients of generative density priors (HGGDP) are proposed for MRI reconstruction. More precisely, to tackle the low-dimensional-manifold and low-data-density issues of generative density priors, we estimate the target gradients in a higher-dimensional space: a more powerful noise-conditional score network is trained by forming a higher-dimensional tensor as the network input, and additional artificial noise is injected in the embedding space. At the reconstruction stage, a homotopy method is employed to pursue the density prior and boost reconstruction performance. Experimental results demonstrate the remarkable performance of HGGDP in terms of reconstruction accuracy: with only 10% of the k-space data, it can still generate high-quality images comparable to standard MRI reconstruction from fully sampled data.
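The homotopy over noise levels corresponds to annealed Langevin dynamics, as in noise-conditional score networks. Below is a minimal numpy sketch, assuming a trained score function `score(x, sigma)` and a noise schedule sorted from largest to smallest; the step-size rule and constants are illustrative defaults, not the paper's exact settings.

```python
import numpy as np

def annealed_langevin(score, x0, sigmas, steps_per_level=100, eps=2e-5, rng=None):
    """Annealed Langevin dynamics: sample from a sequence of noise-perturbed
    densities, moving from large to small noise levels (the homotopy idea).

    score : callable (x, sigma) -> gradient of log-density at noise level sigma
    sigmas: noise levels, sorted from largest to smallest
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    for sigma in sigmas:
        # step size scaled to the current noise level (NCSN-style rule)
        alpha = eps * (sigma / sigmas[-1]) ** 2
        for _ in range(steps_per_level):
            z = rng.standard_normal(x.shape)
            x = x + 0.5 * alpha * score(x, sigma) + np.sqrt(alpha) * z
    return x
```

For a Gaussian data distribution the perturbed score is known in closed form, which makes the sampler easy to sanity-check before plugging in a learned network.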
Training
python3 separate_siat.py --exe SIAT_TRAIN --config siat_config.yml --checkpoint <your_save_path>
Test
python3 separate_siat.py --exe SIAT_MULTICHANNEL --config siat_config.yml --model hggdp --test
Compare_MoDL
python3 separate_siat.py --exe SIAT_MULTICHANNEL_MODL --config siat_config.yml --model hggdp --test
Compare_DDP
python3 separate_siat.py --exe SIAT_MULTICHANNEL_DDP --config siat_config.yml --model hggdp --test
To ensure a fair comparison with MoDL, we use the test data, coil sensitivity maps, and undersampling mask shared by Aggarwal et al. Original MoDL code: <font size=5>[Code]</font>
For the comparison with DDP, we use the test data, coil sensitivity maps, undersampling patterns, and undersampling mask shared by Tezcan et al. Original DDP code: <font size=5>[Code]</font>
Graphical representation
<div align="center"><img src="https://github.com/yqx7150/HGGDP/blob/master/hggdp_rec/sample/fig6.png" width = "400" height = "450"> </div>Performance of the "multi-view noise" strategy. (a) Training sliced score matching (SSM) loss and validation loss at each iteration. (b) Image quality comparison on the brain dataset at 15% radial sampling: reconstructed images, error maps (red), and zoom-in results (green).
<div align="center"><img src="https://github.com/yqx7150/HGGDP/blob/master/hggdp_rec/sample/fig7.png"> </div>Pipeline of sampling from the high-dimensional noisy data distribution with multi-view noise, with intermediate samples. (a) Conceptual diagram of sampling on the high-dimensional noisy data distribution with multi-view noise. (b) Intermediate samples of annealed Langevin dynamics.
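The higher-dimensional network input behind "multi-view noise" can be formed by stacking several independently noised copies of one image along a channel axis. This is a minimal numpy sketch; the view count and Gaussian noise model are illustrative assumptions, not the exact training configuration.

```python
import numpy as np

def multi_view_noisy_input(image, sigma, n_views=3, rng=None):
    """Form a higher-dimensional training input by stacking several
    independently noised views of the same image along a channel axis.
    (Illustrative sketch; n_views and the noise model are assumptions.)"""
    rng = np.random.default_rng() if rng is None else rng
    views = [image + sigma * rng.standard_normal(image.shape)
             for _ in range(n_views)]
    return np.stack(views, axis=0)  # shape: (n_views, H, W)
```

Each channel sees the same clean content but different noise realizations, which is what lets the score network exploit the higher-dimensional embedding.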
Reconstruction Results by Various Methods at 85% 2D Random Undersampling.
<div align="center"><img src="https://github.com/yqx7150/HGGDP/blob/master/hggdp_rec/sample/fig11.png"> </div>Reconstruction comparison on pseudo-radial sampling at an acceleration factor of 6.7. Top: reference and reconstructions by DLMRI, PANO, FDLCP. Bottom: reconstructions by NLR-CS, DC-CNN, EDAEPRec, HGGDPRec. Green and red boxes show the zoom-in results and error maps, respectively.
Reconstruction Results by Various Methods at various 1D Cartesian undersampling percentages.
<div align="center"><img src="https://github.com/yqx7150/HGGDP/blob/master/hggdp_rec/sample/compare_DDP.PNG"> </div>Complex-valued reconstruction results on a brain image at various 1D Cartesian undersampling percentages (R = 2, 3). From left to right: ground truth, the 1D Cartesian undersampling masks, and reconstructions by zero-filling, DDP, and HGGDPRec.
Reconstruction Results by Various Methods with a 6-fold 2D Random Undersampling Mask.
<div align="center"><img src="https://github.com/yqx7150/HGGDP/blob/master/hggdp_rec/sample/Compare_MoDL.png"> </div>Complex-valued reconstruction results on a brain image at 16.7% 2D random sampling. From left to right: ground truth, the 6-fold 2D random undersampling mask, and reconstructions by zero-filling, MoDL, and HGGDPRec.
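The comparisons above are quantified with PSNR, SSIM, and HFEN. Here is a sketch of PSNR and HFEN under their commonly used definitions in MRI reconstruction work (the exact normalizations in the paper may differ; the LoG filter width is an assumption).

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def psnr(ref, rec):
    """Peak signal-to-noise ratio in dB, with the peak taken from the reference."""
    mse = np.mean(np.abs(ref - rec) ** 2)
    return 10 * np.log10(np.abs(ref).max() ** 2 / mse)

def hfen(ref, rec, sigma=1.5):
    """High-frequency error norm: relative L2 error after a
    Laplacian-of-Gaussian filter (common definition; sigma is assumed)."""
    log_ref = gaussian_laplace(ref, sigma)
    log_rec = gaussian_laplace(rec, sigma)
    return np.linalg.norm(log_rec - log_ref) / np.linalg.norm(log_ref)
```

HFEN emphasizes edge and fine-detail errors that PSNR can miss, which is why both are usually reported together.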
Table
<div align="center"><img src="https://github.com/yqx7150/HGGDP/blob/master/hggdp_rec/sample/table1.png"> </div> Reconstruction PSNR, SSIM, and HFEN values of three test images at various sampling trajectories and undersampling percentages.
Checkpoints
We provide pretrained checkpoints, which can be downloaded from Baidu Drive (extraction key: "awn0").
Test Data
In the folder './test_data_31', 31 complex-valued MRI datasets of size 256x256 were acquired using a 3D fast-spin-echo (FSE) sequence with T2 weighting on a 3.0T whole-body MR system (SIEMENS MAGNETOM TrioTim).
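Reconstruction experiments on such complex-valued data start from retrospective undersampling of the k-space. A minimal numpy sketch of the zero-filled baseline is shown below; the 10% 2D random mask is an illustrative example, not one of the paper's masks.

```python
import numpy as np

def zero_filled_recon(image, mask):
    """Retrospectively undersample a complex-valued image in k-space and
    return the zero-filled reconstruction (the usual baseline)."""
    kspace = np.fft.fftshift(np.fft.fft2(image))    # centered k-space
    kspace_u = kspace * mask                         # keep only sampled locations
    return np.fft.ifft2(np.fft.ifftshift(kspace_u))

# e.g. an illustrative 10% 2D random sampling mask for a 256x256 image
rng = np.random.default_rng(0)
mask = (rng.random((256, 256)) < 0.10).astype(float)
```

The zero-filled result is what the "Zero-Filled" columns in the comparison figures show; HGGDP then refines it using the learned density prior.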
Other Related Projects
- Multi-Channel and Multi-Model-Based Autoencoding Prior for Grayscale Image Restoration <font size=5>[Paper]</font> <font size=5>[Code]</font> <font size=5>[Slide]</font> <font size=5>[Conference Slides (Chinese)]</font>
- Highly Undersampled Magnetic Resonance Imaging Reconstruction using Autoencoding Priors <font size=5>[Paper]</font> <font size=5>[Code]</font> <font size=5>[Slide]</font> <font size=5>[Conference Slides (Chinese)]</font>
- High-dimensional Embedding Network Derived Prior for Compressive Sensing MRI Reconstruction <font size=5>[Paper]</font> <font size=5>[Code]</font>
- Denoising Auto-encoding Priors in Undecimated Wavelet Domain for MR Image Reconstruction <font size=5>[Paper]</font> <font size=5>[Paper]</font> <font size=5>[Code]</font>
- Complex-valued MRI data from SIAT--test31 <font size=5>[Data]</font>
- Regarding the MoDL test datasets: we use image slices 40, 48, 56, 64, 72, 80, 88, 96, 104, and 112 from the test set in the "dataset.hdf5" file (https://drive.google.com/file/d/1qp-l9kJbRfQU1W5wCjOQZi7I3T6jwA37/view)
- DDP Method Link <font size=5>[DDP Code]</font>
- MoDL Method Link <font size=5>[MoDL code]</font>
- Complex-valued MRI data from SIAT--SIAT_MRIdata200 <font size=5>[Data]</font>
- Complex-valued MRI data from SIAT--SIAT_MRIdata500-singlecoil <font size=5>[Data]</font>
- Complex-valued MRI data from SIAT--SIAT_MRIdata500-12coils <font size=5>[Data]</font>
- Learning Multi-Denoising Autoencoding Priors for Image Super-Resolution <font size=5>[Paper]</font> <font size=5>[Code]</font>
- REDAEP: Robust and Enhanced Denoising Autoencoding Prior for Sparse-View CT Reconstruction <font size=5>[Paper]</font> <font size=5>[Code]</font> <font size=5>[PPT]</font> <font size=5>[Conference Slides (Chinese)]</font>
- Iterative Reconstruction for Low-Dose CT using Deep Gradient Priors of Generative Model <font size=5>[Paper]</font> <font size=5>[Code]</font> <font size=5>[PPT]</font>
- Universal Generative Modeling for Calibration-free Parallel MR Imaging <font size=5>[Paper]</font> <font size=5>[Code]</font> <font size=5>[Poster]</font>
- Progressive Colorization via Iterative Generative Models <font size=5>[Paper]</font> <font size=5>[Code]</font> <font size=5>[PPT]</font> <font size=5>[Conference Slides (Chinese)]</font>
- Joint Intensity-Gradient Guided Generative Modeling for Colorization <font size=5>[Paper]</font> <font size=5>[Code]</font> <font size=5>[PPT]</font> <font size=5>[Conference Slides (Chinese)]</font>
- Diffusion Models for Medical Imaging <font size=5>[Paper]</font> <font size=5>[Code]</font> <font size=5>[PPT]</font>
- One-shot Generative Prior in Hankel-k-space for Parallel Imaging Reconstruction <font size=5>[Paper]</font> <font size=5>[Code]</font> <font size=5>[PPT]</font>
- Lens-less Imaging via Score-based Generative Model <font size=5>[Paper]</font> <font size=5>[Code]</font>