HUMUS-Net

This is the PyTorch implementation of the NeurIPS 2022 paper HUMUS-Net, a Transformer-convolutional Hybrid Unrolled Multi-Scale Network architecture for accelerated MRI reconstruction.

HUMUS-Net: Hybrid unrolled multi-scale network architecture for accelerated MRI reconstruction,
Zalan Fabian, Berk Tınaz, Mahdi Soltanolkotabi
NeurIPS 2022

Reproducible results on the fastMRI multi-coil knee test dataset with x8 acceleration:

| Method | SSIM | NMSE | PSNR |
|---|---|---|---|
| HUMUS-Net (ours) | 0.8945 | 0.0081 | 37.3 |
| E2E-VarNet | 0.8920 | 0.0085 | 37.1 |
| XPDNet | 0.8893 | 0.0083 | 37.2 |
| Sigma-Net | 0.8877 | 0.0091 | 36.7 |
| i-RIM | 0.8875 | 0.0091 | 36.7 |

Pre-trained HUMUS-Net models for the fastMRI Public Leaderboard submissions can be found below.

This repository contains code to train and evaluate the HUMUS-Net model on the fastMRI knee, Stanford 2D FSE, and Stanford Fullysampled 3D FSE Knees datasets.

Requirements

A CUDA-enabled GPU is necessary to run the code. We tested this code using:

Installation

First, install PyTorch 1.10.1 with CUDA support following the instructions here. Then, to install the necessary packages, run

git clone https://github.com/z-fabian/HUMUS-Net
cd HUMUS-Net
pip3 install wheel
pip3 install -r requirements.txt
pip3 install pytorch-lightning==1.3.3
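
As an optional sanity check after installation, you can confirm that the PyTorch build can see your GPU before launching a training run. A minimal sketch:

```python
# Optional sanity check: confirm that the installed PyTorch build can
# actually see the GPU before launching a training run.
import torch

print(torch.__version__)          # expect 1.10.1 with a CUDA suffix
print(torch.cuda.is_available())  # should print True on a CUDA machine
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```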

Datasets

fastMRI

fastMRI is an open dataset; however, you need to apply for access at https://fastmri.med.nyu.edu/. To run the experiments from our paper, you need to download the fastMRI knee dataset with the following files:

After downloading these files, extract them into the same directory. Make sure that the directory contains exactly the following folders:
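
Once extracted, each volume is a single HDF5 file, so a quick way to sanity-check the download is to open one with h5py. A minimal sketch (the file path below is a placeholder; the 'kspace' and 'reconstruction_rss' keys follow the public fastMRI data format):

```python
# Minimal sketch: inspect one downloaded fastMRI multi-coil volume.
# The path is a placeholder; key names follow the fastMRI data format.
import h5py

with h5py.File("multicoil_train/file1000001.h5", "r") as f:
    print(list(f.keys()))     # e.g. ['ismrmrd_header', 'kspace', 'reconstruction_rss']
    print(f["kspace"].shape)  # (slices, coils, height, width), complex k-space
    print(dict(f.attrs))      # acquisition metadata
```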

Stanford datasets

Please follow these instructions to batch-download the Stanford datasets. Alternatively, they can be downloaded from http://mridata.org volume-by-volume at the following links:

After downloading the .h5 files, the dataset has to be converted into a format compatible with the fastMRI modules. To create the datasets used in the paper, please follow the instructions here.
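
The linked instructions are the authoritative reference; as a rough illustration of the idea only, the conversion amounts to repackaging the k-space into fastMRI-style HDF5 files. How you read `kspace` out of the mridata.org download depends on its format, so that step is left abstract here:

```python
# Illustration only: package a k-space array into a fastMRI-style HDF5
# file. Obtaining `kspace` from the mridata.org download depends on its
# format; the repository's conversion instructions are authoritative.
import h5py
import numpy as np

def write_fastmri_style(kspace: np.ndarray, out_path: str) -> None:
    # fastMRI multi-coil volumes store complex k-space with shape
    # (slices, coils, height, width) under the 'kspace' key.
    with h5py.File(out_path, "w") as f:
        f.create_dataset("kspace", data=kspace.astype(np.complex64))
        f.attrs["acquisition"] = "unknown"  # placeholder metadata

# Dummy 2-slice, 4-coil volume just to exercise the function.
write_fastmri_style(np.zeros((2, 4, 64, 64), np.complex64), "converted_volume.h5")
```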

Training

fastMRI knee

To train HUMUS-Net on the fastMRI knee dataset, run the following in the terminal:

python3 humus_examples/train_humus_fastmri.py \
--config_file PATH_TO_CONFIG \
--data_path DATA_ROOT \
--default_root_dir LOG_DIR \
--gpus NUM_GPUS

Stanford datasets

Similarly, to train on either of the Stanford datasets, run

python3 humus_examples/train_humus_stanford.py \
--config_file PATH_TO_CONFIG \
--data_path DATA_ROOT \
--default_root_dir LOG_DIR \
--train_val_seed SEED \
--gpus NUM_GPUS

In this case, DATA_ROOT should point directly to the folder containing the converted .h5 files. SEED is used to generate the training-validation split (0, 1, and 2 in our experiments).
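
To illustrate how such a seed yields a reproducible split (this is a sketch of the idea, not necessarily the exact logic in train_humus_stanford.py), one can permute the volume list with a seeded RNG:

```python
# Sketch: a reproducible volume-level train/validation split driven by a
# seed. Illustration only; the actual split logic lives in the training
# script.
import numpy as np

def split_volumes(volume_files, train_val_seed=0, val_fraction=0.2):
    rng = np.random.RandomState(train_val_seed)
    order = rng.permutation(len(volume_files))
    n_val = max(1, int(len(volume_files) * val_fraction))
    val_idx = set(order[:n_val].tolist())
    train = [f for i, f in enumerate(volume_files) if i not in val_idx]
    val = [f for i, f in enumerate(volume_files) if i in val_idx]
    return train, val

# Same seed -> same split on every run.
print(split_volumes([f"vol_{i}.h5" for i in range(10)], train_val_seed=0))
```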

Note: Each GPU is assigned whole volumes of MRI data for validation. Therefore, the number of GPUs used for training/evaluation cannot be larger than the number of MRI volumes in the validation dataset. We recommend using 4 or fewer GPUs when training on the Stanford 3D FSE dataset.

Pre-trained models

Here you can find checkpoint files for the models submitted to the fastMRI Public Leaderboard. See the next section for how to load and evaluate models from the checkpoint files.

| Dataset | Model | Trained on | Acceleration | Checkpoint size | Link |
|---|---|---|---|---|---|
| fastMRI Knee | default | train | x8 | 1.4 GB | Download |
| fastMRI Knee | default | train+val | x8 | 1.4 GB | Download |
| fastMRI Knee | large | train | x8 | 2.8 GB | Download |
| fastMRI Knee | large | train+val | x8 | 2.8 GB | Download |
| Stanford 2D FSE | default | seed 0 split | x8 | 1.4 GB | Download |
| Stanford 3D FSE Knees | default | seed 0 split | x8 | 1.4 GB | Download |

Evaluating models

fastMRI knee

To evaluate a model trained on fastMRI knee data on the validation dataset, run

python3 humus_examples/eval_humus_fastmri.py \
--checkpoint_file CHECKPOINT \
--data_path DATA_DIR

Note: by default, the model is evaluated at 8x acceleration.
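
If you would rather load a checkpoint in your own script than go through the provided eval scripts, the standard PyTorch Lightning pattern applies. In the sketch below, `HumusNetModule` and its import path are hypothetical placeholders for the LightningModule class actually defined in this repository:

```python
# Sketch: load a trained checkpoint outside the provided eval scripts.
# `HumusNetModule` and its import path are hypothetical placeholders.
import torch
from humus_examples.pl_modules import HumusNetModule  # hypothetical path

model = HumusNetModule.load_from_checkpoint("CHECKPOINT.ckpt")
model.eval()

with torch.no_grad():
    # `masked_kspace` and `mask` stand for an undersampled k-space tensor
    # and its sampling mask, matching the module's forward() signature.
    reconstruction = model(masked_kspace, mask)
```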

Stanford datasets

To evaluate on one of the Stanford datasets, run

python3 humus_examples/eval_humus_stanford.py \
--checkpoint_file CHECKPOINT \
--data_path DATA_DIR \
--gpus NUM_GPUS \
--train_val_split TV_SPLIT \
--train_val_seed TV_SEED

Custom training

To experiment with different network settings, see all available training options by running

python3 humus_examples/train_humus_fastmri.py --help

Alternatively, the .yaml files in humus_examples/experiments can be customized and used as config files as described before.
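
For reference, a config file can also be tweaked programmatically. This is a generic sketch using PyYAML; the file name and key shown are illustrative, not the repo's actual schema, so consult the files in humus_examples/experiments for the real fields:

```python
# Generic sketch using PyYAML: load an experiment config, override one
# field, and save a variant. File name and key are illustrative only.
import yaml

with open("humus_examples/experiments/example.yaml") as f:
    config = yaml.safe_load(f)

config["num_cascades"] = 8  # illustrative override, not a real key name
with open("my_experiment.yaml", "w") as f:
    yaml.safe_dump(config, f)
```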

License

HUMUS-Net is MIT licensed, as seen in the LICENSE file.

Citation

If you find our paper useful, please cite

@article{fabian2022humus,
  title={{HUMUS-Net}: Hybrid Unrolled Multi-scale Network Architecture for Accelerated {MRI} Reconstruction},
  author={Fabian, Zalan and Tinaz, Berk and Soltanolkotabi, Mahdi},
  journal={Advances in Neural Information Processing Systems},
  volume={35},
  pages={25306--25319},
  year={2022}
}

Acknowledgments and references