FreeSeed: Frequency-band-aware and Self-guided Network for Sparse-view CT Reconstruction

This is the official implementation of the paper "FreeSeed: Frequency-band-aware and Self-guided Network for Sparse-view CT Reconstruction" (arXiv, Springer).

Updates

Data Preparation

The AAPM-Myo dataset can be downloaded from the CT Clinical Innovation Center (or the box link). Please walk through ./datasets/process_aapm.ipynb for details on preparing and preprocessing the dataset.
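The notebook covers the full pipeline; as one illustrative piece, here is a minimal sketch of a common CT preprocessing step, converting CT numbers (Hounsfield units) to linear attenuation coefficients. The water attenuation value and the function name are assumptions for illustration, not taken from the notebook.

```python
import numpy as np

# Assumed reference value: water attenuation, approx. 0.0192 mm^-1 near 60 keV.
MU_WATER = 0.0192

def hu_to_attenuation(hu):
    """mu = mu_water * (1 + HU / 1000); air (-1000 HU) maps to zero attenuation."""
    return MU_WATER * (1.0 + np.asarray(hu, dtype=np.float32) / 1000.0)

# Air, water, and a dense bone-like voxel.
slice_hu = np.array([-1000.0, 0.0, 1000.0])
mu = hu_to_attenuation(slice_hu)
```

Networks are often trained on images normalized this way (or min-max scaled), so whichever convention the notebook uses should be applied consistently at train and test time.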

Training & Inference

Please check train.sh for the training script (or test.sh for the inference script) once the data is prepared. Specify the dataset path and other settings in the script, then simply run it in the terminal.
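A typical workflow looks like the sketch below. The variable names are hypothetical placeholders; the actual option names and defaults are defined inside train.sh itself.

```shell
# Sketch only: edit the assumed settings inside train.sh, then launch it.
export DATASET_PATH=/path/to/AAPM-Myo   # hypothetical: processed data root
export NUM_VIEWS=72                     # hypothetical: sparse-view count
bash train.sh                           # launches training with the settings above
```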

Notably, directly training the sinogram-domain and image-domain sub-networks of FreeSeed<sub>DUDO</sub> simultaneously with a combination of loss functions is time-consuming. A more efficient strategy, as implemented in dudo_trainer.py, is to:

Requirements

- Linux Platform
- python==3.7.16
- torch==1.7.1+cu110  # depends on the CUDA version of your machine
- torchaudio==0.7.2
- torchvision==0.8.2+cu110
- torch-radon==1.0.0
- monai==1.0.1
- scipy==1.7.3
- einops==0.6.1
- opencv-python==4.7.0.72
- SimpleITK==2.2.1
- numpy==1.21.6
- pandas==1.3.5  # optional
- tensorboard==2.11.2  # optional
- wandb==0.15.2  # optional
- tqdm==4.65.0  # optional

Other Notes

We build our framework on the torch-radon toolbox because it provides fast GPU-accelerated tomographic projection operators. For those who have problems installing torch-radon:
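If torch-radon is unavailable, the core operation it accelerates, sparse-view sinogram simulation, can be illustrated with a slow NumPy/SciPy stand-in (this is not the toolbox's API, just a parallel-beam sketch for intuition):

```python
import numpy as np
from scipy.ndimage import rotate

def sparse_view_sinogram(image, num_views):
    """Simulate a parallel-beam sinogram with only `num_views` angles over 180 deg.

    Each projection is obtained by rotating the image and summing along rows,
    a simple (and slow) stand-in for a GPU forward projector like torch-radon.
    """
    angles = np.linspace(0.0, 180.0, num_views, endpoint=False)
    sino = np.stack([
        rotate(image, ang, reshape=False, order=1).sum(axis=0)
        for ang in angles
    ])
    return sino, angles

# A 16x16 square phantom inside a 64x64 image, projected from 18 views.
phantom = np.zeros((64, 64), dtype=np.float32)
phantom[24:40, 24:40] = 1.0
sino, angles = sparse_view_sinogram(phantom, num_views=18)  # sino shape: (18, 64)
```

With so few views, filtered backprojection of such a sinogram produces the streak artifacts that FreeSeed is designed to remove.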

Citation

If you find our work and code helpful, please kindly cite the corresponding paper:

@inproceedings{ma2023freeseed,
  title={FreeSeed: Frequency-band-aware and Self-guided Network for Sparse-view CT Reconstruction},  
  author={Ma, Chenglong and Li, Zilong and Zhang, Yi and Zhang, Junping and Shan, Hongming}, 
  booktitle={Medical Image Computing and Computer Assisted Intervention -- MICCAI 2023},
  year={2023}
}