# 🍝 PASTA: Pathology-Aware MRI to PET Cross-modal Translation with Diffusion Models

Official PyTorch implementation of the paper "PASTA: Pathology-Aware MRI to PET Cross-Modal Translation with Diffusion Models".
🎉 PASTA has been early-accepted at MICCAI 2024 (top 11%)!
<p align="center"> <img src="img/pasta.png" /> </p>

## Installation
- Create the environment: `conda env create -n pasta --file requirements.yaml`
- Activate the environment: `conda activate pasta`
## Data
We used data from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Since we are not allowed to share the data, you will need to process it yourself. Data for training, validation, and testing should be stored in separate HDF5 files, using the following hierarchical format:
- First level: a unique identifier, e.g. the image ID.
- The second level always has the following entries:
  - A group named `MRI/T1`, containing the T1-weighted 3D MRI data.
  - A group named `PET/FDG`, containing the 3D FDG PET data.
  - A dataset named `tabular` of size 6, containing the non-image clinical data: age, gender, education, MMSE, ADAS-Cog-13, and ApoE4.
  - A string attribute `DX` containing the diagnosis label (`CN`, `Dementia`, or `MCI`), if available.
  - A scalar attribute `RID` with the patient ID, if available.
  - A string attribute `VISCODE` with ADNI's visit code.

Finally, the HDF5 file should also contain the following meta-information in a separate group named `stats`:

    /stats/tabular          Group
    /stats/tabular/columns  Dataset {6}
    /stats/tabular/mean     Dataset {6}
    /stats/tabular/stddev   Dataset {6}

These are the names of the features in the tabular data, together with their means and standard deviations.
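As a starting point, the layout above can be created with `h5py`. This is a minimal sketch: the array shapes, the example identifier `I123456`, and all attribute values below are hypothetical placeholders, not values prescribed by the repository.

```python
import h5py
import numpy as np

# Hypothetical shapes and values for illustration only; adjust them
# to match your own preprocessing pipeline.
mri = np.zeros((128, 128, 128), dtype=np.float32)  # T1-weighted MRI volume
pet = np.zeros((128, 128, 128), dtype=np.float32)  # FDG PET volume
# Non-image clinical features: age, gender, education, MMSE, ADAS-Cog-13, ApoE4
tabular = np.array([72.0, 1.0, 16.0, 28.0, 10.0, 1.0], dtype=np.float32)

with h5py.File("train.h5", "w") as f:
    # First level: one group per unique identifier (e.g. image ID)
    subj = f.create_group("I123456")
    subj.create_dataset("MRI/T1", data=mri)
    subj.create_dataset("PET/FDG", data=pet)
    subj.create_dataset("tabular", data=tabular)
    subj.attrs["DX"] = "CN"       # diagnosis label, if available
    subj.attrs["RID"] = 4001      # patient ID, if available
    subj.attrs["VISCODE"] = "bl"  # ADNI visit code

    # Meta-information about the tabular features
    stats = f.create_group("stats/tabular")
    stats.create_dataset(
        "columns",
        data=np.array(["age", "gender", "education", "MMSE",
                       "ADAS-Cog-13", "ApoE4"], dtype="S"))
    stats.create_dataset("mean", data=tabular)  # per-feature means
    stats.create_dataset("stddev", data=np.ones(6, dtype=np.float32))
```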
## Usage

The package uses PyTorch. To train and test PASTA, execute the `train_mri2pet.py` script. The configuration file for the command arguments is stored in `src/config/pasta_mri2pet.yaml`.
The essential command line arguments are:

- `--data_dir`: Path prefix to HDF5 files containing either train, validation, or test data.
- `--results_folder`: Path to save all training/testing output.
- `--model_cycling`: True to conduct cycle exchange consistency.
- `--eval_mode`: False for training mode, True for evaluation mode. The model for evaluation is specified in `results_folder/model.pt`.
- `--synthesis`: True to save all generated images during evaluation.
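For orientation, a config file covering these arguments might look like the sketch below. The key names mirror the flags listed above, but the paths are placeholders and the actual `src/config/pasta_mri2pet.yaml` may contain additional options.

```yaml
# Hypothetical sketch only — check src/config/pasta_mri2pet.yaml for the real keys.
data_dir: ./data/adni            # path prefix to the train/val/test HDF5 files
results_folder: ./results/run1   # where checkpoints and outputs are written
model_cycling: True              # enable cycle exchange consistency
eval_mode: False                 # False = training, True = evaluation
synthesis: False                 # save generated images during evaluation
```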
After specifying the config file, simply start training or evaluation with:

    python train_mri2pet.py
## Contacts
For any questions, please contact: Yitong Li (yi_tong.li@tum.de)
## Acknowledgements
The codebase is developed based on lucidrains/denoising-diffusion-pytorch and openai/guided-diffusion.
If you find this repository useful, please consider giving a star 🌟 and citing the paper:
    @InProceedings{Li2024pasta,
        author="Li, Yitong
        and Yakushev, Igor
        and Hedderich, Dennis M.
        and Wachinger, Christian",
        title="PASTA: Pathology-Aware MRI to PET Cross-Modal Translation with Diffusion Models",
        booktitle="Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024",
        year="2024",
        publisher="Springer Nature Switzerland",
        address="Cham",
        pages="529--540",
        isbn="978-3-031-72104-5"
    }