Affine Medical Image Registration with Coarse-to-Fine Vision Transformer (C2FViT)

This is the official PyTorch implementation of "Affine Medical Image Registration with Coarse-to-Fine Vision Transformer" (CVPR 2022), written by Tony C. W. Mok and Albert C. S. Chung.

Prerequisites

This code was tested with PyTorch 1.7.1 and an NVIDIA TITAN RTX GPU.
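
To confirm your environment roughly matches the tested setup, a short check like the following can be run first (a minimal sketch; it only reports the installed PyTorch version and the visible GPU):

# Minimal environment check: report the installed PyTorch version and
# whether a CUDA-capable GPU is visible (the code was tested with
# PyTorch 1.7.1 on a TITAN RTX).
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available :", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))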

Training and testing scripts

Inference

Template-matching (MNI152):

python Test_C2FViT_template_matching.py --modelpath {model_path} --fixed ../Data/MNI152_T1_1mm_brain_pad_RSP.nii.gz --moving {moving_img_path}

Pairwise image registration:

python Test_C2FViT_pairwise.py --modelpath {model_path} --fixed {fixed_img_path} --moving {moving_img_path}
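
Both test scripts register the moving image to the fixed image; one quick sanity check is to load the warped output and compare its grid with the fixed image, as in the sketch below (nibabel is assumed to be installed, and warped_moving.nii.gz is a placeholder for whatever filename the test script actually writes):

# Sketch: compare the warped moving image with the fixed image.
# "warped_moving.nii.gz" is a placeholder output filename; replace it
# with the file written by the test script.
import nibabel as nib

fixed = nib.load("../Data/MNI152_T1_1mm_brain_pad_RSP.nii.gz")
warped = nib.load("warped_moving.nii.gz")

print("Fixed shape :", fixed.shape)
print("Warped shape:", warped.shape)  # the warped image is typically resampled onto the fixed image grid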

Pre-trained model weights

Pre-trained model weights can be downloaded via the links below:

Unsupervised:

Semi-supervised:

Train your own model

Step 0 (optional): Download the preprocessed OASIS dataset from https://github.com/adalca/medical-datasets/blob/master/neurite-oasis.md and place it under the Data folder.
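
A quick listing like the one below can confirm the download landed where the training script expects it (a sketch; the ../Data/OASIS path and one-folder-per-subject layout follow the neurite-oasis release, so adjust them if your copy differs):

# Sketch: list a few entries from the preprocessed OASIS download.
# Assumes the archive was extracted to ../Data/OASIS with one folder
# per subject, as in the neurite-oasis release.
import glob
import os

subjects = sorted(glob.glob("../Data/OASIS/*"))
print("Found", len(subjects), "entries")
for s in subjects[:3]:
    print(os.path.basename(s), os.listdir(s) if os.path.isdir(s) else "")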

Step 1: Replace /PATH/TO/YOUR/DATA with the path to your training data, e.g., ../Data/OASIS, and make sure imgs and labels are properly loaded in the training script.
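
How imgs and labels are collected depends on the training script you use; the sketch below shows one plausible pairing with nibabel (the aligned_norm.nii.gz / aligned_seg35.nii.gz file names are assumptions based on the neurite-oasis release and should be checked against your data and the script's glob patterns):

# Sketch: pair image and label volumes from ../Data/OASIS.
# File names are assumptions taken from the neurite-oasis release;
# match them to the patterns used in the training script.
import glob
import nibabel as nib

img_paths = sorted(glob.glob("../Data/OASIS/*/aligned_norm.nii.gz"))
label_paths = sorted(glob.glob("../Data/OASIS/*/aligned_seg35.nii.gz"))
assert len(img_paths) == len(label_paths), "every image needs a matching label map"

img = nib.load(img_paths[0]).get_fdata()      # 3D intensity volume
label = nib.load(label_paths[0]).get_fdata()  # 3D segmentation map
print(img.shape, label.shape)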

Step 2: Run python {training_script}; see "Training and testing scripts" above for more details.

Publication

If you find this repository useful, please cite:

Acknowledgment

Some of the code in this repository is modified from PVT and ViT. The MNI152 brain template is provided by FLIRT (FMRIB's Linear Image Registration Tool).

Keywords

Affine registration, Coarse-to-Fine Vision Transformer, 3D Vision Transformer