# DeepTag
This is the project page of the CVPR 2021 oral paper *DeepTag: An Unsupervised Deep Learning Method for Motion Tracking on Cardiac Tagging Magnetic Resonance Images*: [CVPR_2021] [arXiv] [Video].
We propose a fully unsupervised deep learning-based method for regional myocardium motion estimation on cardiac tagging magnetic resonance images (t-MRI). We incorporate the concept of motion decomposition and recomposition in our framework and achieve significantly superior performance over traditional methods.
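As a hedged illustration of the recomposition idea, the sketch below composes per-frame interframe (INF) displacement fields into cumulative Lagrangian fields, i.e. u_lag(t) = u_lag(t-1) + u_inf(t) sampled at the tracked positions. The field shapes, the bilinear-warp helper, and all function names are our assumptions for illustration, not the repository's actual API.

```python
# Illustrative sketch (not the repository's implementation): recompose
# interframe (INF) motion fields into Lagrangian motion fields.
import numpy as np
from scipy.ndimage import map_coordinates


def warp_field(field, flow):
    """Sample a (2, H, W) field at positions displaced by `flow` (2, H, W)."""
    H, W = field.shape[1:]
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = np.stack([ys + flow[0], xs + flow[1]])
    return np.stack(
        [map_coordinates(c, coords, order=1, mode="nearest") for c in field]
    )


def compose_inf_to_lagrangian(inf_flows):
    """Recompose a list of INF flows into Lagrangian flows (frame 0 -> t).

    u_lag(t)(x0) = u_lag(t-1)(x0) + u_inf(t)(x0 + u_lag(t-1)(x0)):
    the new displacement of a material point is its previous Lagrangian
    displacement plus the INF displacement sampled where it has moved to.
    """
    lag = np.zeros_like(inf_flows[0])
    lag_flows = []
    for inf in inf_flows:
        lag = lag + warp_field(inf, lag)
        lag_flows.append(lag.copy())
    return lag_flows
```

This forward composition is what lets the method report dense motion of every material point relative to the reference frame (t=0) rather than only frame-to-frame motion.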
<div align=center><img width="650" height="300" src="https://github.com/DeepTag/cardiac_tagging_motion_estimation/blob/main/figures/MT_tmri.png"/></div>

If you find this code useful in your research, please consider citing:
    @InProceedings{Ye_2021_CVPR,
        author    = {Ye, Meng and Kanski, Mikael and Yang, Dong and Chang, Qi and Yan, Zhennan and Huang, Qiaoying and Axel, Leon and Metaxas, Dimitris},
        title     = {DeepTag: An Unsupervised Deep Learning Method for Motion Tracking on Cardiac Tagging Magnetic Resonance Images},
        booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
        month     = {June},
        year      = {2021},
        pages     = {7261-7271}
    }
## Supplementary results
- Tagging image sequence registration results: (upper-left) tagging image sequence; (upper-right) forward registration results; (bottom-left) backward registration results; (bottom-right) Lagrangian registration results. The blue grid lines aid visual inspection.
- Landmark tracking results: red is ground truth, green is prediction. (left) basal slice (on the septal wall between the RV and LV, tags may appear to vanish in some frames due to through-plane motion, as do the ground-truth landmarks; we still show the predicted landmarks at the closest position); (middle) middle slice; (right) apex slice. Note that our method tracks motion accurately even in the last several frames, despite significant image-quality degradation.
- Interframe (INF) motion fields and Lagrangian motion fields, shown in "quiver" form: (left) INF motion; (right) Lagrangian motion. Note that our method accurately captures the back-and-forth motion of the left-ventricle myocardium wall during systole (left), and that it also tracks the right ventricle's motion accurately.
- Lagrangian motion fields: (left) x component; (right) y component.
- Tag grid tracking results on the short axis view: (left) tagging image sequence; (middle) warped virtual tag grid by the Lagrangian motion field; (right) virtual tag grid superimposed on tagging images. Note that the virtual tag grid has been aligned with the tag pattern at time t=0. As time goes on, the virtual tag grid is deformed by the predicted Lagrangian motion field and follows the underlying tag pattern in the images very well.
- Tag grid tracking results on the long axis view: (upper) tagging image sequence; (bottom) virtual tag grid superimposed on tagging images. (left) 2-chamber view; (middle) 3-chamber view; (right) 4-chamber view. Our method tracks local myocardium motion on both short and long axis views, which could be combined to recover the 3D motion field of the heart wall.
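The virtual-tag-grid visualizations above can be sketched as follows: points on the grid lines at t=0 are displaced by the Lagrangian motion field, p_t = p_0 + u_lag(p_0). The grid spacing, field construction, and function name below are illustrative assumptions, not the repository's code.

```python
# Illustrative sketch (not the repository's implementation): deform a
# virtual tag grid by tracking its t=0 points with a Lagrangian flow.
import numpy as np
from scipy.ndimage import map_coordinates


def track_grid_points(points, lag_flow):
    """Displace (N, 2) points (row, col) at t=0 by a (2, H, W) Lagrangian flow."""
    coords = points.T  # (2, N), interpolation coordinates into the flow
    dy = map_coordinates(lag_flow[0], coords, order=1, mode="nearest")
    dx = map_coordinates(lag_flow[1], coords, order=1, mode="nearest")
    return points + np.stack([dy, dx], axis=1)


# Example: under a uniform translation, every grid point moves by the
# same amount, so the deformed grid is a shifted copy of the original.
H = W = 32
ys, xs = np.meshgrid(np.arange(0, H, 8), np.arange(0, W, 8), indexing="ij")
grid_pts = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
flow = np.full((2, H, W), 2.0)  # shift everything by (2, 2) pixels
moved = track_grid_points(grid_pts, flow)
```

Plotting the tracked points (e.g. with matplotlib) over each frame reproduces the "virtual tag grid superimposed on tagging images" view described above.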
## Acknowledgments
Our code implementation borrows heavily from VoxelMorph.