MedCoSS

This is the official PyTorch implementation of our CVPR 2024 (Highlight) paper "Continual Self-supervised Learning: Towards Universal Multi-modal Medical Data Representation Learning".

<div align="center"> <img width="100%" alt="MedCoSS illustration" src="github/Overview.png"> </div>

Requirements

CUDA 11.5<br />
Python 3.8<br />
PyTorch 1.11.0<br />
cuDNN 8.3.2.44
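
A quick way to confirm your environment matches the versions above (a minimal sanity-check sketch using standard PyTorch introspection calls):

```python
import torch

# Report the installed versions; compare against the list above.
print(torch.__version__)               # expected: 1.11.0
print(torch.version.cuda)              # expected: 11.5 (CUDA used to build PyTorch)
print(torch.backends.cudnn.version())  # expected: 8302, i.e. cuDNN 8.3.x
print(torch.cuda.is_available())       # True if a CUDA-capable GPU is visible
```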

Data Preparation

Pre-processing

Pre-training

Pre-trained Model
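
Once the released checkpoint is available, it can typically be loaded as sketched below. This is a minimal sketch, not the repo's exact loading code: the checkpoint filename, the `model` state-dict key, and the placeholder backbone are all assumptions.

```python
import torch
from torch import nn

# Placeholder backbone; in practice, build the ViT backbone defined in this repo.
model = nn.Sequential(nn.Linear(768, 768))

# Hypothetical filename; substitute the checkpoint released with this repo.
ckpt = torch.load("MedCoSS_pretrained.pth", map_location="cpu")

# Pre-training checkpoints are often wrapped in a dict; unwrap if needed.
state_dict = ckpt.get("model", ckpt)

# strict=False skips pre-training-only keys (e.g. decoder weights) when
# initializing a model for fine-tuning.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(f"{len(missing)} missing / {len(unexpected)} unexpected keys")
```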

Fine-tuning

To do

Citation

If this code is helpful for your research, please cite:

@inproceedings{ye2024medcoss,
  title={Continual Self-supervised Learning: Towards Universal Multi-modal Medical Data Representation Learning},
  author={Ye, Yiwen and Xie, Yutong and Zhang, Jianpeng and Chen, Ziyang and Wu, Qi and Xia, Yong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={11114--11124},
  year={2024}
}

Acknowledgements

Our framework builds on MAE, Uni-Perceiver, and MGCA.

Contact

Yiwen Ye (ywye@mail.nwpu.edu.cn)