
<h2 align="center"> <a href="https://arxiv.org/pdf/2404.02845">RecLMIS: Cross-Modal Conditioned Reconstruction for Language-guided Medical Image Segmentation</a></h2> <a href="https://arxiv.org/pdf/2404.02845"> <img src="https://img.shields.io/badge/cs.CV-2404.02845-b31b1b?logo=arxiv&logoColor=red"> </a> <h5 align="center"> 🍒🍒🍒 This paper was accepted by IEEE Transactions on Medical Imaging (TMI). If you like our project, please give us a star ⭐ on GitHub for the latest updates. </h5>

😮 Highlights

<p align="center"> <img src="assets/intro.png" style="margin-bottom: 0.2;"/> </p>

🔥 Updates

Contents

🛠️Installation

  1. Clone this repository and navigate to the RecLMIS folder:

```shell
git clone https://github.com/ShawnHuang497/RecLMIS.git
cd RecLMIS
```

  2. Install packages:

```shell
conda create -n reclmis python=3.9 -y
conda activate reclmis
pip install --upgrade pip
pip install -r requirements.txt
```

  3. Download the pretrained CLIP weights (ViT-B-32.pt) and put them in the folder `nets/`.
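As a quick sanity check before training, you can confirm the weights ended up in the right place. The helper below is a minimal sketch, not part of the repository; RecLMIS itself simply expects `nets/ViT-B-32.pt` to exist:

```python
import os

def check_clip_checkpoint(repo_root="."):
    """Return the expected path of the pretrained CLIP weights, or raise if missing.

    Hypothetical helper for illustration only: the repo just reads nets/ViT-B-32.pt.
    """
    ckpt = os.path.join(repo_root, "nets", "ViT-B-32.pt")
    if not os.path.isfile(ckpt):
        raise FileNotFoundError(
            f"Pretrained CLIP weights not found at {ckpt}; "
            "download ViT-B-32.pt and place it in nets/ first."
        )
    return ckpt
```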

🗃️Dataset

  1. You can refer to MedPLIB to download the dataset.

  2. If your dataset is not under the current path or on the current disk, you can either modify the path in the corresponding `Config_xxx.py` file or use `ln -s {old_path} {./datasets}` to create a soft link to the data under the current path.
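For example, linking an external dataset directory into the repository might look like the following. The `$HOME/medseg_datasets` location is a placeholder for wherever your data actually lives, not a path the project prescribes:

```shell
# Placeholder external location; substitute the directory where your data is stored.
EXTERNAL="$HOME/medseg_datasets"
mkdir -p "$EXTERNAL"

# Link it into the repository so the paths in Config_xxx.py resolve under ./datasets.
# -s: symbolic link, -f: replace an existing link, -n: do not follow an existing link.
ln -sfn "$EXTERNAL" ./datasets
ls -ld ./datasets
```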

📀Train

```shell
sh train.sh 0 Config_xxx
```

🥭 Test

```shell
python test.py --cfg_path Config_xxx --test_session session_09.25_00h27 --gpu {0} --test_vis {True}
```

👍Acknowledgement

This code is based on LVIT, ViT and CLIP.

🔒License

✏️Citation

If you find our paper and code useful in your research, please consider giving us a star ⭐ and citing our work.

```bibtex
@article{huang2024cross,
  title={Cross-Modal Conditioned Reconstruction for Language-guided Medical Image Segmentation},
  author={Huang, Xiaoshuang and Li, Hongxiang and Cao, Meng and Chen, Long and You, Chenyu and An, Dong},
  journal={arXiv preprint arXiv:2404.02845},
  year={2024}
}
```