
CXR-CLIP

This is the official PyTorch implementation of "CXR-CLIP: Toward Large Scale Chest X-ray Language-Image Pre-training" [arxiv]

Environment setup

We have tested the implementation in the following environment.

pip install -r requirements.txt
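After installing, you can sanity-check that the pinned packages resolved. A small stdlib-only sketch (the package names here are illustrative; check requirements.txt for the actual list):

```python
from importlib import metadata

def check_requirements(pkgs):
    # Map each package name to its installed version, or None if missing.
    found = {}
    for p in pkgs:
        try:
            found[p] = metadata.version(p)
        except metadata.PackageNotFoundError:
            found[p] = None
    return found

# Hypothetical subset of the requirements; substitute the real pins.
status = check_requirements(["torch", "hydra-core"])
```

Any `None` entries indicate a package that did not install cleanly.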

Prepare dataset

Datasets we used are as follows:

| Dataset | Download | Comment |
| --- | --- | --- |
| MIMIC-CXR | Link | official split |
| CheXpert | Link | official split for train and val; chexpert_5x200 from GLoRIA for test |
| ChestX-ray14 | Link | not used for test |
| VinDr-CXR | Link | official split for test; random split for train and val |
| RSNA-Pneumonia | Link | same split as GLoRIA |
| SIIM-Pneumothorax | Link | same split as GLoRIA |
| OpenI | Link | all frontal images are used for evaluation |

For more details, please refer to data preparation.
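VinDr-CXR above uses a random split for train and val. The actual split is defined by the repository's data preparation scripts; as a minimal sketch, one reproducible way to make such a split (function name, fraction, and seed are illustrative, not the paper's choices) is:

```python
import random

def train_val_split(ids, val_frac=0.1, seed=0):
    # Deterministic random split: sort for a stable order, then shuffle
    # with a fixed seed so the split is reproducible across runs.
    rng = random.Random(seed)
    ids = sorted(ids)
    rng.shuffle(ids)
    n_val = int(len(ids) * val_frac)
    return ids[n_val:], ids[:n_val]

# Hypothetical image IDs, just to demonstrate the split.
train_ids, val_ids = train_val_split([f"img_{i:03d}" for i in range(100)])
```

Fixing the seed matters: without it, every run would train and validate on different images, making results incomparable.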

Pre-trained model checkpoint

We trained ResNet50 and SwinTiny models with three dataset compositions: MIMIC-CXR (M), MIMIC-CXR + CheXpert (M,C), and MIMIC-CXR + CheXpert + ChestX-ray14 (M,C,C14).

| model / dataset | M | M,C | M,C,C14 |
| --- | --- | --- | --- |
| ResNet50 | Link | Link | Link |
| SwinTiny | Link | Link | Link |

Pre-train model

command line
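The training and evaluation scripts are configured with Hydra, so arguments like `data_train=rsna_pneumonia` in the commands below are dotlist overrides applied onto a nested config. A minimal pure-Python sketch of that override pattern (illustrative only, not the repository's code):

```python
def apply_overrides(cfg, overrides):
    # Apply Hydra-style "dotted.key=value" overrides to a nested dict.
    for item in overrides:
        key, _, value = item.partition("=")
        *parents, leaf = key.split(".")
        node = cfg
        for p in parents:
            node = node.setdefault(p, {})
        node[leaf] = value
    return cfg

# Keys mirror the overrides used in the commands below.
cfg = {"data_train": None, "model": {"load_backbone_weights": None}}
apply_overrides(cfg, [
    "data_train=rsna_pneumonia",
    "model.load_backbone_weights=ckpt/model-best.tar",
])
```

Hydra resolves these overrides against the YAML config selected by `--config-name` before the script runs.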

Evaluation

Zero-shot Evaluation

Fine-tuned Classifier (linear probing)

# train
python finetune.py --config-name finetune_10 hydra.run.dir=${SAVE_DIR} data_train=rsna_pneumonia data_valid=rsna_pneumonia model.load_backbone_weights=${CKPT_PATH}/model-best.tar # 10%
python finetune.py hydra.run.dir=${SAVE_DIR} data_train=rsna_pneumonia data_valid=rsna_pneumonia model.load_backbone_weights=${CKPT_PATH}/model-best.tar # 100%
# evaluate
python evaluate_finetune.py data_test=rsna_pneumonia test.checkpoint=${FINETUNED_CKPT_PATH}/model-best.tar
# train
python finetune.py --config-name finetune_10 hydra.run.dir=${SAVE_DIR} data_train=siim_pneumothorax data_valid=siim_pneumothorax model.load_backbone_weights=${CKPT_PATH}/model-best.tar # 10%
python finetune.py hydra.run.dir=${SAVE_DIR} data_train=siim_pneumothorax data_valid=siim_pneumothorax model.load_backbone_weights=${CKPT_PATH}/model-best.tar # 100%
# evaluate
python evaluate_finetune.py data_test=siim_pneumothorax test.checkpoint=${FINETUNED_CKPT_PATH}/model-best.tar
# train
python finetune.py --config-name finetune_10 hydra.run.dir=${SAVE_DIR} data_train=vindr_cxr data_valid=vindr_cxr model.load_backbone_weights=${CKPT_PATH}/model-best.tar # 10%
python finetune.py hydra.run.dir=${SAVE_DIR} data_train=vindr_cxr data_valid=vindr_cxr model.load_backbone_weights=${CKPT_PATH}/model-best.tar # 100%
# evaluate
python evaluate_finetune.py data_test=vindr_cxr test.checkpoint=${FINETUNED_CKPT_PATH}/model-best.tar

Citation

@incollection{You_2023,
	doi = {10.1007/978-3-031-43895-0_10},
	url = {https://doi.org/10.1007%2F978-3-031-43895-0_10},
	year = 2023,
	publisher = {Springer Nature Switzerland},
	pages = {101--111},
	author = {Kihyun You and Jawook Gu and Jiyeon Ham and Beomhee Park and Jiho Kim and Eun K. Hong and Woonhyuk Baek and Byungseok Roh},
	title = {{CXR-CLIP}: Toward Large Scale Chest X-ray Language-Image Pre-training},
	booktitle = {Medical Image Computing and Computer Assisted Intervention -- {MICCAI} 2023},
}

License

CXR-CLIP: Toward Large Scale Chest X-ray Language-Image Pre-training © 2023 is licensed under CC BY-NC 4.0

Contact for Issues

Kihyun You, kihyun.you@soombit.ai
Jawook Gu, jawook.gu@soombit.ai