Xplainer: From X-Ray Observations to Explainable Zero-Shot Diagnosis

Online Demo | Paper: [arXiv:2303.13391](https://arxiv.org/abs/2303.13391)

This is the official repository for the paper "Xplainer: From X-Ray Observations to Explainable Zero-Shot Diagnosis", which was accepted for publication at MICCAI 2023.

<img src="figures/model_overview.png" alt="pipeline" width="100%"/>

We propose a new approach to explainability for zero-shot diagnosis prediction in the clinical domain. Instead of directly predicting a diagnosis, we prompt the model to classify the existence of descriptive observations that a radiologist would look for on an X-ray scan, and use the descriptor probabilities to estimate the likelihood of a diagnosis, making our model explainable by design. For this we leverage BioViL, a pretrained CLIP-style model for chest X-rays, and apply contrastive observation-based prompting. We evaluate Xplainer on two chest X-ray datasets, CheXpert and ChestX-ray14, and demonstrate its effectiveness in improving the performance and explainability of zero-shot diagnosis.

Authors: Chantal Pellegrini, Matthias Keicher, Ege Özsoy, Petra Jiraskova, Rickmer Braren, Nassir Navab
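To make the pipeline concrete, below is a minimal sketch of contrastive observation-based prompting on top of BioViL, using the `hi-ml-multimodal` package pinned in the setup instructions (API as of version 0.1.2). The descriptor list, the exact prompt wording, and the geometric-mean aggregation are illustrative assumptions for this sketch, not necessarily the configuration used in the paper:

    from math import exp, log
    from pathlib import Path

    from health_multimodal.image import get_biovil_resnet_inference
    from health_multimodal.text import get_cxr_bert_inference
    from health_multimodal.vlp.inference_engine import ImageTextInferenceEngine

    # Joint image-text inference engine built from the pretrained BioViL encoders
    # (module paths as in hi-ml-multimodal 0.1.2).
    engine = ImageTextInferenceEngine(
        image_inference_engine=get_biovil_resnet_inference(),
        text_inference_engine=get_cxr_bert_inference(),
    )

    # Hypothetical descriptors a radiologist might look for, e.g. for pneumonia.
    descriptors = ["consolidation", "air bronchograms", "increased lung opacity"]

    def descriptor_probability(image_path: Path, descriptor: str) -> float:
        """Contrastive prompting: softmax over a positive and a negative prompt."""
        pos = engine.get_similarity_score_from_raw_data(image_path, f"There is {descriptor}.")
        neg = engine.get_similarity_score_from_raw_data(image_path, f"There is no {descriptor}.")
        return exp(pos) / (exp(pos) + exp(neg))

    def diagnosis_probability(image_path: Path) -> float:
        """Aggregate descriptor probabilities; a geometric mean is one simple choice."""
        probs = [descriptor_probability(image_path, d) for d in descriptors]
        return exp(sum(log(p) for p in probs) / len(probs))

Because every prediction decomposes into per-descriptor probabilities, a clinician can inspect which observations drove a high or low diagnosis score, which is what makes the model explainable by design.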

Citation:

@article{pellegrini2023xplainer,
  title={Xplainer: From X-Ray Observations to Explainable Zero-Shot Diagnosis},
  author={Pellegrini, Chantal and Keicher, Matthias and {\"O}zsoy, Ege and Jiraskova, Petra and Braren, Rickmer and Navab, Nassir},
  journal={arXiv preprint arXiv:2303.13391},
  year={2023}
}

Setup:

  1. Clone this repository

    git clone https://github.com/ChantalMP/Xplainer
    
  2. Install requirements (Python 3.7):

    conda create -n xplainer_env python=3.7
    conda activate xplainer_env
    pip install hi-ml-multimodal==0.1.2
    pip install -r requirements.txt
    
  3. Download data

    CheXpert: available from the Stanford ML Group

    ChestX-ray14: available from the NIH Clinical Center

Reproduce our results:

Run

    python -m inference --dataset chexpert

or

    python -m inference --dataset chestxray14

Run demo locally:

Run

    python -m demo

Intended Use

This model is intended to be used solely for (I) future research on visual-language processing and (II) reproducibility of the experimental results reported in the reference paper.