# CAT-XPLAIN
Khanal, Subash, Benjamin Brodie, Xin Xing, Ai-Ling Lin, and Nathan Jacobs. "Causality for Inherently Explainable Transformers: CAT-XPLAIN." arXiv e-prints (2022): arXiv-2206.

```bibtex
@article{khanal2022causality,
  title={Causality for Inherently Explainable Transformers: CAT-XPLAIN},
  author={Khanal, Subash and Brodie, Benjamin and Xing, Xin and Lin, Ai-Ling and Jacobs, Nathan},
  journal={arXiv e-prints},
  pages={arXiv--2206},
  year={2022}
}
```
CAT-XPLAIN arXiv paper.pdf
This paper was accepted for spotlight presentation at the Explainable Artificial Intelligence for Computer Vision Workshop at CVPR 2022.
## Summary
This project adds causal explanation capability to a Vision Transformer (ViT). It uses the ViT's attention mechanism to identify the regions of the input with the highest causal significance for the output. The project is motivated by the paper "Instance-wise Causal Feature Selection for Model Interpretation" (2021) by Pranoy et al.

That paper proposes a model-agnostic, post-hoc explainer model that identifies the most significant causal regions in the input space of each instance. Unlike the post-hoc approach, we propose a small modification to the existing Transformer architecture so that the model inherently identifies the regions with the highest causal strength while performing the task it was designed for. This yields inherently interpretable Transformers with causal explanation capability, eliminating the need for an additional post-hoc explainer.
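To illustrate the selection idea described above, here is a minimal NumPy sketch of picking the top-k patches that receive the most attention. All names here are hypothetical toys for illustration; in the actual model the selection is performed by a learned ViT head, not a plain sort:

```python
import numpy as np

def topk_causal_patches(cls_attn, k):
    """Return indices of the k patches receiving the highest attention
    from the [CLS] token (a toy stand-in for how the model flags its
    most causally significant input regions)."""
    # sort descending by attention weight; 'stable' makes ties deterministic
    return np.argsort(cls_attn, kind="stable")[::-1][:k]

# toy example: 9 patches of a 3x3 grid, with patch 4 most attended
cls_attn = np.array([0.05, 0.05, 0.05, 0.05, 0.50, 0.10, 0.05, 0.05, 0.10])
print(topk_causal_patches(cls_attn, 2))  # -> [4 8]
```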
## Steps
- Clone the repository.
```shell
git clone git@github.com:mvrl/CAT-XPLAIN.git
cd CAT-XPLAIN
```
- Create a virtual environment for the project.
```shell
conda env create -f environment.yml
conda activate CAT-XPLAIN
```
- Run the post-hoc experiments for the MNIST, FMNIST, and CIFAR datasets.
```shell
sh ./MNIST_FMNIST_CIFAR/posthoc_run.sh
```
- Run the interpretable Transformer for the MNIST, FMNIST, and CIFAR datasets.
```shell
sh ./MNIST_FMNIST_CIFAR/expViT_run.sh
```
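The post-hoc experiments above treat the task model as a black box. A common sanity check of a selected region's causal importance is occlusion: zero out the region and measure the drop in the model's output. A minimal self-contained sketch, where the `predict` function and patch layout are toy assumptions rather than the repo's actual code:

```python
import numpy as np

def occlusion_effect(predict, patches, idx):
    """Drop in the model's score when patch `idx` is zeroed out.
    `predict` maps a patch array to a scalar score (toy stand-in
    for the black-box model in the post-hoc setting)."""
    base = predict(patches)
    masked = patches.copy()
    masked[idx] = 0.0
    return base - predict(masked)

# toy model: the score is the mean of patch 2, so only patch 2 matters
predict = lambda p: float(p[2].mean())
patches = np.ones((4, 16))                     # 4 patches of 16 pixels each
print(occlusion_effect(predict, patches, 2))   # -> 1.0 (causally important)
print(occlusion_effect(predict, patches, 0))   # -> 0.0 (irrelevant patch)
```

A patch with a large occlusion effect is one the model's prediction genuinely depends on, which is the intuition behind selecting regions of high causal strength.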
## Acknowledgement
This code is adapted from Pranoy's repository Instance-wise Causal Feature Selection for Model Interpretation.
Google Colab demo
## Contact
Subash Khanal
Washington University in St. Louis
k.subash@wustl.edu