
<p align="center"> <img src="https://raw.githubusercontent.com/SeldonIO/alibi/master/doc/source/_static/Alibi_Explain_Logo_rgb.png" alt="Alibi Logo" width="50%"> </p>


Alibi is a Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.

If you're interested in outlier detection, concept drift or adversarial instance detection, check out our sister project alibi-detect.

<table> <tr valign="top"> <td width="50%" > <a href="https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_image_imagenet.html"> <br> <b>Anchor explanations for images</b> <br> <br> <img src="https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/anchor_image.png"> </a> </td> <td width="50%"> <a href="https://docs.seldon.io/projects/alibi/en/stable/examples/integrated_gradients_imdb.html"> <br> <b>Integrated Gradients for text</b> <br> <br> <img src="https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/ig_text.png"> </a> </td> </tr> <tr valign="top"> <td width="50%"> <a href="https://docs.seldon.io/projects/alibi/en/stable/methods/CFProto.html"> <br> <b>Counterfactual examples</b> <br> <br> <img src="https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/cf.png"> </a> </td> <td width="50%"> <a href="https://docs.seldon.io/projects/alibi/en/stable/methods/ALE.html"> <br> <b>Accumulated Local Effects</b> <br> <br> <img src="https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/ale.png"> </a> </td> </tr> </table>


Installation and Usage

Alibi can be installed from PyPI (with pip) or from the conda-forge channel (with conda/mamba):

With pip
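Alibi can be installed from PyPI; optional functionality is provided via extras (the `ray` extra shown below is one example, used for distributed computation):

```bash
# install the latest release from PyPI
pip install alibi

# optional functionality is available via extras, e.g. distributed computation with ray
pip install "alibi[ray]"
```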

With conda

To install from conda-forge it is recommended to use mamba, which can be installed into the base conda environment with:

```bash
conda install mamba -n base -c conda-forge
```
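Alibi can then be installed from the conda-forge channel, for example:

```bash
# mamba is a faster drop-in replacement for conda; plain conda works too
mamba install -c conda-forge alibi
```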

Usage

The alibi explanation API takes inspiration from scikit-learn, consisting of distinct initialize, fit and explain steps. We will use the AnchorTabular explainer to illustrate the API:

```python
from alibi.explainers import AnchorTabular

# initialize and fit explainer by passing a prediction function and any other required arguments
explainer = AnchorTabular(predict_fn, feature_names=feature_names, category_map=category_map)
explainer.fit(X_train)

# explain an instance
explanation = explainer.explain(x)
```

The explanation returned is an `Explanation` object with attributes `meta` and `data`. `meta` is a dictionary containing the explainer metadata and any hyperparameters, and `data` is a dictionary containing everything related to the computed explanation. For example, for the Anchor algorithm the explanation can be accessed via `explanation.data['anchor']` (or `explanation.anchor`). The exact fields available vary from method to method, so we encourage the reader to become familiar with the types of methods supported.
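As an illustrative sketch, continuing the Anchor example above (`precision` and `coverage` are fields typically present for Anchor explanations; other methods expose different fields):

```python
# explainer metadata and hyperparameters
print(explanation.meta)

# fields of the computed explanation, accessible via the data dict or as attributes
print(explanation.data['anchor'])  # e.g. ['Age > 37', 'Workclass = Private'] (illustrative values)
print(explanation.anchor)          # same field exposed as an attribute
print(explanation.precision)       # fraction of instances matching the anchor that share the prediction
print(explanation.coverage)        # fraction of instances to which the anchor applies
```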

Supported Methods

The following tables summarize the possible use cases for each method.

Model Explanations

| Method | Models | Explanations | Train set required |
| --- | --- | --- | --- |
| ALE | BB | global | Yes |
| Partial Dependence | BB, WB | global | Yes |
| PD Variance | BB, WB | global | Yes |
| Permutation Importance | BB | global | Yes |
| Anchors | BB | local | For Tabular |
| CEM | BB*, TF/Keras | local | Optional |
| Counterfactuals | BB*, TF/Keras | local | No |
| Prototype Counterfactuals | BB*, TF/Keras | local | Optional |
| Counterfactuals with RL | BB | local | Yes |
| Integrated Gradients | TF/Keras | local | Optional |
| Kernel SHAP | BB | local, global | Yes |
| Tree SHAP | WB | local, global | Optional |
| Similarity explanations | WB | local | Yes |
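All of the methods above follow the same initialize / (optionally) fit / explain pattern from the Usage section. As a minimal sketch, reusing `predict_fn`, `feature_names` and `X_train` from that example, the ALE explainer can be applied as follows:

```python
from alibi.explainers import ALE

# ALE only needs a prediction function and the data on which to compute the effects
ale = ALE(predict_fn, feature_names=feature_names)
exp = ale.explain(X_train)

# per-feature accumulated local effects
print(exp.ale_values)
```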

Model Confidence

These algorithms provide instance-specific scores measuring the model confidence for making a particular prediction.

| Method | Models | Train set required |
| --- | --- | --- |
| Trust Scores | BB | Yes |
| Linearity Measure | BB | Optional |
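As a hedged sketch of the Trust Scores API (argument names follow the Trust Scores documentation; `X_train`, `y_train`, `X_test` and `preds` are assumed to be user-supplied arrays, with `preds` the model's predicted class labels):

```python
from alibi.confidence import TrustScore

# fit the trust score estimator on the training data
ts = TrustScore()
ts.fit(X_train, y_train, classes=3)  # classes: number of prediction classes

# higher scores indicate predictions that agree better with the training data geometry
score, closest_class = ts.score(X_test, preds)
```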

Key:

- BB - black-box models (only a prediction function is required)
- BB* - black-box models that are assumed to be differentiable
- WB - white-box models (require access to model internals)
- TF/Keras - TensorFlow models via the Keras API
- local - instance-specific explanations
- global - explanations of overall model behaviour

Prototypes

These algorithms provide a distilled view of the dataset and help construct a 1-KNN interpretable classifier.

| Method | Train set labels |
| --- | --- |
| ProtoSelect | Optional |


Citations

If you use alibi in your research, please consider citing it.

BibTeX entry:

```bibtex
@article{JMLR:v22:21-0017,
  author  = {Janis Klaise and Arnaud Van Looveren and Giovanni Vacanti and Alexandru Coca},
  title   = {Alibi Explain: Algorithms for Explaining Machine Learning Models},
  journal = {Journal of Machine Learning Research},
  year    = {2021},
  volume  = {22},
  number  = {181},
  pages   = {1-7},
  url     = {http://jmlr.org/papers/v22/21-0017.html}
}
```