<div align="center"> <img src="docs/assets/banner.png" width="75%" alt="Xplique" align="center" /> </div> <br> <div align="center"> <a href="#"> <img src="https://img.shields.io/badge/Python-3.7, 3.8, 3.9, 3.10-efefef"> </a> <a href="https://github.com/deel-ai/xplique/actions/workflows/python-lints.yml"> <img alt="PyLint" src="https://github.com/deel-ai/xplique/actions/workflows/python-lints.yml/badge.svg"> </a> <a href="https://github.com/deel-ai/xplique/actions/workflows/python-tests.yml"> <img alt="Tox" src="https://github.com/deel-ai/xplique/actions/workflows/python-tests.yml/badge.svg"> </a> <a href="https://github.com/deel-ai/xplique/actions/workflows/python-publish.yml"> <img alt="Pypi" src="https://github.com/deel-ai/xplique/actions/workflows/python-publish.yml/badge.svg"> </a> <a href="https://pepy.tech/project/xplique"> <img alt="Pepy" src="https://static.pepy.tech/badge/xplique"> </a> <a href="#"> <img src="https://img.shields.io/badge/License-MIT-efefef"> </a> </div> <br> <p align="center"> 🦊 <b>Xplique</b> (pronounced <i>\ɛks.plik\</i>) is a Python toolkit dedicated to explainability. The goal of this library is to gather the state of the art of Explainable AI to help you understand your complex neural network models. Originally built for Tensorflow's model it also works for PyTorch models partially. <br> <a href="https://deel-ai.github.io/xplique/">📘 <strong>Explore Xplique docs</strong></a> | <a href="https://deel-ai.github.io/xplique/latest/tutorials/"><strong>Explore Xplique tutorials</strong> 🔥</a> <br> <br> <a href="https://deel-ai.github.io/xplique/latest/api/attributions/api_attributions/">Attributions</a> · <a href="https://deel-ai.github.io/xplique/latest/api/concepts/cav/">Concept</a> · <a href="https://deel-ai.github.io/xplique/latest/api/feature_viz/feature_viz/">Feature Visualization</a> · <a href="https://deel-ai.github.io/xplique/latest/api/attributions/metrics/api_metrics/">Metrics</a> </p>

The library is composed of several modules. The Attributions Methods module implements various methods (e.g. Saliency, Grad-CAM, Integrated Gradients...), with explanations, examples and links to the official papers. The Feature Visualization module lets you see how neural networks build their understanding of images, by finding inputs that maximize neurons, channels, layers or combinations of these elements. The Concepts module allows you to extract human concepts from a model and to test their usefulness with respect to a class. Finally, the Metrics module covers the current metrics used in explainability; used in conjunction with the Attributions Methods module, it allows you to compare the different methods and evaluate the explanations of a model.
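For orientation, here is a minimal sketch of the four entry points (one example class per module; a trained model and data are assumed to be loaded):

from xplique.attributions import Saliency                        # Attributions Methods
from xplique.metrics import Deletion                             # Metrics
from xplique.concepts import Cav                                 # Concepts
from xplique.features_visualizations import Objective, optimize  # Feature Visualization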

<p align="center" width="100%"> <img width="95%" src="./docs/assets/modules.png"> </p> <br>

🔥 Tutorials

<details> <summary>We propose some hands-on tutorials to get familiar with the library and its API:</summary> <p align="center" width="100%"> <a href="https://colab.research.google.com/drive/1XproaVxXjO9nrBSyyy7BuKJ1vy21iHs2"> <img width="95%" src="./docs/assets/attributions.jpeg"> </a> </p> <p align="center" width="100%"> <a href="https://colab.research.google.com/drive/1WEpVpFSq-oL1Ejugr8Ojb3tcbqXIOPBg"> <img width="95%" src="./docs/assets/metrics.jpeg"> </a> </p> <p align="center" width="100%"> <a href="https://colab.research.google.com/drive/1iuEz46ZjgG97vTBH8p-vod3y14UETvVE"> <img width="95%" src="./docs/assets/concepts.jpeg"> </a> </p> <p align="center" width="100%"> <a href="https://colab.research.google.com/drive/1jmyhb89Bdz7H4G2KfK8uEVbSC-C_aht_"> <img width="95%" src="./docs/assets/craft.jpeg"> </a> </p> <p align="center" width="100%"> <a href="https://colab.research.google.com/drive/1st43K9AH-UL4eZM1S4QdyrOi7Epa5K8v"> <img width="95%" src="./docs/assets/feature_viz.jpeg"> </a> </p>

You can find a number of other practical tutorials just here. This section is actively developed and more content will be added. We will try to cover all possible uses of the library; feel free to contact us if you have any suggestions or recommendations for tutorials you would like to see.

</details>

🚀 Quick Start

Xplique requires Python 3.7 or higher and several libraries, including TensorFlow and NumPy. Installation can be done via PyPI:

pip install xplique

Now that Xplique is installed, here are basic examples of what you can do with the available modules.

<details> <summary><b>Attributions Methods</b></summary> Let's start with a simple example: computing Grad-CAM for several images (or a complete dataset) on a trained model.
from xplique.attributions import GradCAM

# load images, labels and model
# ...

explainer = GradCAM(model)
explanations = explainer.explain(images, labels)
# or just `explainer(images, labels)`

All attribution methods share a common API, described in the attributions API documentation.
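Because the API is shared, explainers are interchangeable; here is a minimal sketch, assuming `images`, `labels` and `model` are already loaded:

from xplique.attributions import Saliency, IntegratedGradients

# thanks to the common API, switching methods is a one-line change
for Explainer in (Saliency, IntegratedGradients):
    explanations = Explainer(model)(images, labels)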

</details> <details> <summary><b>Attributions Metrics</b></summary>

To measure whether the explanations provided by our method are faithful (i.e., whether they reflect how the model actually works), we can use a fidelity metric such as Deletion:

from xplique.attributions import GradCAM
from xplique.metrics import Deletion

# load inputs, labels and model
# ...

explainer = GradCAM(model)
explanations = explainer(inputs, labels)
metric = Deletion(model, inputs, labels)

score_grad_cam = metric(explanations)

All attribution metrics share a common API. You can find out more about it here.
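Since every metric follows the same interface, comparing several attribution methods is a short loop; a sketch, assuming `inputs`, `labels` and `model` are already loaded:

from xplique.attributions import GradCAM, Saliency
from xplique.metrics import Deletion

# one metric instance can score explanations from any attribution method
metric = Deletion(model, inputs, labels)
scores = {
    name: metric(explainer(inputs, labels))
    for name, explainer in [("GradCAM", GradCAM(model)),
                            ("Saliency", Saliency(model))]
}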

</details> <details> <summary><b>Concepts Extraction</b></summary>

CAV

For concept-based methods, we can, for example, extract a concept vector from a layer of a model. To do so, we use two datasets: positive_samples, whose inputs contain the concept, and negative_samples, whose inputs do not.

from xplique.concepts import Cav

# load a model, samples that contain a concept
# (positive) and samples that don't (negative)
# ...

extractor = Cav(model, 'mixed3')
concept_vector = extractor(positive_samples,
                           negative_samples)

More information on CAV here and on TCAV here.
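Once extracted, the concept vector can also be reused outside the library; for instance, this hypothetical NumPy sketch scores how strongly new activation vectors (random placeholders here) align with the concept:

import numpy as np

# hypothetical follow-up: cosine similarity between the CAV and new activations
cav = np.asarray(concept_vector).ravel()
cav = cav / np.linalg.norm(cav)
activations = np.random.rand(8, cav.shape[0])   # placeholder activations
alignment = activations @ cav / np.linalg.norm(activations, axis=1)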

CRAFT

Use Craft to investigate a single class.

from xplique.concepts import CraftTf as Craft

# Cut the model in two parts: g, mapping the inputs to the latent space,
# and h, mapping the latent space to the logits.
# Create a Craft concept extractor from these two models
craft = Craft(input_to_latent_model = g,
              latent_to_logit_model = h)

# Use Craft to compute the concepts for a specific class
craft.fit(images_preprocessed, class_id=rabbit_class_id)

# Compute Sobol indices to understand which concepts matter
importances = craft.estimate_importance()

# Display those concepts by showing the 10 best crops for each concept
craft.plot_concepts_crops(nb_crops=10)

More information in the CRAFT documentation.

</details> <details> <summary><b>Feature Visualization</b></summary>

Finally, to find an image that maximizes both a neuron and a channel, we build two objectives and combine them. We then call the optimizer, which returns our images.

from xplique.features_visualizations import Objective
from xplique.features_visualizations import optimize

# load a model...

neuron_obj = Objective.neuron(model, "logits", 200)
channel_obj = Objective.channel(model, "mixed3", 10)

obj = neuron_obj + 2.0 * channel_obj
images, obj_names = optimize(obj)

Want to know more? Check the Feature Viz documentation.
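The optimizer also exposes tuning arguments; a hedged sketch (the `nb_steps` parameter name comes from the Feature Viz API and is worth checking against the documentation for your version):

# hedged example: increase the optimization budget
images, obj_names = optimize(obj, nb_steps=512)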

</details> <details> <summary><b>PyTorch with Xplique</b></summary>

Even though the library was mainly designed to be a TensorFlow toolbox, we have been working on a very practical wrapper to facilitate the integration of your PyTorch models into Xplique's framework!

import torch

from xplique.wrappers import TorchWrapper
from xplique.attributions import Saliency
from xplique.metrics import Deletion

# load inputs, targets and model
# ...
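# NB: the PyTorch model is assumed to be in eval mode
# (torch_model.eval()) before being wrapped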

device = 'cuda' if torch.cuda.is_available() else 'cpu'
wrapped_model = TorchWrapper(torch_model, device)

explainer = Saliency(wrapped_model)
explanations = explainer(inputs, targets)

metric = Deletion(wrapped_model, inputs, targets)
score_saliency = metric(explanations)

Want to know more? Check the PyTorch documentation.

</details>

📦 What's Included

There are four modules in Xplique: Attribution Methods, Attribution Metrics, Concepts, and Feature Visualization. In particular, the Attribution Methods module supports a wide range of tasks (Classification, Regression, Object Detection, and Semantic Segmentation) and data types (Images, Time Series, and Tabular data). The methods compatible with each task are highlighted in the table below:
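Attribution methods select the task through an operator argument; below is a minimal sketch for a regression model, assuming `model`, `inputs` and `targets` are loaded (see the attributions API documentation for the full Tasks enum):

import xplique
from xplique.attributions import Saliency

# select the task via the `operator` argument
explainer = Saliency(model, operator=xplique.Tasks.REGRESSION)
explanations = explainer(inputs, targets)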

<details> <summary><b>Table of attributions available</b></summary>

| Attribution Method | Type of Model | Source | Images | Time Series and Tabular Data | Tutorial |
|:---|:---|:---|:---|:---|:---|
| Deconvolution | TF | Paper | C✔️ OD❌ SS❌ | C✔️ R✔️ | Open In Colab |
| Grad-CAM | TF | Paper | C✔️ OD❌ SS❌ | | Open In Colab |
| Grad-CAM++ | TF | Paper | C✔️ OD❌ SS❌ | | Open In Colab |
| Gradient Input | TF, PyTorch** | Paper | C✔️ OD✔️ SS✔️ | C✔️ R✔️ | Open In Colab |
| Guided Backprop | TF | Paper | C✔️ OD❌ SS❌ | C✔️ R✔️ | Open In Colab |
| Integrated Gradients | TF, PyTorch** | Paper | C✔️ OD✔️ SS✔️ | C✔️ R✔️ | Open In Colab |
| Kernel SHAP | TF, PyTorch**, Callable* | Paper | C✔️ OD✔️ SS✔️ | C✔️ R✔️ | Open In Colab |
| Lime | TF, PyTorch**, Callable* | Paper | C✔️ OD✔️ SS✔️ | C✔️ R✔️ | Open In Colab |
| Occlusion | TF, PyTorch**, Callable* | Paper | C✔️ OD✔️ SS✔️ | C✔️ R✔️ | Open In Colab |
| Rise | TF, PyTorch**, Callable* | Paper | C✔️ OD✔️ SS✔️ | C✔️ R✔️ | Open In Colab |
| Saliency | TF, PyTorch** | Paper | C✔️ OD✔️ SS✔️ | C✔️ R✔️ | Open In Colab |
| SmoothGrad | TF, PyTorch** | Paper | C✔️ OD✔️ SS✔️ | C✔️ R✔️ | Open In Colab |
| SquareGrad | TF, PyTorch** | Paper | C✔️ OD✔️ SS✔️ | C✔️ R✔️ | Open In Colab |
| VarGrad | TF, PyTorch** | Paper | C✔️ OD✔️ SS✔️ | C✔️ R✔️ | Open In Colab |
| Sobol Attribution | TF, PyTorch** | Paper | C✔️ OD✔️ SS✔️ | 🔵 | Open In Colab |
| Hsic Attribution | TF, PyTorch** | Paper | C✔️ OD✔️ SS✔️ | 🔵 | Open In Colab |
| FORGrad enhancement | TF, PyTorch** | Paper | C✔️ OD✔️ SS✔️ | | Open In Colab |

TF : Tensorflow compatible

C : Classification | R : Regression | OD : Object Detection | SS : Semantic Segmentation

* : See the Callable documentation

** : See the Xplique for PyTorch documentation, and the PyTorch models: Getting started notebook.

✔️ : Supported by Xplique | ❌ : Not applicable | 🔵 : Work in Progress

</details> <details> <summary><b>Table of attribution metrics available</b></summary>

| Attribution Metric | Type of Model | Property | Source |
|:---|:---|:---|:---|
| MuFidelity | TF, PyTorch** | Fidelity | Paper |
| Deletion | TF, PyTorch** | Fidelity | Paper |
| Insertion | TF, PyTorch** | Fidelity | Paper |
| Average Stability | TF, PyTorch** | Stability | Paper |
| MeGe | TF, PyTorch** | Representativity | Paper |
| ReCo | TF, PyTorch** | Consistency | Paper |
| (WIP) e-robustness | | | |

TF : Tensorflow compatible

** : See the Xplique for PyTorch documentation, and the PyTorch models: Getting started notebook.

</details> <details> <summary><b>Table of concept methods available</b></summary>

| Concepts method | Type of Model | Source | Tutorial |
|:---|:---|:---|:---|
| Concept Activation Vector (CAV) | TF | Paper | |
| Testing CAV (TCAV) | TF | Paper | |
| CRAFT Tensorflow | TF | Paper | Open In Colab |
| CRAFT PyTorch | PyTorch** | Paper | Open In Colab |
| (WIP) Robust TCAV | | | |
| (WIP) Automatic Concept Extraction (ACE) | | | |

TF : Tensorflow compatible

** : See the Xplique for PyTorch documentation, and the PyTorch models: Getting started notebook.

</details> <details> <summary><b>Table of Feature Visualization methods available</b></summary>

| Feature Visualization (Paper) | Type of Model | Details |
|:---|:---|:---|
| Neurons | TF | Optimizes for specific neurons |
| Layer | TF | Optimizes for specific layers |
| Channel | TF | Optimizes for specific channels |
| Direction | TF | Optimizes for a specific vector |
| Fourier Preconditioning | TF | Optimizes in the Fourier basis (see preconditioning) |
| Objective combination | TF | Allows combining objectives |
| MaCo | TF | Fixed-magnitude optimisation, see Paper |

TF : Tensorflow compatible

</details>

👍 Contributing

Feel free to propose your ideas or come and contribute to the Xplique toolbox! We have a dedicated document that describes, in a simple way, how to make your first pull request: just here.

👀 See Also

This library is one approach among many to explain your model. We don't expect it to be the perfect solution; we created it to explore one point in the space of possibilities.

<details> <summary> Other interesting tools to explain your model: </summary> </details> <details> <summary>To learn more about Explainable AI in general: </summary> </details> <details> <summary> More from the DEEL project: </summary> </details>

🙏 Acknowledgments

<div align="right"> <picture> <source media="(prefers-color-scheme: dark)" srcset="docs/assets/deel_dark.png" width="25%" align="right"> <source media="(prefers-color-scheme: light)" srcset="docs/assets/deel_light.png" width="25%" align="right"> <img alt="DEEL Logo" src="docs/assets/deel_dark.png" width="25%" align="right"> </picture> </div> This project received funding from the French ”Investing for the Future – PIA3” program within the Artificial and Natural Intelligence Toulouse Institute (ANITI). The authors gratefully acknowledge the support of the <a href="https://www.deel.ai/"> DEEL </a> project.

👨‍🎓 Creators

This library was started as a side-project by Thomas FEL who is currently a graduate student at the Artificial and Natural Intelligence Toulouse Institute under the direction of Thomas SERRE. His thesis work focuses on explainability for deep neural networks.

He then received help from some members of the <a href="https://www.deel.ai/"> DEEL </a> team to enhance the library, namely Lucas Hervier and Antonin Poché.

🗞️ Citation

If you use Xplique as part of your workflow in a scientific publication, please consider citing the 🗞️ Xplique official paper:

@article{fel2022xplique,
  title={Xplique: A Deep Learning Explainability Toolbox},
  author={Fel, Thomas and Hervier, Lucas and Vigouroux, David and Poche, Antonin and Plakoo, Justin and Cadene, Remi and Chalvidal, Mathieu and Colin, Julien and Boissin, Thibaut and Bethune, Louis and Picard, Agustin and Nicodeme, Claire and Gardes, Laurent and Flandin, Gregory and Serre, Thomas},
  journal={Workshop on Explainable Artificial Intelligence for Computer Vision (CVPR)},
  year={2022}
}

📝 License

The package is released under <a href="https://choosealicense.com/licenses/mit"> MIT license</a>.