tf-explain
tf-explain implements interpretability methods as TensorFlow 2.x callbacks to ease neural network understanding. See Introducing tf-explain, Interpretability for TensorFlow 2.0
Documentation: https://tf-explain.readthedocs.io
Installation
tf-explain is available on PyPI as an alpha release. To install it:
virtualenv venv -p python3.6
pip install tf-explain
tf-explain is compatible with TensorFlow 2.x. TensorFlow is not declared as a dependency to let you choose between the full and standalone-CPU versions. In addition to the previous install, run:
# For CPU or GPU
pip install tensorflow==2.2.0
OpenCV is also a dependency. To install it, run:
# For CPU or GPU
pip install opencv-python
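To check that all three packages are importable from your environment, a quick sanity check from Python could be:

# Sanity check: all three dependencies should import without errors
import tensorflow as tf
import cv2
import tf_explain

print(tf.__version__)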
Quickstart
tf-explain offers two ways to apply interpretability methods. The full list of methods is in the Available Methods section.
On trained model
The best option is probably to load a trained model and apply the methods to it.
import tensorflow as tf
from tf_explain.core.grad_cam import GradCAM

# Load pretrained model or your own
model = tf.keras.applications.vgg16.VGG16(weights="imagenet", include_top=True)
# Load a sample image (or multiple ones)
img = tf.keras.preprocessing.image.load_img(IMAGE_PATH, target_size=(224, 224))
img = tf.keras.preprocessing.image.img_to_array(img)
data = ([img], None)
# Start explainer
explainer = GradCAM()
grid = explainer.explain(data, model, class_index=281) # 281 is the tabby cat index in ImageNet
explainer.save(grid, ".", "grad_cam.png")
During training
If you want to follow your model during training, you can also use the methods as Keras callbacks and see the results directly in TensorBoard.
from tf_explain.callbacks.grad_cam import GradCAMCallback
model = [...]
callbacks = [
GradCAMCallback(
validation_data=(x_val, y_val),
class_index=0,
output_dir=output_dir,
)
]
model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
Available Methods
- Activations Visualization
- Vanilla Gradients
- Gradients*Inputs
- Occlusion Sensitivity
- Grad CAM (Class Activation Maps)
- SmoothGrad
- Integrated Gradients
Activations Visualization
Visualize how a given input comes out of a specific activation layer
from tf_explain.callbacks.activations_visualization import ActivationsVisualizationCallback
model = [...]
callbacks = [
ActivationsVisualizationCallback(
validation_data=(x_val, y_val),
layers_name=["activation_1"],
output_dir=output_dir,
),
]
model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
<p align="center">
<img src="./docs/assets/activations_visualisation.png" width="400" />
</p>
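If you prefer the trained-model workflow from the Quickstart, a minimal sketch using the core explainer could look like the following (assuming tf_explain.core.activations.ExtractActivations mirrors the callback's layers_name parameter):

from tf_explain.core.activations import ExtractActivations

# data is a (images, labels) tuple, as in the Quickstart
explainer = ExtractActivations()
grid = explainer.explain(data, model, layers_name=["activation_1"])
explainer.save(grid, ".", "activations.png")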
Vanilla Gradients
Visualize the importance of gradients on the input image
from tf_explain.callbacks.vanilla_gradients import VanillaGradientsCallback
model = [...]
callbacks = [
VanillaGradientsCallback(
validation_data=(x_val, y_val),
class_index=0,
output_dir=output_dir,
),
]
model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
<p align="center">
<img src="./docs/assets/vanilla_gradients.png" width="200" />
</p>
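As with Grad CAM in the Quickstart, the method can presumably also be applied to an already trained model through its core explainer; the module path and parameters below are assumed to mirror the callback:

from tf_explain.core.vanilla_gradients import VanillaGradients

explainer = VanillaGradients()
grid = explainer.explain(data, model, class_index=281)  # data as in the Quickstart
explainer.save(grid, ".", "vanilla_gradients.png")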
Gradients*Inputs
Variant of Vanilla Gradients that weights the gradients by the input values
from tf_explain.callbacks.gradients_inputs import GradientsInputsCallback
model = [...]
callbacks = [
GradientsInputsCallback(
validation_data=(x_val, y_val),
class_index=0,
output_dir=output_dir,
),
]
model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
<p align="center">
<img src="./docs/assets/gradients_inputs.png" width="200" />
</p>
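To make the weighting explicit, here is an illustrative (non-library) sketch of the idea: the attribution is simply the vanilla gradient multiplied element-wise by the input values.

import tensorflow as tf

def gradients_times_inputs(model, images, class_index):
    # Illustration of the Gradients*Inputs idea, not tf-explain's implementation
    images = tf.convert_to_tensor(images, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(images)
        scores = model(images)[:, class_index]
    grads = tape.gradient(scores, images)
    return grads * images  # element-wise weighting of gradients by the inputs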
Occlusion Sensitivity
Visualize how parts of the image affect the neural network's confidence by occluding them iteratively
from tf_explain.callbacks.occlusion_sensitivity import OcclusionSensitivityCallback
model = [...]
callbacks = [
OcclusionSensitivityCallback(
validation_data=(x_val, y_val),
class_index=0,
patch_size=4,
output_dir=output_dir,
),
]
model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
<div align="center">
<img src="./docs/assets/occlusion_sensitivity.png" width="200" />
<p style="color: grey; font-size:small; width:350px;">Occlusion Sensitivity for Tabby class (stripes differentiate tabby cat from other ImageNet cat classes)</p>
</div>
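For the trained-model workflow, a minimal sketch with the core explainer, assuming it exposes the same patch_size parameter as the callback, could be:

from tf_explain.core.occlusion_sensitivity import OcclusionSensitivity

explainer = OcclusionSensitivity()
grid = explainer.explain(data, model, class_index=281, patch_size=4)  # data as in the Quickstart
explainer.save(grid, ".", "occlusion_sensitivity.png")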
Grad CAM
Visualize how parts of the image affect the neural network's output by looking into the activation maps
From Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
from tf_explain.callbacks.grad_cam import GradCAMCallback
model = [...]
callbacks = [
GradCAMCallback(
validation_data=(x_val, y_val),
class_index=0,
output_dir=output_dir,
)
]
model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
<p align="center">
<img src="./docs/assets/grad_cam.png" width="200" />
</p>
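The Quickstart already shows the core GradCAM explainer; if you want the class activation map for a specific convolutional layer, the explainer can presumably be pointed at it by name (the layer_name argument below is an assumption, mirroring the callback's options):

from tf_explain.core.grad_cam import GradCAM

explainer = GradCAM()
# "block5_conv3" is the last convolutional layer of tf.keras VGG16
grid = explainer.explain(data, model, class_index=281, layer_name="block5_conv3")
explainer.save(grid, ".", "grad_cam_block5_conv3.png")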
SmoothGrad
Visualize stabilized gradients on the inputs towards the decision
From SmoothGrad: removing noise by adding noise
from tf_explain.callbacks.smoothgrad import SmoothGradCallback
model = [...]
callbacks = [
SmoothGradCallback(
validation_data=(x_val, y_val),
class_index=0,
num_samples=20,
noise=1.,
output_dir=output_dir,
)
]
model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
<p align="center">
<img src="./docs/assets/smoothgrad.png" width="200" />
</p>
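For an already trained model, a minimal sketch with the core explainer, assuming it accepts the same num_samples and noise parameters as the callback, could be:

from tf_explain.core.smoothgrad import SmoothGrad

explainer = SmoothGrad()
grid = explainer.explain(data, model, class_index=281, num_samples=20, noise=1.0)
explainer.save(grid, ".", "smoothgrad.png")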
Integrated Gradients
Visualize an average of the gradients taken along the construction of the input (from a baseline to the actual image) towards the decision
From Axiomatic Attribution for Deep Networks
from tf_explain.callbacks.integrated_gradients import IntegratedGradientsCallback
model = [...]
callbacks = [
IntegratedGradientsCallback(
validation_data=(x_val, y_val),
class_index=0,
n_steps=20,
output_dir=output_dir,
)
]
model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
<p align="center">
<img src="./docs/assets/integrated_gradients.png" width="200" />
</p>
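To make the role of n_steps concrete, here is an illustrative (non-library) sketch of the idea: gradients are averaged over n_steps interpolation points between a baseline (here a black image) and the input, then weighted by the input-minus-baseline difference.

import tensorflow as tf

def integrated_gradients(model, image, class_index, n_steps=20):
    # Illustration of the Integrated Gradients idea, not tf-explain's implementation
    image = tf.convert_to_tensor(image, dtype=tf.float32)   # shape (H, W, C)
    baseline = tf.zeros_like(image)
    alphas = tf.linspace(0.0, 1.0, n_steps)
    # Interpolate along the straight-line path from baseline to input
    interpolated = baseline[None] + alphas[:, None, None, None] * (image - baseline)[None]
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        scores = model(interpolated)[:, class_index]
    grads = tape.gradient(scores, interpolated)   # (n_steps, H, W, C)
    avg_grads = tf.reduce_mean(grads, axis=0)     # average gradient along the path
    return (image - baseline) * avg_grads         # attribution map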
Roadmap
- Subclassing API Support
- Additional Methods
- Auto-generated API Documentation & Documentation Testing
Contributing
To contribute to the project, please read the dedicated section.