iNNvestigate neural networks!
In recent years neural networks have furthered the state of the art in many domains, e.g., object detection and speech recognition. Despite this success, neural networks are typically still treated as black boxes: their internal workings are not fully understood and the basis for their predictions is unclear. In an attempt to understand neural networks better, several analysis methods were proposed, e.g., Saliency, Deconvnet, GuidedBackprop, SmoothGrad, IntegratedGradients, LRP, PatternNet and PatternAttribution. Due to the lack of reference implementations, comparing them is a major effort. This library addresses this by providing a common interface and out-of-the-box implementations for many analysis methods. Our goal is to make analyzing neural networks' predictions easy!
If you use this code please star the repository and cite the following paper:
"iNNvestigate neural networks!"(http://arxiv.org/abs/1808.04260) by Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans
iNNvestigate can be installed with the following commands. The library is based on Keras and therefore requires a supported Keras backend (currently only Python 3.5, TensorFlow 1.8 and CUDA 9.x are supported):
```bash
pip install git+https://github.com/albermax/innvestigate
# Install a Keras backend:
pip install [tensorflow | theano | cntk]
```
To use the example scripts and notebooks one additionally needs to install the matplotlib package:
```bash
pip install matplotlib
```
The library's tests can be executed via:
```bash
git clone https://github.com/albermax/innvestigate.git
cd innvestigate
python setup.py test
```
The library was developed and tested on a Linux platform with Python 3.5, TensorFlow 1.8 and CUDA 9.x.
Usage and Examples
The iNNvestigate library contains implementations for the following methods:
- input: Returns the input.
- random: Returns random Gaussian noise.
The intention behind iNNvestigate is to make it easy to use analysis methods, but it is not to explain the underlying concepts and assumptions. Please, read the according publication(s) when using a certain method and when publishing please cite the according paper(s) (as well as the iNNvestigate paper). Thank you!
All the available methods have in common that they try to analyze the output of a specific neuron with respect to the input of the neural network. Typically one analyzes the neuron with the largest activation in the output layer. For example, given a Keras model, one can create a 'gradient' analyzer:
```python
import innvestigate

model = create_keras_model()
analyzer = innvestigate.create_analyzer("gradient", model)
```
and analyze the influence of the neural network's input on the output neuron by:
```python
analysis = analyzer.analyze(inputs)
```
To analyze a neuron with the index i, one can use the following scheme:
analyzer = innvestigate.create_analyzer("gradient", model, neuron_selection_mode="index") analysis = analyzer.analyze(inputs, i)
Some methods like PatternNet and PatternAttribution are data-specific and need to be trained. Given a data set with train and test data, this can be done in the following way:
```python
import innvestigate

analyzer = innvestigate.create_analyzer("pattern.net", model)
analyzer.fit(X_train)
analysis = analyzer.analyze(X_test)
```
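As a rough intuition for why such methods need a fitting step, consider the linear case discussed in the PatternNet publication: for a single linear neuron y = wᵀx, the informative direction ("pattern") can be estimated from data as a = cov(x, y) / var(y). The sketch below is a plain-Python illustration of that linear case under these assumptions, not iNNvestigate's implementation:

```python
# Toy sketch (not iNNvestigate): estimating a PatternNet-style pattern for a
# single linear neuron y = w^T x.  In the linear case the pattern is
# a = cov(x, y) / var(y); it depends on the data distribution, which is why
# a fitting step on training data is required.

def mean(values):
    return sum(values) / len(values)

def fit_pattern(X, w):
    """Estimate the pattern a = cov(x, y) / var(y) from a data set X."""
    ys = [sum(w_i * x_i for w_i, x_i in zip(w, x)) for x in X]
    y_mean = mean(ys)
    var_y = mean([(y - y_mean) ** 2 for y in ys])
    pattern = []
    for d in range(len(w)):
        xs = [x[d] for x in X]
        x_mean = mean(xs)
        cov = mean([(x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)])
        pattern.append(cov / var_y)
    return pattern

# Data where only the first dimension carries signal; the second is noise.
X_train = [[1.0, 0.3], [2.0, -0.1], [3.0, 0.2], [4.0, -0.4]]
w = [1.0, 0.0]  # the neuron reads only the signal dimension
print(fit_pattern(X_train, w))
```

Note how the estimated pattern is close to zero in the noise dimension even though changing the data would change it, while the weight vector w alone says nothing about the data distribution.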
In the directory examples one can find different examples as Python scripts and as Jupyter notebooks:
- Introduction to iNNvestigate: shows how to use iNNvestigate.
- Comparing methods on MNIST: shows how to train and compare analyzers on MNIST.
- Comparing output neurons on MNIST: shows how to analyze the prediction of different classes on MNIST.
- Comparing methods on ImageNet: shows how to compare analyzers on ImageNet.
- Comparing networks on ImageNet: shows how to compare analyses for different networks on ImageNet.
- Sentiment Analysis.
- Development with iNNvestigate: shows how to develop with iNNvestigate.
To use the ImageNet examples one must download the example pictures first (script).
More documentation can be found here: https://innvestigate.readthedocs.io/en/latest/
If you would like to add your analysis method please get in touch with us!