Alibi

Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The initial focus of the library is on black-box, instance-based model explanations.

Goals

  • Provide high quality reference implementations of black-box ML model explanation algorithms
  • Define a consistent API for interpretable ML methods
  • Support multiple use cases (e.g. tabular, text and image data classification, regression)
  • Implement the latest model explanation, concept drift, algorithmic bias detection and other ML model monitoring and interpretation methods

Installation

Alibi can be installed from PyPI:

pip install alibi

Examples

Anchor method applied to the InceptionV3 model trained on ImageNet:

[Images: original instance (prediction: Persian cat) and the anchor explanation highlighting the superpixels responsible for the prediction]
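
A minimal sketch of how this example can be set up with the AnchorImage explainer. The segmentation settings and the placeholder input are illustrative assumptions, and argument names may differ slightly between alibi versions:

import numpy as np
from keras.applications.inception_v3 import InceptionV3, preprocess_input
from alibi.explainers import AnchorImage

model = InceptionV3(weights='imagenet')
predict_fn = lambda x: model.predict(x)

# Superpixel segmentation ('slic') groups pixels into candidate anchor regions
explainer = AnchorImage(predict_fn, image_shape=(299, 299, 3),
                        segmentation_fn='slic',
                        segmentation_kwargs={'n_segments': 15, 'compactness': 20})

# Placeholder image; in practice this would be a preprocessed photo, e.g. of a cat
image = preprocess_input(np.random.rand(299, 299, 3) * 255)
explanation = explainer.explain(image, threshold=.95)
# The explanation contains the anchor: the set of superpixels that is
# sufficient for the model to keep predicting the same class with high precision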

Contrastive Explanation method applied to a CNN trained on MNIST:

[Images: original instance (prediction: 4), pertinent negative (prediction: 9) and pertinent positive (prediction: 4)]
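
A sketch of the CEM explainer producing a pertinent negative (the minimal change that flips the prediction, e.g. a 4 into a 9) and a pertinent positive (the minimal part of the input that preserves the prediction). The untrained placeholder CNN and instance are illustrative assumptions, and older alibi releases may additionally require a TensorFlow session argument:

import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense
from alibi.explainers import CEM

# Placeholder MNIST-style CNN; in practice this would be a trained model
model = Sequential([
    Conv2D(8, 3, activation='relu', input_shape=(28, 28, 1)),
    Flatten(),
    Dense(10, activation='softmax'),
])

x = np.random.rand(1, 28, 28, 1).astype('float32') - .5  # placeholder instance
shape = x.shape

# Pertinent negative: perturb the instance until the predicted class changes
cem_pn = CEM(model, mode='PN', shape=shape, kappa=0., beta=.1,
             feature_range=(-.5, .5), max_iterations=500)
pn = cem_pn.explain(x)

# Pertinent positive: keep only the features needed to sustain the prediction
cem_pp = CEM(model, mode='PP', shape=shape, kappa=0., beta=.1,
             feature_range=(-.5, .5), max_iterations=500)
pp = cem_pp.explain(x)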

Trust scores applied to a softmax classifier trained on MNIST:

[Image: trust scores on MNIST]
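
A runnable sketch of trust scores, using the small scikit-learn digits dataset as a stand-in for MNIST; the classifier choice and parameters are illustrative assumptions:

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from alibi.confidence import TrustScore

X, y = load_digits(return_X_y=True)  # 8x8 digit images, flattened
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)

ts = TrustScore()
ts.fit(X_train, y_train, classes=10)  # builds one k-d tree per digit class
score, closest_class = ts.score(X_test, y_pred)
# A high trust score means an instance lies much closer to its predicted class
# than to any other class, so the prediction is more likely to be reliable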
