
Algorithms for monitoring and explaining machine learning models


Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The initial focus of the library is on black-box, instance-based model explanations.

Goals

  • Provide high quality reference implementations of black-box ML model explanation algorithms
  • Define a consistent API for interpretable ML methods
  • Support multiple use cases (e.g. tabular, text and image data classification, regression)
  • Implement the latest model explanation, concept drift, algorithmic bias detection and other ML model monitoring and interpretation methods

Installation

Alibi can be installed from PyPI:

pip install alibi

This will install alibi with all its dependencies:

  beautifulsoup4
  numpy
  Pillow
  pandas
  requests
  scikit-learn
  spacy
  scikit-image
  tensorflow

To run all the example notebooks, you may additionally run pip install alibi[examples], which will also install the following:

  seaborn
  Keras

Supported algorithms

Black-box model explanation

Model confidence metrics

Example outputs

Anchor method applied to the InceptionV3 model trained on ImageNet:

Prediction: Persian Cat | Anchor explanation (images omitted)
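The anchor method searches for a minimal set of feature conditions (an "anchor") that, when held fixed, keeps the model's prediction stable while the remaining features are perturbed. A minimal sketch of the precision estimate that drives this search, using a hypothetical two-feature binary classifier rather than alibi's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "black-box" classifier over two binary features: it predicts class 1
# exactly when feature 0 is set. (A hypothetical model for illustration.)
def predict(X):
    return (X[:, 0] == 1).astype(int)

def anchor_precision(x, anchor_idx, predict_fn, n_samples=1000):
    """Estimate a candidate anchor's precision: the fraction of perturbed
    samples whose prediction matches the original when the anchored
    features are held fixed and all other features are resampled."""
    X_pert = rng.integers(0, 2, size=(n_samples, x.shape[0]))
    X_pert[:, anchor_idx] = x[anchor_idx]  # pin the anchored features
    return float(np.mean(predict_fn(X_pert) == predict_fn(x[None, :])[0]))

x = np.array([1, 0])
# Anchoring the decisive feature yields high precision; anchoring an
# irrelevant one does not.
prec_decisive = anchor_precision(x, anchor_idx=[0], predict_fn=predict)
prec_irrelevant = anchor_precision(x, anchor_idx=[1], predict_fn=predict)
```

The real algorithm explores candidate anchors with a multi-armed bandit and returns the shortest one whose estimated precision exceeds a threshold; for images the "features" are superpixels rather than raw columns.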

Contrastive Explanation method applied to a CNN trained on MNIST:

Prediction: 4 | Pertinent Negative: 9 | Pertinent Positive: 4 (images omitted)
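The Contrastive Explanation Method explains a prediction through a pertinent negative (a minimal perturbation that flips the prediction, e.g. turning the 4 into a 9) and a pertinent positive (a minimal part of the input that suffices for the prediction). CEM itself solves an optimisation problem, often with an autoencoder regulariser; the crude grid search below only illustrates the pertinent-negative idea on a hypothetical one-feature threshold classifier:

```python
import numpy as np

# Hypothetical threshold classifier: class 1 iff the single feature > 0.5.
def predict(x):
    return int(x > 0.5)

def pertinent_negative(x, predict_fn, step=0.01, max_delta=1.0):
    """Grid-search the smallest perturbation delta (in either direction)
    such that predict(x + delta) differs from predict(x) -- a crude
    stand-in for CEM's optimisation-based search."""
    orig = predict_fn(x)
    for d in np.arange(step, max_delta + step, step):
        for delta in (d, -d):
            if predict_fn(x + delta) != orig:
                return delta
    return None  # no class change within the search radius

# The input 0.4 is classified as 0; the smallest class-flipping
# perturbation pushes it just past the 0.5 boundary.
delta = pertinent_negative(0.4, predict)
```

In the MNIST example above, the same principle applies in pixel space: the pertinent negative is the least change to the 4 that makes the model see a 9.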

Trust scores applied to a softmax classifier trained on MNIST:

(image omitted)
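A trust score compares a test point's distance to the training examples of the predicted class with its distance to the nearest other class: a score above 1 suggests the prediction agrees with the nearest-neighbour structure of the training data. A minimal numpy sketch of the idea (not alibi's implementation, which adds density filtering and efficient nearest-neighbour search; the toy data here is made up):

```python
import numpy as np

def trust_score(X_train, y_train, x, predicted_class):
    """Distance-ratio trust score for a single test point: distance to the
    nearest class other than the predicted one, divided by distance to
    the predicted class."""
    dists = {}
    for c in np.unique(y_train):
        class_pts = X_train[y_train == c]
        dists[c] = np.min(np.linalg.norm(class_pts - x, axis=1))
    d_pred = dists[predicted_class]
    d_other = min(d for c, d in dists.items() if c != predicted_class)
    return d_other / d_pred

# Toy data: class 0 clustered near the origin, class 1 near (5, 5).
X = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 5.1]])
y = np.array([0, 0, 1, 1])

# A point near the class-0 cluster: predicting 0 scores high, 1 scores low.
high = trust_score(X, y, np.array([0.2, 0.2]), predicted_class=0)
low = trust_score(X, y, np.array([0.2, 0.2]), predicted_class=1)
```

Applied to a softmax classifier as in the figure above, low trust scores flag test points where the model's prediction disagrees with the geometry of the training data, even when the softmax confidence is high.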
