DrWhy is a collection of tools for Explainable AI (XAI). It is based on shared principles and a simple grammar for exploration, explanation and visualisation of predictive models.
Please note that DrWhy is under rapid development and is still maturing. If you are looking for a stable solution, please use the mature DALEX package.
Visual Exploration, Explanation and Debugging
The unified grammar behind the DrWhy.AI universe is described in the book Predictive Models: Visual Exploration, Explanation and Debugging.
Lifecycle for Predictive Models
It takes a village to raise a model.
1. Data Acquisition
- dataMaid: a suite of checks for identification of potential errors in a data frame as part of the data screening process
- ggplot2: a system for declaratively creating graphics, based on The Grammar of Graphics
2. Feature Selection
- Model-agnostic variable importance scores. Surrogate learning: train an elastic model and measure feature importance in that model. See DALEX and Model Class Reliance (MCR); a permutation-based sketch is shown below.
- vip: variable importance plots
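A minimal sketch of model-agnostic, permutation-based variable importance, assuming a model already wrapped into an `explainer` object with `explain()` (see the Architecture of DrWhy section at the end of this document).

```r
# Permutation-based variable importance with ingredients; `explainer` is assumed
# to be a DALEX/DALEX2 explainer (see the Architecture of DrWhy section below).
library(ingredients)

fi <- feature_importance(explainer)   # drop-out loss after permuting each feature
plot(fi)
```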
3. Feature Engineering
- SAFE: surrogate learning used to train an elastic model and extract feature transformations from it
- xspliner: uses surrogate black boxes to train interpretable spline-based additive models
- factorMerger: a set of tools for merging factor levels (paper)
- ingredients: a set of tools for model-level feature effects and feature importance (see the sketch below)
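A minimal sketch of model-level feature effects with ingredients, again assuming an `explainer` object created as in the Architecture of DrWhy section; the variable name is illustrative.

```r
# Partial dependence profile for a single feature; `explainer` and the variable
# name "surface" are illustrative (see the Architecture of DrWhy section below).
library(ingredients)

pd <- partial_dependency(explainer, variables = "surface")
plot(pd)
```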
4. Model Tuning
5. Model Validation
- auditor: model verification, validation and error analysis (vignette)
- DALEX: Descriptive mAchine Learning EXplanations (see the validation sketch below)
- iml: an R package for interpretable machine learning
- randomForestExplainer: a set of tools to understand what is happening inside a random forest
- survxai: explanations for survival models (paper)
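A minimal sketch of residual-based model validation, assuming the same `explainer` object; `model_performance()` comes from DALEX, and the auditor package offers more detailed residual diagnostics.

```r
# Residual-based performance summary of the wrapped model; the `explainer`
# object comes from explain() (see the Architecture of DrWhy section below).
library(DALEX)

mp <- model_performance(explainer)
mp          # prints residual-based performance measures
plot(mp)    # distribution of residuals
```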
6. Model Deployment
- breakDown, pyBreakDown and breakDown2: model-agnostic explainers for individual predictions, with interactions (see the sketch below)
- ceterisParibus, pyCeterisParibus, ceterisParibusD3 and ceterisParibus2: Ceteris Paribus plots (what-if plots) for explanations of a single observation
- localModel and live: LIME-like explanations with interpretable features based on Ceteris Paribus curves
- lime: Local Interpretable Model-Agnostic Explanations (R port of the original Python package)
- shapper: an R wrapper for the SHAP Python library
- modelDown: generates a website with HTML summaries for predictive models
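A minimal sketch of instance-level explanations for a single observation, assuming the `explainer` from the Architecture of DrWhy section; `break_down()` is from iBreakDown, `ceteris_paribus()` is from ingredients, and the observation used here is illustrative.

```r
# Instance-level explanations for one observation; `explainer` is assumed to be
# an explainer built on the apartments data (see the Architecture of DrWhy section).
library(iBreakDown)
library(ingredients)

new_apartment <- DALEX::apartments[1, -1]   # an illustrative single observation

bd <- break_down(explainer, new_observation = new_apartment)       # additive attributions
plot(bd)

cp <- ceteris_paribus(explainer, new_observation = new_apartment)  # what-if profiles
plot(cp)
```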
7. Model Maintenance
- drifter: concept drift and concept shift detection for predictive models
- archivist: a set of tools for archiving datasets and plots (paper); see the sketch below
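A minimal sketch of archiving model artifacts with archivist, so that explainers and plots can be restored later; the repository path and the archived `explainer` object are illustrative.

```r
# Archive an artifact (e.g. an explainer) in a local archivist repository and
# restore it later by its hash; the repository path is illustrative.
library(archivist)

createLocalRepo(repoDir = "model_repo")
hash <- saveToLocalRepo(explainer, repoDir = "model_repo")

# ... later, or on another machine with access to the repository:
explainer_restored <- loadFromLocalRepo(hash, repoDir = "model_repo", value = TRUE)
```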
DrWhy.AI indicator panel
Active development and maintenance
These packages are actively developed and have an active maintainer.
- archivist (maintainer: pbiecek)
- DALEX (maintainer: pbiecek)
- auditor (maintainer: agosiewska)
- survxai (maintainer: agosiewska)
- shapper (maintainer: agosiewska)
- iBreakDown (maintainer: pbiecek)
- ingredients (maintainer: pbiecek)
- drifter (maintainer: pbiecek)
- localModel (maintainer: mstaniak)
- modelDown (maintainer: magda-tatarynowicz)
Experimental pre-seed phase (under active development)
- EIX (maintainer: ekarbowiak)
- xspliner (maintainer: krystian8207)
- pyDALEX (maintainer: magda-tatarynowicz)
- SAFE (maintainer: olagacek)
- pyCeterisParibus (maintainer: kmichael08)
- ceterisParibusD3 (maintainer: flaminka)
Experimental or without maintenance (looking for maintainer!!!)
These packages contain useful features and are still in use, but we are looking for an active maintainer.
In the sunset phase, without maintenance
Key features from these packages have been moved to other packages.
- ceterisParibus (development moved to ingredients)
- ceterisParibus2 (development moved to ingredients)
- DALEX2 (development moved to DALEX)
- breakDown (development moved to iBreakDown)
- live (development moved to localModel)
Family of Model Explainers
Architecture of DrWhy
DrWhy works on fully trained predictive models. Models can be created with any tool.
Use the DALEX2 package to wrap a model with the additional metadata required for explanations, such as validation data, a predict function, etc.
Explainers for predictive models can then be created with model-agnostic or model-specific functions implemented in various packages; a minimal example of the wrapping step is sketched below.
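The snippet below is a minimal sketch of this wrapping step, assuming the `explain()` interface from DALEX (DALEX2 exposes an analogous wrapper); the linear model and the bundled apartments data are used only for illustration.

```r
library(DALEX)

# 1. Train a predictive model with any tool; here a plain linear model.
model <- lm(m2.price ~ construction.year + surface + floor + no.rooms,
            data = apartments)

# 2. Wrap the model with the metadata needed for explanations:
#    validation data, true values of the target and a human-readable label.
explainer <- explain(model,
                     data  = apartments[, -1],
                     y     = apartments$m2.price,
                     label = "lm")

# 3. The explainer is the common input for the model-agnostic explainers
#    used throughout this document (ingredients, iBreakDown, auditor, ...).
```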