Notebooks for Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Updated Oct 3, 2019 - Jupyter Notebook
This repository contains demo notebooks (sample code) for the AutoMLx (automated machine learning and explainability) package from Oracle Labs.
A repo with my most popular Kaggle notebooks. I put a lot of effort into them back in the day, so they are highly curated and well documented.
A repo containing various data science applications illustrated with Jupyter notebooks in Python
A brief notebook on the influence function (IF) for classical generative models (e.g., k-NN, KDE, GMM)
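As a rough illustration of the idea (not this notebook's code), a leave-one-out finite difference approximates each training point's influence on a KDE's test log-density; scikit-learn's KernelDensity, the bandwidth, and the synthetic data here are all assumptions:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))   # training data for the density model
x_test = np.array([[0.5]])      # point whose log-density we explain

full = KernelDensity(bandwidth=0.5).fit(X).score(x_test)

# Leave-one-out influence: how much dropping each training point changes
# the test log-density, a finite-difference stand-in for the influence function.
influence = np.array([
    full - KernelDensity(bandwidth=0.5).fit(np.delete(X, i, axis=0)).score(x_test)
    for i in range(len(X))
])
print("most influential training point:", X[influence.argmax(), 0])
```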
A notebook exploring the paper "InterpretML: A Unified Framework for Machine Learning Interpretability" (H. Nori, S. Jenkins, P. Koch, and R. Caruana, 2019).
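For context, the interpret package's glassbox models expose the unified explain API the paper describes; a minimal sketch using an Explainable Boosting Machine, with the breast-cancer dataset chosen arbitrarily:

```python
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()  # glassbox model: accurate and inspectable
ebm.fit(X_train, y_train)

show(ebm.explain_global())                        # global term importances (notebook dashboard)
show(ebm.explain_local(X_test[:5], y_test[:5]))   # per-example attributions
```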
Implementing text classification algorithms on the 20 Newsgroups dataset, with Python
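A minimal baseline for this task, assuming scikit-learn's bundled copy of the dataset and a TF-IDF plus logistic regression pipeline (the repo may use different algorithms):

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

# Strip headers/footers/quotes so the model learns topic words, not metadata.
train = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))
test = fetch_20newsgroups(subset="test", remove=("headers", "footers", "quotes"))

clf = make_pipeline(TfidfVectorizer(max_features=50_000),
                    LogisticRegression(max_iter=1000))
clf.fit(train.data, train.target)
print("test accuracy:", accuracy_score(test.target, clf.predict(test.data)))
```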
Example notebooks on how to create concept-based explanations of deep neural networks.
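One common recipe for concept-based explanations is a TCAV-style concept activation vector: train a linear probe separating concept examples from random ones in activation space, then check how often class gradients point along that direction. The sketch below uses synthetic activations and gradients as stand-ins for a real network's internals:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical layer activations: rows are examples, columns are units.
concept_acts = rng.normal(loc=1.0, size=(100, 64))  # examples showing the concept
random_acts = rng.normal(loc=0.0, size=(100, 64))   # random counterexamples

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)

# The concept activation vector (CAV) is the normal of the separating hyperplane.
cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
cav /= np.linalg.norm(cav)

# TCAV-style score: fraction of class gradients with positive projection on the CAV.
grads = rng.normal(size=(50, 64))  # stand-in for d(logit)/d(activations)
print("TCAV score:", float(np.mean(grads @ cav > 0)))
```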
A Docker environment and notebooks to experiment with the extraction of Moore machines from RNN RL policies
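The usual extraction recipe quantizes the RNN's continuous hidden states into a finite state set and reads transitions off rollout traces; a sketch with synthetic hidden states and k-means as the (assumed) quantizer, not necessarily this repo's method:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical rollout trace of an RNN policy: one hidden state and action per step.
hidden_states = rng.normal(size=(500, 32))  # h_t collected during rollouts
actions = rng.integers(0, 4, size=500)      # a_t = policy(h_t)

# Quantize continuous hidden states into k discrete states.
k = 8
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(hidden_states)

# Moore machine: the output (action) is attached to each state, by majority vote.
state_action = {s: int(np.bincount(actions[labels == s]).argmax()) for s in range(k)}

# Transition structure read off consecutive steps (input symbols omitted for brevity).
transitions = {}
for t in range(len(labels) - 1):
    transitions.setdefault(int(labels[t]), set()).add(int(labels[t + 1]))
print(state_action)
```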
Code accompanying a review article on interpretability and XAI. Includes examples for both simple (sparse regression) and sophisticated (concept bottlenecks) approaches, using notebooks that can be run in a few minutes.
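On the simple end of that spectrum, sparse regression is interpretable because the L1 penalty zeroes out most coefficients, leaving a short list of features to read; a quick sketch on synthetic data (not the article's notebooks):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# Synthetic problem where only 5 of 30 features truly matter.
X, y, true_coef = make_regression(n_samples=200, n_features=30, n_informative=5,
                                  coef=True, noise=5.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)
selected = np.flatnonzero(lasso.coef_)  # features surviving the L1 penalty
print("features kept:", selected)
print("truly informative:", np.flatnonzero(true_coef))
```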