Notebooks for Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Updated Oct 3, 2019 - Jupyter Notebook
Repo with my most popular Kaggle notebooks. I put a lot of effort into them, so they are highly curated and well documented.
This repository contains demo notebooks (sample code) for the AutoMLx (automated machine learning and explainability) package from Oracle Labs.
A repo containing different data science applications illustrated with Jupyter notebooks in Python
A notebook exploring the paper "InterpretML: A Unified Framework for Machine Learning Interpretability" (H. Nori, S. Jenkins, P. Koch, and R. Caruana, 2019).
A brief notebook on influence functions (IF) for classical generative models (e.g., k-NN, KDE, GMM)
Implementing text classification algorithms on the 20 Newsgroups dataset, in Python
Example notebooks on how to create concept-based explanations of deep neural networks.
A Docker environment and notebooks for experimenting with the extraction of Moore machines from RNN RL policies