Code for the AAAI 2018 paper "Beyond Sparsity: Tree Regularization of Deep Models for Interpretability".

NumPy Autograd Implementation

This repository includes a toy signal-and-noise HMM dataset and a basic implementation of the tree-regularized GRU model in NumPy Autograd. An arXiv copy of the paper is available at https://arxiv.org/abs/1711.06178.

For more on NumPy Autograd, see https://github.com/HIPS/autograd.
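
As a quick illustration of what Autograd provides (this is the canonical example from the Autograd README), gradients of plain NumPy code come from a single call to grad:

import autograd.numpy as np  # thin wrapper around NumPy
from autograd import grad

def tanh(x):
    y = np.exp(-2.0 * x)
    return (1.0 - y) / (1.0 + y)

grad_tanh = grad(tanh)  # returns a function computing d tanh / dx
print(grad_tanh(1.0))   # 0.419974... = 1 - tanh(1)**2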

Setup

Create a new conda environment and activate it.

conda create -n interpret python=2
source activate interpret

Install the necessary libraries.

pip install -r requirements.txt

Instructions

First, generate the toy dataset. This will create a directory ./data and populate it with pickle files.

python datasets.py
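
As a hypothetical sanity check, you can confirm the pickles load (the exact file names depend on what datasets.py writes, so none are assumed here):

import os
import pickle

# Unpickle every file datasets.py wrote to ./data and report its type.
for fname in sorted(os.listdir('./data')):
    with open(os.path.join('./data', fname), 'rb') as f:
        obj = pickle.load(f)
    print(fname, type(obj))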

Then, you can train the tree-regularized GRU. This will create a directory ./trained_models and dump the trained model together with a PDF of the final decision tree. You can set the regularization strength as a command-line argument; it weights the penalty on the surrogate decision tree's average path length (apl stands for average path length).

python train.py --strength 1000.0
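
For intuition about what this penalty measures, the sketch below computes the average path length of an ordinary scikit-learn decision tree fit to mimic a model's predictions. This is only an illustration with placeholder data and an assumed helper name (average_path_length); the repo's autograd implementation instead trains a small differentiable surrogate so the APL can be used as a training penalty.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def average_path_length(tree_clf, X):
    # decision_path gives an (n_samples, n_nodes) indicator matrix;
    # each row's sum is the number of nodes on that sample's
    # root-to-leaf path. Subtract 1 so the leaf does not count
    # as a decision.
    paths = tree_clf.decision_path(X)
    return paths.sum(axis=1).mean() - 1.0

X = np.random.randn(200, 10)                 # placeholder inputs
y_hat = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in for GRU outputs
surrogate = DecisionTreeClassifier(min_samples_leaf=5).fit(X, y_hat)
print(average_path_length(surrogate, X))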

Finally, evaluate the trained model on a held-out test set. This will print out AUC statistics.

python test.py
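
The AUC reported is the standard area under the ROC curve. As a minimal reference for how it is computed (placeholder labels and scores, using scikit-learn):

import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1])             # held-out labels (placeholder)
y_score = np.array([0.1, 0.4, 0.35, 0.8])   # predicted probabilities (placeholder)
print(roc_auc_score(y_true, y_score))       # 0.75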

Please reach out with any questions or to report potential bugs.
