Pycon.co lightning talk "Demystifying Machine Learning"
Short Talk - Demystifying Machine Learning Using LIME

Machine learning models are often dismissed on the grounds of a lack of interpretability. A popular story about modern algorithms goes as follows: simple linear statistical models such as logistic regression yield interpretable models, whereas advanced models such as random forests or deep neural networks are black boxes, meaning it is nearly impossible to understand how the model arrives at a prediction.

In this talk I present a case study of LIME. LIME stands for Local Interpretable Model-agnostic Explanations, and its objective is to explain the output of any classifier so that a human can understand individual predictions.
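
A minimal sketch of how LIME is typically used from Python, assuming the `lime` package and scikit-learn are installed. The "label" column name, the class names, and the CSV loading step are illustrative assumptions, not taken from the notebook (the repository ships the data as phishing.csv.zip):

```python
# Minimal LIME sketch. Assumptions: phishing.csv has been extracted locally,
# its target column is called "label", and all other columns are numeric.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = pd.read_csv("phishing.csv")                       # hypothetical loading step
X = data.drop(columns=["label"]).values                  # "label" is an assumed column name
y = data["label"].values
feature_names = list(data.drop(columns=["label"]).columns)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "black box" model: a random forest classifier.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# LIME explains one prediction at a time by fitting a simple, interpretable
# surrogate model in the neighbourhood of the instance being explained.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["legitimate", "phishing"],              # illustrative class names
    mode="classification",
)
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The printed pairs show which feature conditions pushed this single prediction towards or away from the predicted class; because LIME only needs `predict_proba`, the same code works for any classifier.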

Notebook: Demystifying_Machine_Learning_using_LIME.ipynb

Slides of this talk: demystifying_machine_learning_using_lime.pdf