Machine learning algorithms are widely used in data science applications and have significant potential to improve predictions and understanding of social scientific processes. However, machine learning models generally do not explain their predictions -- they simply seek to minimize some loss function and provide the predicted probability of an event for a given observation. In many applications, researchers need to be able to explain why the model made one prediction over another. This emphasis on interpretability and explanation is directly relevant to many social scientific questions, and can provide necessary context for decision makers who need to use machine learning models but lack a strong technical background. In this workshop we introduce several techniques for interpreting black-box models using model-agnostic methods.
- Define interpretation and explanation, and their importance to machine learning
- Identify model-agnostic methods for creating interpretations/explanations
- Implement techniques for creating global model-agnostic explanations in R
- Implement techniques for creating local model-agnostic explanations in R
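The global/local distinction in the objectives above can be sketched with DALEX, the package featured in the resources below. This is a minimal, illustrative sketch, not the workshop's exact code; it assumes the ranger package for model fitting and uses the `titanic_imputed` data shipped with DALEX.

```r
# Sketch: wrap any fitted model in a model-agnostic DALEX explainer,
# then produce one global and one local explanation.
library(DALEX)
library(ranger)

# Fit an arbitrary black-box model (a probability forest here)
model <- ranger(survived ~ ., data = titanic_imputed, probability = TRUE)

# Build the explainer: the model, its features, and the observed outcome
explainer <- explain(
  model,
  data = titanic_imputed[, setdiff(names(titanic_imputed), "survived")],
  y = titanic_imputed$survived
)

# Global explanation: permutation-based variable importance
vip <- model_parts(explainer)
plot(vip)

# Local explanation: break-down attribution for a single observation
bd <- predict_parts(explainer, new_observation = titanic_imputed[1, ])
plot(bd)
```

The same `explainer` object feeds both kinds of output, which is the core of the model-agnostic approach: the explanation functions only need predictions, not model internals.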
This workshop is designed for individuals with introductory-to-intermediate knowledge of machine learning algorithms, as well as experience training machine learning models using R. Prior experience with tidymodels is helpful, but not required.
Room 295 in 1155 E 60th St.
- Register for this workshop. Due to the current public health crisis, all participants must register in advance using this form.
- Please sign up for a free RStudio Cloud account.
- Once you have created your RStudio Cloud account, join the workshop organization.
- Explanatory Model Analysis by Przemyslaw Biecek and Tomasz Burzykowski - a book written by the co-authors of DALEX which outlines the intuition and methodology of many of the interpretation/explanation methods we discuss in the workshop. Also includes code examples in R and Python.
- Interpretable Machine Learning by Christoph Molnar - another textbook on interpretability and explanation in machine learning, written by the author of iml, an alternative package for interpreting and explaining models in R using model-agnostic methods.
- Maksymiuk, S., Gosiewska, A., & Biecek, P. (2020). Landscape of R packages for eXplainable Artificial Intelligence. arXiv preprint arXiv:2009.13248 - an exhaustive survey of all known R packages which implement eXplainable Artificial Intelligence (XAI).
- Explaining models and predictions, in Tidy Modeling with R by Max Kuhn and Julia Silge - a book chapter demonstrating how to integrate model explanations into the tidymodels workflow.