Time: 12:00 pm – 3:00 pm Pacific Time (Vancouver), February 3, 2021
Zoom Location: TBD
Online Resources:
Speakers:
The goal of this tutorial is to provide a systematic view of the current knowledge relating explainability to several key outstanding concerns about the quality of ML models; in particular, robustness, privacy, and fairness. We will discuss the ways in which explainability can inform questions about these aspects of model quality, and how methods for improving them that are emerging from recent research in the AI, Security & Privacy, and Fairness communities can in turn lead to better outcomes for explainability. We aim to make these findings accessible to a general AI audience, including not only researchers who want to further engage with this direction, but also practitioners who stand to benefit from the results, and policy-makers who want to deepen their technical understanding of these important issues.
- Background of XAI Methods
- Evaluation Criteria for Model Explanation
- Explanations and Privacy
- Explanations and Fairness
- Explanations and Model Robustness
The target audience of this tutorial is researchers, practitioners, and policy-makers who are interested in the role that explainability plays in applications of AI. We expect audience members to be familiar with supervised learning and to have a working knowledge of how optimization methods are used to train models. We do not expect familiarity with problems of privacy, fairness, or robustness.
TruLens is a library containing attribution and interpretation methods for deep nets. To quickly play around with the TruLens library, check out the following Colab notebooks:
More resources are available on our GitHub page: TruLens
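For a flavor of the attribution methods the tutorial covers, below is a minimal sketch of integrated gradients written in plain PyTorch. This is a generic illustration, not the TruLens API; the names `model`, `x`, `target`, and `baseline` are hypothetical placeholders, and the all-zero baseline and 50-step approximation of the path integral are common default assumptions.

```python
import torch

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Sketch of integrated gradients: attribute the `target` logit of
    `model` on input `x` to individual input features."""
    # Assumed default: an all-zero baseline input.
    if baseline is None:
        baseline = torch.zeros_like(x)
    # Points interpolated on the straight line from baseline to input.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = (baseline + alphas * (x - baseline)).detach().requires_grad_(True)
    # Gradient of the target logit, summed over the interpolation batch.
    score = model(path)[:, target].sum()
    grads = torch.autograd.grad(score, path)[0]
    # Approximate the path integral and scale by the input difference.
    return (x - baseline) * grads.mean(dim=0)
```

Assuming a trained classifier `net` (in eval mode) and a single input `img` of shape (C, H, W), `integrated_gradients(net, img, target=pred)` returns an attribution map of the same shape as `img`, where `pred` is the index of the class to be explained.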