
ACM FAT*2020 Tutorial

Vijay Arya edited this page Jan 25, 2020 · 19 revisions

AI Explainability 360

Date: Monday, January 27, 2020

13:00-14:30: Tutorial Session 1
14:30-15:00: Coffee break
15:00-16:30: Tutorial Session 2

Location: Barcelona, Spain at the Barceló Sants hotel, Room MR7.

Presenters: Vijay Arya, Amit Dhurandhar, Dennis Wei

ACM FAT* Tutorials [link].
ACM FAT* Program Schedule [link].

Tutorial slides


This tutorial will teach participants to use and contribute to a new open-source Python package named AI Explainability 360 (AIX360), a comprehensive and extensible toolkit that supports interpretability and explainability of data and machine learning models.

Our aim is to help educate multiple audiences, including data scientists working in different application domains, social scientists, domain experts, as well as machine learning researchers. To this end, we will present an overview of AI explainability, common terminology, an interactive web demo, and detailed Jupyter notebooks covering use cases in different domains.

A major motivation for creating AIX360 is that there are many ways to explain: data vs. model, direct vs. post-hoc, local vs. global. We will present a taxonomy to help practitioners navigate the explainability space. The toolkit itself includes eight state-of-the-art algorithms covering different modes of explanation along with proxy explainability metrics, and the interactive web demo will feature three of these algorithms.
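As a concrete illustration of one point in this space — a post-hoc, local explanation of a black-box model — the following self-contained sketch fits a weighted linear surrogate around a single prediction. This is plain NumPy, not the AIX360 API; it only illustrates the general idea (the toolkit's own algorithms are covered in the notebooks).

```python
import numpy as np

rng = np.random.default_rng(0)

# A "black-box" model: a nonlinear function of two features.
def black_box(X):
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

x0 = np.array([1.0, 2.0])  # the instance whose prediction we want to explain

# Sample perturbations in a small neighbourhood of x0 and query the model.
Z = x0 + 0.1 * rng.standard_normal((200, 2))
y = black_box(Z)

# Fit a proximity-weighted linear surrogate (closer samples count more).
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.01)
A = np.hstack([Z, np.ones((200, 1))])  # add an intercept column
coef, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)

# The surrogate's slopes are local feature attributions at x0:
# approximately [2.0, 3.0], matching the gradient of the black box there.
print(coef[:2])
```

The local attributions recover the gradient of the underlying function at `x0`: the derivative of `x1**2` at 1 is 2, and the coefficient of `x2` is 3. Global or directly interpretable methods in the taxonomy answer different questions and would not be built this way.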


  • Introduction to AI explainability and AIX360
  • Background and glossary
  • Interactive web demo
  • Taxonomy for choosing explanation algorithms
  • Jupyter notebook examples
    • Consumer lending
    • Health and nutrition
    • Medical images (dermoscopy)
  • Future directions


The tutorial is aimed at an audience with a range of backgrounds and levels of computer science expertise. For all audience members, and especially those unfamiliar with Python programming, the interactive web demo will serve as a grounded introduction to concepts and capabilities. Through the explainability taxonomy, we will teach all participants which type of explanation method is most appropriate for a given use case, which is beneficial regardless of technical background. The three Jupyter notebook examples in different application domains will give data scientists and developers hands-on experience with the toolkit, while others will be able to follow along by viewing rendered versions of the notebooks.

Instructions for Attendees

  1. Please join the Slack channel dedicated to this tutorial. It contains important information to aid you with this hands-on tutorial, including the installation guide we will be using.

    1. Join the AIX360 Slack channel: Instructions
    2. Subscribe to the #fat-tutorial-2020 channel. You can do this by clicking on "Channels" in Slack and searching for "fat-tutorial-2020".
  2. Please bring your laptop as this is a hands-on tutorial.

  3. Please install the Anaconda Python distribution and the AIX360 library ahead of time by following these instructions:

    1. Install the Anaconda Python distribution (Python 3.x version) by following the instructions here:
    2. Create a Python virtual environment, clone the AIX360 GitHub repository, and install it by following the instructions here:
    3. Run a Jupyter notebook server locally on your machine by following the instructions here:
    4. Register and download FICO and ISIC datasets:
      • FICO Dataset
      • ISIC Dataset (-> "Participate in this phase" -> log in, e.g. using your Google/GitHub account -> download the training and ground truth data zip files)
    5. Try running the following Jupyter notebooks locally on your machine:
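The setup steps above correspond roughly to the following commands. This is a sketch only: the repository URL and Python version are assumptions based on the AIX360 project at the time, so check them against the installation guide linked in the Slack channel before running.

```shell
# 1. After installing Anaconda, create and activate a virtual environment.
#    (Python 3.6 is assumed here; confirm the supported version in the guide.)
conda create -n aix360 python=3.6
conda activate aix360

# 2. Clone the AIX360 repository and install it in editable mode.
#    (Repository URL assumed; verify against the installation guide.)
git clone https://github.com/IBM/AIX360.git
cd AIX360
pip install -e .

# 3. Start a local Jupyter notebook server.
jupyter notebook
```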

