Explainable-Ai-Comps-2024

GitHub repository for the Explainable AI Comps group 2024

Advised by Professor Anna Rafferty at Carleton College

Group Members

Thomas Pree, Adrian Boskovic, Chris Melville, Josh Moore, Sam Johnson-Lacoss, and Lev Shuster (all equal contributors)

Website

This project culminated in a website available here

Project Description

Link to official description

We are living through a shift in the standards for ethical machine learning, one marked by the growing need to explain artificial intelligence systems and, in particular, their predictions. As discussed in Ribeiro et al. (2016, p. 1), explaining an AI's prediction means presenting the audience with visualizations of the factors that led the model to its decision, which builds users' trust in the model and exposes possible errors in its structure. With regulations like the EU's Right to Explainability and the United States' proposed AI Bill of Rights, a machine learning model can no longer be a simple "black box": to avoid legal consequences, the creators of high-impact models must be able to justify their predictions. Over the past few years, the field has become inundated with explanation methods, each carving out its own niche. In such a dense field, how can one quantify a method's efficacy? Which method would a jury trust?

Through this project, we will explore three major avenues for model explainability across two contrasting machine-learning tasks: image classification with a ResNet and tabular classification on MOOC data. Specifically, we will apply Shapley, LIME, and Anchoring explanations to two separate models with distinct architectures, one specializing in image data and the other in tabular data; a rough sketch of the tabular workflow appears below. The project will culminate in a website housing a comprehensive analysis of each method: the highlights of its approach, how it compares to the others, the literature surrounding it, and benchmarks of its performance, including a user study of Carleton College students with varying technical backgrounds. The study, inspired by Ribeiro et al. (2018), asks users to predict alongside a model after being shown varying amounts and types of explanation. Together with these analyses, it serves to gauge the public's perception of each method.
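
As a rough illustration of the tabular side of this pipeline, the sketch below applies SHAP and LIME to a generic scikit-learn classifier. The dataset, model, and feature names are placeholders, not the project's actual MOOC model, and the Anchoring step is only noted in a comment.

```python
# Minimal sketch: explaining a tabular classifier with SHAP and LIME.
# The data and model here are placeholders, not the project's MOOC model.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder tabular data standing in for the MOOC dataset.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# SHAP: model-agnostic KernelExplainer over a small background sample.
background = shap.sample(X_train, 50)
shap_explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = shap_explainer.shap_values(X_test[:1], nsamples=200)
print("SHAP values:", shap_values)

# LIME: local linear surrogate fit around a single instance.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["negative", "positive"],
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print("LIME weights:", lime_exp.as_list())

# Anchoring (e.g., via the anchor-exp or alibi packages) would be applied
# to the same instance in a similar model-agnostic fashion.
```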