Overview

Unintended bias is a major challenge for machine learning systems. In this tutorial, we will demonstrate a way to measure unintended bias in a text classification model using a large set of online comments that have been labeled for toxicity and identity references. We will provide participants with starter code that builds and evaluates a machine learning model, written using open source Python libraries. Using this code, participants can explore different ways to measure and visualize model bias. At the end of this tutorial, participants should walk away with new techniques for bias measurement.
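
As a rough illustration of the kind of measurement the starter code enables, the sketch below computes a per-identity ("subgroup") AUC and compares it to the overall AUC. This is a minimal, hedged example, not the tutorial's actual code: the column names (label, score, mentions_identity_a) and the synthetic data are illustrative assumptions.

```python
# Minimal sketch of per-identity bias measurement, assuming a pandas
# DataFrame with a binary "label" column, a model "score" column, and
# boolean identity columns (all names here are illustrative).
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_auc(df, subgroup, label="label", score="score"):
    """AUC computed only on examples that reference the given identity.

    A subgroup AUC much lower than the overall AUC suggests the model
    has trouble separating toxic from non-toxic comments that mention
    that identity, i.e. a potential unintended bias.
    """
    subset = df[df[subgroup]]
    return roc_auc_score(subset[label], subset[score])

if __name__ == "__main__":
    # Tiny synthetic example; real evaluations would use the labeled corpus.
    df = pd.DataFrame({
        "label": [1, 0, 1, 0, 1, 0, 1, 0],
        "score": [0.9, 0.2, 0.8, 0.1, 0.6, 0.7, 0.4, 0.5],
        "mentions_identity_a": [True, True, False, False,
                                True, True, False, False],
    })
    print(f"overall AUC: {roc_auc_score(df['label'], df['score']):.2f}")
    print(f"subgroup AUC: {subgroup_auc(df, 'mentions_identity_a'):.2f}")
```

A large gap between the two numbers is one simple signal of bias; the notebook explores richer measurements and visualizations along these lines.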

Prerequisites

Participants should have a basic knowledge of Python and should bring a laptop so they can experiment with the Python notebook.

Links

Our interactive Python notebook is available on Colab. To use the notebook, click Connect in the top-right corner, then press Shift+Enter to run a cell.

The slides for this tutorial are available on GitHub.