This git repo contains all of the code required to run and reproduce the results from the paper BLIND: Bias Removal With No Demographics (ACL 2023). For questions, write to orgad.hadas@cs.technion.ac.il.


BLIND

This repo contains the code for Debiasing NLP Models Without Demographic Information, which appeared in ACL 2023. It includes code to train models on the two tasks presented in the paper - Bios (occupation classification) and Moji (sentiment classification) - to test the models with all of the fairness metrics from the paper (up to 10 different metrics!), and to probe the models for demographic information (Section 5.2 in the paper).

For questions, write to Hadas Orgad.

*** Each of the scripts accepts many possible configuration parameters; run a script with --help to see its options.

*** The scripts use Weights & Biases (wandb) for logging the training process and test results, so make sure you configure your wandb account first.
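The one-time setup can be sketched as follows; WANDB_MODE=offline is wandb's documented switch for logging locally without syncing, in case you prefer not to use an account:

```shell
# One-time Weights & Biases setup. "wandb login" syncs runs to your account;
# setting WANDB_MODE=offline instead logs locally without syncing.
export WANDB_MODE=offline   # option 1: run without syncing to wandb
# wandb login               # option 2: sync to wandb (prompts for an API key)
```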

Requirements

Install the requirements with:

pip install -r requirements.txt

Bias in bios (occupation classification)

Dataset

You will need to obtain the dataset by following the instructions from the original paper, Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting. (Or write to me - as of the time of writing, I have a copy of the dataset.)

Change directory to bios to run the following scripts.

Training

A. extract tokens

Before training, you need to extract the tokens from the dataset with extract_tokens.py.

You should have the following file: data/biosbias/BIOS.pkl

Then run, for example: python extract_tokens.py --type raw --model bert-base-uncased

For the setting without finetuning, you need to extract vectors instead. The extracted vectors are also used for probing.
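As a quick pre-flight check before extracting, you can verify that the pickle is in place (a minimal sketch; the path is the one given above):

```shell
# Check that the Bios dataset file exists before running extract_tokens.py.
if [ -f data/biosbias/BIOS.pkl ]; then
  status="ok"
else
  status="missing - see the Dataset section above"
fi
echo "BIOS.pkl: $status"
```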

B. running the training script

Training is done with train.py. Example:

python train.py --model bert-base-uncased --data raw --seed 0

This will train without any debiasing algorithm.

To run with the DFL loss (with demographics):

python train.py --model bert-base-uncased --data raw --seed 0 --dfl_gamma 16 --temp 1

To run BLIND, just add --no_group_labels:

python train.py --model bert-base-uncased --data raw --seed 0 --dfl_gamma 16 --temp 1 --no_group_labels
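Training across several seeds can be sketched as a small loop (the seed list here is an assumption; the flags are copied from the BLIND example above). This version only collects and prints the commands as a dry run; run each line to actually train:

```shell
# Build one BLIND training command per seed (dry run: commands are printed,
# not executed). Remove the capture and run the command directly to train.
cmds=$(for seed in 0 1 2 3 4; do
  printf 'python train.py --model bert-base-uncased --data raw --seed %s --dfl_gamma 16 --temp 1 --no_group_labels\n' "$seed"
done)
printf '%s\n' "$cmds"
```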

Testing

Example for testing a BLIND model:

python test.py --model bert-base-uncased --data raw --seed 0 --dfl_gamma 16 --temp 1 --no_group_labels --split test

Moji (sentiment classification)

Dataset

You'll first need to get the dataset from Elazar and Goldberg and place it at ../data/moji/sentiment_race/, for instance: ../data/moji/sentiment_race/neg_neg.
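A hedged sanity check for the data layout; only neg_neg is named above, and the other three file names are assumptions based on the dataset's usual sentiment/race combinations:

```shell
# Count how many of the (assumed) Moji sentiment/race files are missing.
missing=0
for f in pos_pos pos_neg neg_pos neg_neg; do
  [ -e "../data/moji/sentiment_race/$f" ] || missing=$((missing + 1))
done
echo "$missing file(s) missing"
```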

Change directory to moji to run the following scripts.

Training

A. extract tokens

To finetune, you will first need to extract tokens to a file, for example:

python extract_tokens.py --model bert-base-uncased

B. running the training script

Example for running BLIND:

python train.py --model bert-base-uncased --data raw --seed 0 --dfl_gamma 16 --temp 1 --no_group_labels

Testing

Example for testing a BLIND model:

python test.py --model bert-base-uncased --data raw --seed 0 --dfl_gamma 16 --temp 1 --no_group_labels --split test
