Code for the paper "Challenges in Measuring Bias in Open-Ended Language Generation" presented as oral at the 4th Workshop on Gender Bias in NLP at NAACL 2022.
Be your own journalist!
Debiasing word embeddings via a uniform linear projection of words in the embedding space, plus evaluation metrics.
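Linear-projection debiasing of the kind this repository describes is typically a one-line operation: remove each word vector's component along a precomputed bias direction. A minimal sketch, assuming a NumPy matrix of embeddings and a hypothetical `bias_direction` (e.g. the difference of two gendered word vectors); this is an illustration of the general technique, not the repository's exact implementation:

```python
import numpy as np

def debias(vectors: np.ndarray, bias_direction: np.ndarray) -> np.ndarray:
    """Project out the bias direction from every row of `vectors`.

    vectors: (n, d) array of word embeddings.
    bias_direction: (d,) vector, e.g. emb["he"] - emb["she"] (hypothetical).
    """
    # Normalize the bias axis so the projection is a pure component removal.
    g = bias_direction / np.linalg.norm(bias_direction)
    # Subtract each vector's component along g: v' = v - (v . g) g
    return vectors - np.outer(vectors @ g, g)

# Example: after debiasing, every vector is orthogonal to the bias axis.
emb = np.array([[1.0, 2.0], [3.0, 4.0]])
axis = np.array([1.0, 0.0])
clean = debias(emb, axis)
```

After the projection, `clean @ axis` is (numerically) zero for every word, while the components orthogonal to the bias axis are untouched.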
Exposing Algorithmic Bias with Canonical Sets
BiasImpact - Get more insights about the news you read
Code for the paper 'Robust Pronoun Use Fidelity with English LLMs: Are they Reasoning, Repeating, or Just Biased?'
Can Chatbots Truly Be 'Unbiased'? Dissertation (UP847988)
Code for paper "Measuring Embedded Human-like Biases in Face Recognition Models"
A fairness library in PyTorch.
All code related to the Try Before you Bias (TBYB) tool, based on the paper "Quantifying Bias in Text-to-Image Generative Models". You can access a hosted TBYB web service (and compare evaluations with other users) via: https://huggingface.co/spaces/JVice/try-before-you-bias
A bias bounty competition for income prediction. Using the pointer decision list method to improve group accuracy.
Assignment: Evaluating Gender Bias in BERT Context Embeddings
Source Code for User Bias Removal in Fine Grained Sentiment Analysis (CODS-COMAD 2018, DAB@CIKM 2017)
Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Paper presented at the MICCAI 2023 conference.