ChaiWithPy - the technical blog with a dash of tea
Repository for the paper STOP! Benchmarking Large Language Models with Sensitivity Testing on Offensive Progressions (EMNLP 2024)
Python library for analyzing data quality and its impact on model performance across classification and object-detection tasks.
Official repository for the paper "ALERT: A Comprehensive Benchmark for Assessing Large Language Models’ Safety through Red Teaming"
This repository contains a console-interface name-ethnicity classifier
Multi-Calibration & Multi-Accuracy Boosting for R
Scan your AI/ML models for problems before you put them into production.
Implements a Delphi-of-GANs approach to detect bias in pre-trained language models, providing a novel way to assess and compare bias levels across models along multiple dimensions such as gender, race, and age.
Study on the effect of masking the ROI in medical images to evaluate potential bias/shortcuts in datasets
All code related to the Try Before You Bias (TBYB) tool, based on the paper "Quantifying Bias in Text-to-Image Generative Models". A hosted TBYB web service (where you can compare your evaluations with other users) is available at: https://huggingface.co/spaces/JVice/try-before-you-bias
Tools for diagnostics and assessment of (machine learning) models
HonestyMeter: An NLP-powered framework for evaluating objectivity and bias in media content, detecting manipulative techniques, and providing actionable feedback.
This repo contains my coding notebook for the tutorial series I made for the beginner-level bias bounty challenge hosted by Humane Intelligence, where I am an AI Ethics Fellow.
A multi-view panorama of Data-Centric AI: Techniques, Tools, and Applications (ECAI Tutorial 2024)
Detecting & Mitigating Bias: Gender, Age & Site
Tool to evaluate biases in the SinteticoXL models
A project on bias detection in transformer-based LLMs, with a weakly supervised approach.
Pytorch implementation of 'Explaining text classifiers with counterfactual representations' (Lemberger & Saillenfest, 2024)
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
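Toolkits of this kind typically report group-level metrics such as demographic parity difference. A minimal, library-free sketch of that metric (the function name and toy data are illustrative, not taken from any of the repositories above):

```python
def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between the most- and
    least-favored groups; 0.0 means parity across groups."""
    rates = {}
    for g in set(group):
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        rates[g] = sum(preds) / len(preds)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Toy example: group "a" receives positives at 2/3, group "b" at 1/3,
# so the demographic parity difference is 1/3.
preds  = [1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)
```

Real fairness libraries compute many such metrics (equalized odds, disparate impact, calibration gaps) and pair them with mitigation algorithms; this sketch only shows the shape of the simplest dataset-level check.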