This repository contains demo notebooks (sample code) for the AutoMLx (automated machine learning and explainability) package from Oracle Labs.
Updated Oct 13, 2023
Build fair and safe Machine Learning models in Python
Repository containing sample datasets, models, and notebooks to get started with EXPAI.
A collection of notebooks to explore bias, fairness and explainability of machine learning models
Jupyter notebook simulating fairness metric results for race/ethnicity groups in a process that depends on age only
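The idea behind that notebook can be sketched in a few lines: if two groups have different age distributions, a selection rule that uses only age will still show a group-level disparity on standard fairness metrics such as the disparate impact ratio. The sketch below is a minimal simulation under assumed parameters (group names, age distributions, and the age-40 cutoff are all illustrative, not taken from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two hypothetical groups with different age distributions (illustrative assumption).
group = rng.choice(["A", "B"], size=n)
age = np.where(group == "A",
               rng.normal(45, 8, n),   # group A skews older
               rng.normal(30, 8, n))   # group B skews younger

# The "process" selects purely on age -- group membership is never consulted.
selected = age >= 40

# Per-group selection rates and the disparate impact ratio (unprivileged / privileged).
rate = {g: selected[group == g].mean() for g in ("A", "B")}
di_ratio = rate["B"] / rate["A"]
print(f"selection rates: {rate}, disparate impact ratio: {di_ratio:.2f}")
```

Because group B is younger on average, its selection rate is far lower and the disparate impact ratio falls well below the conventional 0.8 ("four-fifths") threshold, even though the rule itself never looks at group membership.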
This notebook represents my personal code, notes, and reflections for the Manning liveProject titled "Mitigate Machine Learning Bias: Shap and AIF360" by Michael McKenna. Any citations or references to original course material retain the original author copyright and ownership. Personal code is licensed under the MIT License.