Sample project using IBM's AI Fairness 360, an open source toolkit for detecting, examining, and mitigating discrimination and bias in machine learning (ML) models throughout the AI application lifecycle.


jolares/ai-ethics-fairness-and-bias


Example AI Ethics Fairness Practices

TODO: Link to Blog Post Workshop

References

  • IBM's AI Fairness 360: This extensible open source toolkit can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle.

  • Google's What-If-Tool: Using WIT, you can test performance in hypothetical situations, analyze the importance of different data features, and visualize model behavior across multiple models, subsets of input data, and different ML fairness metrics.

  • Georgia Institute of Technology's CS 6603: AI, Ethics, and Society.

  • UC Berkeley's Algorithmic Fairness & Opacity Lecture Series.
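To make the kind of group-fairness reporting these toolkits perform concrete, here is a minimal sketch in plain Python of two metrics AIF360 computes: statistical parity difference and disparate impact. The toy data and helper function are hypothetical, chosen only to illustrate the arithmetic; in practice you would use the toolkit's dataset and metric classes.

```python
# Hypothetical toy predictions: (protected_attribute, predicted_label) pairs.
# Group 0 is the unprivileged group, group 1 the privileged group; label 1
# is the favorable outcome (e.g. loan approved).
predictions = [
    (0, 1), (0, 0), (0, 0), (0, 1),   # unprivileged group: 2/4 favorable
    (1, 1), (1, 1), (1, 0), (1, 1),   # privileged group:   3/4 favorable
]

def favorable_rate(records, group):
    """Fraction of a group's members that received the favorable outcome."""
    outcomes = [label for g, label in records if g == group]
    return sum(outcomes) / len(outcomes)

unpriv_rate = favorable_rate(predictions, 0)   # 0.50
priv_rate = favorable_rate(predictions, 1)     # 0.75

# Statistical parity difference: unprivileged rate minus privileged rate.
# 0 indicates parity; negative values disadvantage the unprivileged group.
spd = unpriv_rate - priv_rate                  # -0.25

# Disparate impact: ratio of the two rates; the common "80% rule" flags
# values below 0.8 as potentially discriminatory.
di = unpriv_rate / priv_rate                   # ~0.667

print(f"statistical parity difference: {spd:.2f}")
print(f"disparate impact: {di:.3f}")
```

In AIF360 itself, the analogous values come from `BinaryLabelDatasetMetric.statistical_parity_difference()` and `.disparate_impact()` after wrapping your data in a `BinaryLabelDataset`.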
