Explaining the output of machine learning models with more accurately estimated Shapley values
Updated Jun 7, 2024 · R
Fast approximate Shapley values in R
Break Down with interactions for local explanations (SHAP, BreakDown, iBreakDown)
An R package for computing asymmetric Shapley values to assess causality in any trained machine learning model
Examines fairness metrics for models, distinguishing gender stereotyping from group differences driven by legitimate predictors; also explores mitigation of feature bias.
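All of the packages above estimate Shapley values, which are typically approximated by sampling because the exact sum over feature coalitions is exponential. As a rough illustration of the idea (not the API of any package listed here), the following is a minimal sketch of permutation-sampling Monte Carlo Shapley estimation; `value_fn` and `shapley_mc` are hypothetical names introduced for this example.

```python
import random

def shapley_mc(value_fn, n_features, n_samples=2000, seed=0):
    """Monte Carlo estimate of Shapley values via random permutations.

    value_fn(subset) returns the coalition value for a frozenset of
    feature indices; n_features is the total number of players.
    """
    rng = random.Random(seed)
    phi = [0.0] * n_features
    for _ in range(n_samples):
        order = list(range(n_features))
        rng.shuffle(order)          # random order of feature arrival
        coalition = set()
        prev = value_fn(frozenset(coalition))
        for j in order:
            coalition.add(j)
            cur = value_fn(frozenset(coalition))
            phi[j] += cur - prev    # marginal contribution of feature j
            prev = cur
    return [p / n_samples for p in phi]

# Toy additive game: feature i contributes weight w[i], so the exact
# Shapley values are the weights themselves.
w = [1.0, 2.0, 3.0]
vals = shapley_mc(lambda s: sum(w[i] for i in s), 3)
```

For this additive game every permutation yields the same marginal contributions, so the estimate recovers the weights exactly; for real models, `value_fn` would evaluate the model on feature subsets (e.g. by marginalizing absent features) and more samples tighten the estimate.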