AssertionError: shap interaction values of binary classifiers must add up to 0.0 for each observation and feature pair #12

Closed
mgelsm opened this issue Aug 14, 2020 · 1 comment · Fixed by #24
Labels: bug (Something isn't working)

Comments

@mgelsm (Contributor) commented Aug 14, 2020

Describe the bug
When inspecting a binary classifier, the raw_shap_tensor of class 0 is not the exact negative of the raw_shap_tensor of class 1; the absolute difference can reach up to 10^-2.

The bug arises in the function raw_shap_to_df.
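
For context, here is a minimal sketch of the antisymmetry property that the assertion checks, using the shap library directly on a stand-in model (the model and data below are hypothetical, not the ones from the NHO notebook):

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in binary classifier; the actual model comes from the NHO notebook
X, y = make_classification(n_samples=200, n_features=5, random_state=42)
model = RandomForestClassifier(random_state=42).fit(X, y)

explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)

# Depending on the shap version, shap_values returns either a list of
# per-class arrays or a single 3-D array (observations x features x classes)
if isinstance(sv, list):
    shap_class_0, shap_class_1 = sv
else:
    shap_class_0, shap_class_1 = sv[..., 0], sv[..., 1]

# In theory the two arrays are exact negatives of each other; in practice
# the sums deviate from 0.0 by up to ~1e-2, which trips the assertion
totals = shap_class_0 + shap_class_1
print("max absolute deviation from 0:", np.abs(totals).max())
```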

To Reproduce
Steps to reproduce the behavior:

  1. Go to the NHO facet modelisation project and run the 4-Facet-modeling-NewAPI notebook
  2. Try fitting the LearnerInspector instance
  3. See the error shown in the screenshot below

Expected behavior
The raw SHAP tensors of the two classes should be exact negatives of each other, so fitting the LearnerInspector instance should succeed without raising an AssertionError.

Screenshots
(Screenshot of the AssertionError traceback, 2020-08-14 at 16:07)

Desktop (please complete the following information):

  • OS: iOS
  • Browser: Brave
@mgelsm mgelsm added the bug Something isn't working label Aug 14, 2020
@j-ittner j-ittner self-assigned this Aug 22, 2020
@j-ittner (Member) commented Aug 22, 2020

Thanks for spotting this!

By definition, both SHAP tensors obtained for a binary classifier should add up to 0.0 for each observation and feature.

Your example provides evidence that these totals may deviate by as much as 0.01, due to imprecision in the SHAP explainer's approach to estimating SHAP values.

This is now addressed by PR #24.

The fix is not to raise an exception if the totals are not 0.0, but to log a warning instead, stating the range of observed totals. As long as the deviations are small (e.g., less than 0.05, corresponding to 5 percentage points of probability), it should be safe to ignore these warnings.
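
For illustration, a sketch of the warning-based check described above (the function name and message wording are hypothetical, not the actual code of PR #24):

```python
import logging

import numpy as np

log = logging.getLogger(__name__)


def warn_on_nonzero_shap_totals(
    shap_class_0: np.ndarray, shap_class_1: np.ndarray
) -> None:
    # The per-observation, per-feature totals of the two SHAP tensors
    # should be 0.0; small deviations stem from the explainer's estimation
    totals = shap_class_0 + shap_class_1
    if np.any(totals != 0.0):
        # Log a warning stating the range of observed totals, instead of
        # raising an AssertionError as before
        log.warning(
            "SHAP values of binary classifier do not add up to 0.0 for "
            f"each observation and feature; observed range: "
            f"[{totals.min():.3g}, {totals.max():.3g}]"
        )
```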
