BUG: tensorflow DeepExplainer SHAP explanations do not sum up to the model's output #3612
Status: Open
Labels: bug
Issue Description
When computing SHAP values for the model, the following error is raised (quoted verbatim):

"The SHAP explanations do not sum up to the model's output! This is either because of a rounding error or because an operator in your computation graph was not fully supported. If the sum difference of 0.039343 is significant compared the scale of your model outputs please post as a github issue, with a reproducable example if possible so we can debug it."
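For reference, the check behind this error verifies that the explainer's expected (base) value plus the per-feature attributions reconstructs the model's output for each sample. A minimal numpy sketch of that check, using made-up stand-in numbers rather than values from the reported model:

```python
import numpy as np

# Stand-in values, chosen only to illustrate the additivity check;
# they are not taken from the model in this report.
base_value = 0.2                           # plays the role of explainer.expected_value
shap_vals = np.array([0.10, -0.05, 0.15])  # per-feature attributions for one sample
model_output = 0.439343                    # model prediction for the same sample

# SHAP asserts that this difference stays below a small tolerance.
reconstructed = base_value + shap_vals.sum()
diff = abs(reconstructed - model_output)
print(round(diff, 6))  # → 0.039343, the kind of gap reported in the error text
```

When the gap comes from an unsupported operator rather than rounding, it typically scales with the inputs, which matches the report below that the error depends on the data and on `batch_x`.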
Minimal Reproducible Example
Traceback
Expected Behavior
shap_values = explainer_shap.shap_values(batch_x) -> OK, with no additivity error raised;
When I use shap to explain the neural network I built, the error "DeepExplainer SHAP explanations do not sum up to the model's output" is sometimes raised. Whether it occurs depends on the data and the training results; even the particular input batch_x affects whether it happens. The error does not occur on every run, but each occurrence means the explanations are inconsistent with the network's output. I want to know how to avoid this situation.
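One way to keep the call from aborting while investigating is the `check_additivity` keyword that `DeepExplainer.shap_values` accepts in recent shap releases. A hedged sketch, where `explainer_shap` and `batch_x` are the objects from this report:

```python
def explain_batch(explainer_shap, batch_x):
    # check_additivity=False skips the internal sum-to-output assertion so the
    # attribution arrays can still be inspected; it does not fix the mismatch,
    # it only suppresses the check.
    return explainer_shap.shap_values(batch_x, check_additivity=False)
```

The returned attributions can then be compared manually against the model outputs to see whether the gap is a small rounding artifact or a consistent offset pointing at an unsupported operator in the graph.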
Bug report checklist
Installed Versions
shap.__version__: 0.39.0
tf.__version__: 2.3.4
np.__version__: 1.16.0