Problem Description
This issue is similar to #3502, but for PyTorch models.
I currently have a model (nn.Sequential) containing an nn.SELU activation function.
Currently, the PyTorch DeepExplainer does not support models with SELU activation functions.
A warning and an AssertionError are raised when trying to run explainer.shap_values():
~/miniconda3/envs/ml-modules/lib/python3.10/site-packages/shap/explainers/_deep/deep_pytorch.py:243: UserWarning: unrecognized nn.Module: SELU
warnings.warn(f'unrecognized nn.Module: {module_type}')
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
Cell In[5], line 1
----> 1 shp_values_100 = deep_explainer_100.shap_values(tens_100)

File ~/miniconda3/envs/ml-modules/lib/python3.10/site-packages/shap/explainers/_deep/__init__.py:135, in DeepExplainer.shap_values(self, X, ranked_outputs, output_rank_order, check_additivity)
     91 def shap_values(self, X, ranked_outputs=None, output_rank_order='max', check_additivity=True):
     92     """Return approximate SHAP values for the model applied to the data given by X.
     93
     94     Parameters
    (...)
    133
    134     """
--> 135     return self.explainer.shap_values(X, ranked_outputs, output_rank_order, check_additivity=check_additivity)

File ~/miniconda3/envs/ml-modules/lib/python3.10/site-packages/shap/explainers/_deep/deep_pytorch.py:214, in PyTorchDeep.shap_values(self, X, ranked_outputs, output_rank_order, check_additivity)
    211 with torch.no_grad():
    212     model_output_values = self.model(*X)
--> 214 _check_additivity(self, model_output_values.cpu(), output_phis)
    216 if isinstance(output_phis, list):
    217     # in this case we have multiple inputs and potentially multiple outputs
    218     if isinstance(output_phis[0], list):

File ~/miniconda3/envs/ml-modules/lib/python3.10/site-packages/shap/explainers/_deep/deep_utils.py:20, in _check_additivity(explainer, model_output_values, output_phis)
     16     diffs -= output_phis[t][i].sum(axis=tuple(range(1, output_phis[t][i].ndim)))
     18 maxdiff = np.abs(diffs).max()
---> 20 assert maxdiff < TOLERANCE, "The SHAP explanations do not sum up to the model's output! This is either because of a " \
     21     "rounding error or because an operator in your computation graph was not fully supported. If " \
     22     "the sum difference of %f is significant compared to the scale of your model outputs, please post " \
     23     f"as a github issue, with a reproducible example so we can debug it. Used framework: {explainer.framework} - Max. diff: {maxdiff} - Tolerance: {TOLERANCE}"

AssertionError: The SHAP explanations do not sum up to the model's output! This is either because of a rounding error or because an operator in your computation graph was not fully supported. If the sum difference of %f is significant compared to the scale of your model outputs, please post as a github issue, with a reproducible example so we can debug it. Used framework: pytorch - Max. diff: 8.348439283898188 - Tolerance: 0.01
Alternative Solutions
I managed to work around this by adding op_handler['SELU'] = nonlinear_1d into shap/explainers/_deep/deep_pytorch.py:386:
op_handler['LeakyReLU'] = nonlinear_1d
op_handler['ReLU'] = nonlinear_1d
op_handler['ELU'] = nonlinear_1d
op_handler['Sigmoid'] = nonlinear_1d
op_handler["Tanh"] = nonlinear_1d
op_handler["Softplus"] = nonlinear_1d
op_handler['Softmax'] = nonlinear_1d
op_handler['SELU'] = nonlinear_1d  # New SELU op_handler
After doing this, everything works fine for my code, but I don't know whether it breaks anything or whether it is the correct way of fixing it. There are also other non-linear activation functions in newer versions of PyTorch, listed at https://pytorch.org/docs/stable/nn.html#non-linear-activations-weighted-sum-nonlinearity, and some of them aren't supported or listed in the op_handler code above.
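For anyone who prefers not to edit the installed package, a similar workaround may be possible from user code, since op_handler and nonlinear_1d appear to be module-level names in deep_pytorch.py. This is only a sketch under that assumption, not a confirmed shap API:

# Hypothetical workaround sketch: register a SELU handler without patching
# the shap source tree. Assumes op_handler and nonlinear_1d are importable
# module-level names in the installed shap version.
from shap.explainers._deep.deep_pytorch import op_handler, nonlinear_1d

# Treat SELU like the other elementwise nonlinearities (ReLU, ELU, Tanh, ...)
op_handler['SELU'] = nonlinear_1d

Run this once before constructing the DeepExplainer so the handler is in place when the model graph is traversed.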
Additional Context
Relevant and Similar Issues:
#3504
#3502
Example Code:
Model Arch:
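The original example code and model architecture were not captured in this report. A hypothetical minimal reproducer consistent with the description above (an nn.Sequential containing nn.SELU passed to shap.DeepExplainer) might look like this; the layer sizes, data, and variable names are assumptions:

# Hypothetical minimal reproducer; only the use of nn.SELU inside an
# nn.Sequential passed to shap.DeepExplainer reflects the report above.
import torch
import torch.nn as nn
import shap

model = nn.Sequential(
    nn.Linear(10, 32),
    nn.SELU(),          # the activation that triggers "unrecognized nn.Module: SELU"
    nn.Linear(32, 1),
)

background = torch.randn(100, 10)   # background data for DeepExplainer
test_inputs = torch.randn(5, 10)

explainer = shap.DeepExplainer(model, background)
# Emits the UserWarning and then fails the additivity check described above.
shap_values = explainer.shap_values(test_inputs)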
Feature request checklist