BUG: tensorflow DeepExplainer SHAP explanations do not sum up to the model's output #3612

Open
weiliyang opened this issue Apr 17, 2024 · 1 comment
Labels
bug Indicates an unexpected problem or unintended behaviour

Comments

@weiliyang

Issue Description

Calling DeepExplainer.shap_values on my model raises the following AssertionError:

The SHAP explanations do not sum up to the model's output! This is either because of a rounding error or because an operator in your computation graph was not fully supported. If the sum difference of 0.039343 is significant compared the scale of your model outputs please post as a github issue, with a reproducable example if possible so we can debug it.

Minimal Reproducible Example

import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

def poisiton_seq():
    # time_steps and input_dim are defined elsewhere in the full script
    inputs = Input(shape=(time_steps, input_dim))
    poisiton_seq = inputs[:, :, 0:2]   # position channels
    velocity_seq = inputs[:, :, 2:4]   # velocity channels

    lstm_units = 128

    # Query branch
    lstm1 = LSTM(lstm_units, return_sequences=True)(poisiton_seq)
    lstm2 = LSTM(lstm_units, return_sequences=True)(lstm1)
    dense1 = Dense(128, activation='linear')(lstm2)

    # Key branch
    lstm3 = LSTM(lstm_units, return_sequences=True)(velocity_seq)
    lstm4 = LSTM(lstm_units, return_sequences=True)(lstm3)
    dense2 = Dense(128, activation='linear')(lstm4)

    # Value branch
    lstm5 = LSTM(lstm_units, return_sequences=True)(dense1)
    dense3 = Dense(128, activation='linear')(lstm5)

    # Scaled dot-product attention
    scores = tf.matmul(dense1, dense2, transpose_b=True)
    dk = tf.cast(tf.shape(dense2)[-1], tf.float32)
    scores = scores / tf.math.sqrt(dk)
    attention_weights = tf.nn.softmax(scores, axis=-1)
    qkv = tf.matmul(attention_weights, dense3)

    lstm6 = LSTM(lstm_units, return_sequences=False)(qkv)
    dense4 = Dense(1, activation='linear')(lstm6)

    model = Model(inputs=inputs, outputs=dense4)  # this is TensorFlow 2.x
    return model


explainer_shap = shap.DeepExplainer(pretrained_model, batch_x)   # ok
shap_values = explainer_shap.shap_values(batch_x)                # raises the AssertionError
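
A minimal driver along these lines reproduces the call pattern; the shapes, the random data, and the one-epoch fit are illustrative assumptions standing in for my real dataset and pretrained weights, not the exact values from my script:

import numpy as np
import shap

# Illustrative values only
time_steps, input_dim = 20, 4
x_train0 = np.random.rand(100, time_steps, input_dim).astype('float32')
y_train0 = np.random.rand(100, 1).astype('float32')

pretrained_model = poisiton_seq()
pretrained_model.compile(optimizer='adam', loss='mse')
pretrained_model.fit(x_train0, y_train0, epochs=1, batch_size=16, verbose=0)

batch_x = x_train0[30:40]
explainer_shap = shap.DeepExplainer(pretrained_model, batch_x)
shap_values = explainer_shap.shap_values(batch_x)  # intermittently raises the additivity AssertionError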

Traceback

AssertionError                            Traceback (most recent call last)
<ipython-input-9-7e863d9e65f2> in <module>()
----> 1 shap_values = explainer.shap_values(x_train0[30:40])

e:\anaconda\lib\site-packages\shap\explainers\_deep\__init__.py in shap_values(self, X, ranked_outputs, output_rank_order, check_additivity)
    122             were chosen as "top".
    123         """
--> 124         return self.explainer.shap_values(X, ranked_outputs, output_rank_order, check_additivity=check_additivity)

e:\anaconda\lib\site-packages\shap\explainers\_deep\deep_tf.py in shap_values(self, X, ranked_outputs, output_rank_order, check_additivity)
    330                                                    "rounding error or because an operator in your computation graph was not fully supported. If " \
    331                                                    "the sum difference of %f is significant compared the scale of your model outputs please post " \
--> 332                                                    "as a github issue, with a reproducable example if possible so we can debug it." % np.abs(diffs).max()
    333 
    334         if not self.multi_output:

Expected Behavior

shap_values = explainer_shap.shap_values(batch_x) -> OK;
When I use shap to explain the neural network I built, the error "DeepExplainer SHAP explanations do not sum up to the model's output" sometimes appears. Whether it happens depends on the data and the training results; even the particular input batch_x affects whether it occurs. The error is not guaranteed to occur, but every occurrence means the explanation of the network is wrong, and I want to know how to avoid this situation.
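
As a stopgap while debugging, the additivity check can be switched off via the check_additivity argument that already appears in the traceback above; this only silences the assertion and does not make the returned attributions trustworthy:

# Not a fix: skips the consistency check, so the attributions may still be wrong.
shap_values = explainer_shap.shap_values(batch_x, check_additivity=False)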

Bug report checklist

  • I have checked that this issue has not already been reported.
  • I have confirmed this bug exists on the latest release of shap.
  • I have confirmed this bug exists on the master branch of shap.
  • I'd be interested in making a PR to fix this bug

Installed Versions

shap.__version__: 0.39.0
tf.__version__: 2.3.4
np.__version__: 1.16.0

@weiliyang weiliyang added the bug Indicates an unexpected problem or unintended behaviour label Apr 17, 2024
@weiliyang weiliyang changed the title BUG: tensorflpw DeepExplainer SHAP explanations do not sum up to the model's output BUG: tensorflow DeepExplainer SHAP explanations do not sum up to the model's output Apr 17, 2024
@CloseChoice
Collaborator

CloseChoice commented Apr 18, 2024

This is a longstanding problem with LSTMs and is well documented throughout our issue tracker. Currently there is no solution in sight for this.
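
A workaround sometimes applied in comparable situations, noted here only as a sketch and not as the maintainer's recommendation, is to fall back to shap.GradientExplainer, which computes expected-gradients attributions instead of rewriting the graph and therefore does not run this additivity check. Reusing pretrained_model and batch_x from the report:

import shap

# Expected-gradients explainer: slower and approximate, but it does not
# depend on per-op DeepLIFT support, so LSTM graphs are handled.
explainer_grad = shap.GradientExplainer(pretrained_model, batch_x)
shap_values_grad = explainer_grad.shap_values(batch_x)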
