
BUG: SHAP Partition explainer fails for a single token text input #3515

Open
shreya-sri3009 opened this issue Feb 22, 2024 · 5 comments
Labels: bug (Indicates an unexpected problem or unintended behaviour)

@shreya-sri3009

Issue Description

I'm trying to use the SHAP Partition explainer on text data with a custom tokenizer function. However, when the input text contains only a single token, for example "hello", it fails at the masker clustering step.

Minimal Reproducible Example

# Here `data` should contain a single word, e.g. data = pd.Series(["hello"])

import numpy as np
import scipy as sp
import torch

import shap

# this defines an explicit python function that takes a list of strings and outputs scores for each class
def f(x):
    tv = torch.tensor(
        [
            tokenizer.encode(v, padding="max_length", max_length=128, truncation=True)
            for v in x
        ]
    ).cuda()
    attention_mask = (tv != 0).type(torch.int64).cuda()
    outputs = model(tv, attention_mask=attention_mask)[0].detach().cpu().numpy()
    scores = (np.exp(outputs).T / np.exp(outputs).sum(-1)).T
    val = sp.special.logit(scores)
    return val

# tokenizer, model, labels, and custom_tokenizer are defined in the full snippet later in this thread
masker = shap.maskers.Text(custom_tokenizer)
explainer = shap.Explainer(f, masker, output_names=labels)
shap_values = explainer(data)

Traceback

File ".../pyenv/lib/python3.10/site-packages/shap/explainers/_partition.py", line 136, in __call__
    return super().__call__(
  File ".../pyenv/lib/python3.10/site-packages/shap/explainers/_explainer.py", line 266, in __call__
    row_result = self.explain_row(
  File ".../pyenv/lib/python3.10/site-packages/shap/explainers/_partition.py", line 169, in explain_row
    self._clustering = self.masker.clustering(*row_args)
  File ".../pyenv/lib/python3.10/site-packages/shap/maskers/_text.py", line 246, in clustering
    pt[:, 2] /= pt[:, 2].max()
  File ".../pyenv/lib/python3.10/site-packages/numpy/core/_methods.py", line 40, in _amax
    return umr_maximum(a, axis, None, out, keepdims, initial, where)
ValueError: zero-size array to reduction operation maximum which has no identity

I noticed that the value of `pt` is `[]`, an empty array.
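This is consistent with the partition tree using scipy's linkage-style layout, which has one merge row per pair, i.e. n - 1 rows for n tokens; a single-token input would then yield a zero-row array, and `.max()` over an empty array raises. A minimal standalone reproduction of just that numpy failure (an illustration only, assuming the linkage-style layout):

import numpy as np

# A linkage-style partition tree has n - 1 merge rows for n tokens,
# so a single-token input produces an empty (0, 4) array.
pt = np.zeros((0, 4))
pt[:, 2].max()  # ValueError: zero-size array to reduction operation maximum which has no identity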



### Expected Behavior

The explainer is expected to work even for single-token text inputs.
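A possible fix (a sketch only, not a confirmed patch; it assumes the guard belongs around the failing normalization in `shap/maskers/_text.py`) would be to skip the division when the partition tree is empty:

# Hypothetical guard around the failing line in Text.clustering
# (shap/maskers/_text.py, around line 246)
if len(pt) > 0:
    pt[:, 2] /= pt[:, 2].max()

Whether the Partition explainer then handles an empty clustering correctly downstream would still need to be checked.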

### Bug report checklist

- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest release](https://github.com/shap/shap/releases) of shap.
- [ ] I have confirmed this bug exists on the [master branch](https://github.com/shap/shap/blob/master/CONTRIBUTING.md#installing-from-the-master-branch) of shap.
- [ ] I'd be interested in making a PR to fix this bug

### Installed Versions

0.44.1
shreya-sri3009 added the bug label on Feb 22, 2024
@CloseChoice
Collaborator

Thanks for the report. Would you be so kind as to provide some sample data with which we can reproduce this?

@shreya-sri3009
Author

Hey, you can use this dataset: https://www.kaggle.com/datasets/uciml/sms-spam-collection-dataset

@CloseChoice
Collaborator

Thanks for pointing us to the dataset. I don't want to sound rude, but we have so many issues that we really need to choose what we work on, so it is best for us if we can reproduce a bug directly from the code provided in the bug description. If your time allows, it would be great if you could add loading and defining the dataset to your issue, so that we can reproduce it without looking up how to load data from Kaggle and how to define the `data` variable correctly.

@shreya-sri3009
Author

You can use this code snippet:

# !pip install datasets
# !pip install shap

import datasets
import numpy as np
import pandas as pd
import scipy as sp
import torch
import transformers

import shap

# load the emotion dataset
dataset = datasets.load_dataset("emotion", split="train")
data = pd.DataFrame({"text": dataset["text"], "emotion": dataset["label"]})

# load the model and tokenizer
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "nateraw/bert-base-uncased-emotion", use_fast=True
)
model = transformers.AutoModelForSequenceClassification.from_pretrained(
    "nateraw/bert-base-uncased-emotion"
).cuda()
labels = sorted(model.config.label2id, key=model.config.label2id.get)


# this defines an explicit python function that takes a list of strings and outputs scores for each class
def f(x):
    tv = torch.tensor(
        [
            tokenizer.encode(v, padding="max_length", max_length=128, truncation=True)
            for v in x
        ]
    ).cuda()
    attention_mask = (tv != 0).type(torch.int64).cuda()
    outputs = model(tv, attention_mask=attention_mask)[0].detach().cpu().numpy()
    scores = (np.exp(outputs).T / np.exp(outputs).sum(-1)).T
    val = sp.special.logit(scores)
    return val

method = "custom tokenizer"

# build an explainer by passing a transformers tokenizer
if method == "transformers tokenizer":
    explainer = shap.Explainer(f, tokenizer, output_names=labels)

# build an explainer by explicitly creating a masker
elif method == "default masker":
    masker = shap.maskers.Text(r"\W")  # this will create a basic whitespace tokenizer
    explainer = shap.Explainer(f, masker, output_names=labels)

# build a fully custom tokenizer
elif method == "custom tokenizer":
    import re

    def custom_tokenizer(s, return_offsets_mapping=True):
        """Custom tokenizers conform to a subset of the transformers API."""
        pos = 0
        offset_ranges = []
        input_ids = []
        for m in re.finditer(r"\W", s):
            start, end = m.span(0)
            offset_ranges.append((pos, start))
            input_ids.append(s[pos:start])
            pos = end
        if pos != len(s):
            offset_ranges.append((pos, len(s)))
            input_ids.append(s[pos:])
        out = {}
        out["input_ids"] = input_ids
        if return_offsets_mapping:
            out["offset_mapping"] = offset_ranges
        return out
    masker = shap.maskers.Text(custom_tokenizer)
    explainer = shap.Explainer(f, masker, output_names=labels)

test1 = pd.Series(["hi"])
test2 = pd.Series(["hi, how are you?"])
shap_values = explainer(test1)
shap_values = explainer(test2)
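As a quick check (not part of the original snippet), calling the custom tokenizer directly confirms that `test1` is the single-token case that triggers the clustering failure:

print(custom_tokenizer("hi"))
# {'input_ids': ['hi'], 'offset_mapping': [(0, 2)]}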

Error shown for `test1`:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-17-ffa091ba3e9d> in <cell line: 79>()
     77 test1 = pd.Series(["hi"])
     78 # test2 = pd.Series(["hi, how are you?"])
---> 79 shap_values = explainer(test1)
     80 # shap_values = explainer(test2)
     81 

4 frames
/usr/local/lib/python3.10/dist-packages/numpy/core/_methods.py in _amax(a, axis, out, keepdims, initial, where)
     39 def _amax(a, axis=None, out=None, keepdims=False,
     40           initial=_NoValue, where=True):
---> 41     return umr_maximum(a, axis, None, out, keepdims, initial, where)
     42 
     43 def _amin(a, axis=None, out=None, keepdims=False,

ValueError: zero-size array to reduction operation maximum which has no identity
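Until this is fixed, a user-side workaround (hypothetical, based only on the behaviour above) is to check the token count before calling the explainer:

# Hypothetical user-side guard: only single-token inputs hit the crash
text = "hi"
if len(custom_tokenizer(text)["input_ids"]) > 1:
    shap_values = explainer(pd.Series([text]))
else:
    print(f"skipping {text!r}: single-token inputs currently crash the Partition explainer")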

@shreya-sri3009
Author

Hey @CloseChoice,
Any update on this issue?
