After trying out PyTorch Captum for explaining my trained text classifier, as described here, I am looking into doing the same with SHAP, as in this branch.
I read this tutorial, but I am running into some issues.
This is the stack trace:
AttributeError Traceback (most recent call last)
<ipython-input-23-eda1d2a70059> in <module>
----> 1 shap_values = explainer(sentence)
~/.conda/envs/python38/lib/python3.8/site-packages/shap/explainers/_explainer.py in __call__(self, max_evals, main_effects, error_bounds, batch_size, outputs, silent, *args, **kwargs)
194 feature_names = [[] for _ in range(len(args))]
195 for row_args in show_progress(zip(*args), num_rows, self.__class__.__name__+" explainer", silent):
--> 196 row_result = self.explain_row(
197 *row_args, max_evals=max_evals, main_effects=main_effects, error_bounds=error_bounds,
198 batch_size=batch_size, silent=silent, **kwargs
~/.conda/envs/python38/lib/python3.8/site-packages/shap/explainers/_partition.py in explain_row(self, max_evals, main_effects, error_bounds, batch_size, silent, *row_args)
430
431 # build a masked version of the model for the current input sample
--> 432 fm = MaskedModel(self.model, self.masker, self.link, *row_args)
433
434 if max_evals == "auto":
~/.conda/envs/python38/lib/python3.8/site-packages/shap/utils/_masked_model.py in __init__(self, model, masker, link, *args)
23 # if the masker supports it, save what positions vary from the background
24 if callable(getattr(self.masker, "invariants", None)):
---> 25 self._variants = ~self.masker.invariants(*args)
26 self._variants_column_sums = self._variants.sum(0)
27 self._variants_row_inds = [
~/.conda/envs/python38/lib/python3.8/site-packages/shap/maskers/_text.py in invariants(self, s)
154 self._update_s_cache(s)
155
--> 156 invariants = np.zeros(len(self._tokenized_s), dtype=np.bool)
157 if self.keep_prefix > 0:
158 invariants[:self.keep_prefix] = True
AttributeError: 'Text' object has no attribute '_tokenized_s'
Any pointers on what is going wrong?
My first guess is that this is related to the transformers-based tokenizer that I use.
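For what it's worth, one thing I noticed while reading the `zip(*args)` loop in the trace: if the input were passed to the explainer as a bare string rather than a list of strings, each "row" handed to the masker would be a single character. This is just a guess from the source, not documented behavior, but the difference is easy to see with plain stdlib `zip`:

```python
# Sketch of why a bare string vs. a list of strings matters.
# Inside the explainer, rows are produced roughly like
# `for row_args in zip(*args)` (see the trace above).

sentence = "the movie was great"

# What the loop would see if called as explainer(sentence):
# args == (sentence,), so zip(*args) iterates characters.
rows_from_string = list(zip(sentence))
print(rows_from_string[:3])   # [('t',), ('h',), ('e',)]

# What it would see if called as explainer([sentence]):
# args == ([sentence],), so each row is the whole sentence.
rows_from_list = list(zip([sentence]))
print(rows_from_list)         # [('the movie was great',)]
```

So if the call was `explainer(sentence)`, trying `explainer([sentence])` might be worth a shot; I haven't confirmed that this is the actual cause of the `_tokenized_s` error.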