
[BUG] - Getting started notebook fails as plot_attributions can't handle 3-dim explanations #116

Closed · 1 of 5 tasks

semihcanturk opened this issue Oct 31, 2022 · 4 comments

semihcanturk commented Oct 31, 2022

Select the modules to which the bug refers:

  • [x] Attributions Methods
  • [ ] Feature Visualization
  • [ ] Concepts
  • [ ] Metrics
  • [ ] Documentation

Describe the bug
Some explanations are returned as 4-dimensional arrays (a batch dimension plus HWC images), which breaks the plot_attributions method in xplique/plots/image.py, since it only handles 2D images with no channel dimension. The following methods are affected; they return explanations of shape (6, 224, 224, 3) instead of (6, 224, 224):
GradientInput, GuidedBackprop, IntegratedGradients, SmoothGrad, SquareGrad, VarGrad
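
In the meantime, a user-side workaround is to collapse the channel axis before plotting. A minimal sketch, assuming the explanations convert cleanly to a NumPy array and that X is the batch of input images from the notebook:

```python
import numpy as np
from xplique.plots import plot_attributions

# the methods above return explanations of shape (N, H, W, C);
# plot_attributions expects (N, H, W), so collapse the channel axis first
explanations = np.array(explanations)
if explanations.ndim == 4:
    explanations = explanations.mean(axis=-1)

plot_attributions(explanations, X, img_size=2., cols=3)
```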

Screenshots
Stack trace:

/usr/local/lib/python3.7/dist-packages/xplique/plots/image.py in plot_attributions(explanations, images, cmap, alpha, clip_percentile, absolute_value, cols, img_size, **plot_kwargs)
    154     rows = ceil(len(explanations) / cols)
    155     # get width and height of our images
--> 156     l_width, l_height = explanations.shape[1:]
    157 
    158     # define the figure margin, width, height in inch

ValueError: too many values to unpack (expected 2)

Desktop (please complete the following information):
Using the default Google Colab notebook.

To Reproduce
Run all cells in Getting_started.ipynb.

Expected behavior
plot_attributions should be able to handle images with multiple channels to produce the visualizations.
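
One possible direction for the fix, just a sketch of the idea and not necessarily what the maintainers will do, is to reduce multi-channel explanations before the unpack at line 156 shown in the trace above (this assumes numpy is available as np inside xplique/plots/image.py):

```python
# sketch only: inside plot_attributions, before the failing unpack
if len(explanations.shape) == 4:
    # (N, H, W, C) -> (N, H, W): collapse the channel axis
    explanations = np.mean(explanations, axis=-1)

rows = ceil(len(explanations) / cols)
# get width and height of our images
l_width, l_height = explanations.shape[1:]
```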

Additional context
None

@9527-ly

9527-ly commented Nov 1, 2022

I had the same problem.

@9527-ly

9527-ly commented Nov 1, 2022

I found a strange phenomenon. With the same model architecture, the same training samples, and the same test samples, with every other operation identical, the values obtained by applying an XAI method (like Saliency) to evaluate the model's interpretability should theoretically be the same. However, when I retrain a new model, the attribution values are completely different from those of the previous model. Does anyone know why this happens? The attributions are completely unstable and the results cannot be reproduced, unless I save the model after training and reload its parameters, in which case the results are the same.

I have tested two types of prediction tasks, regression and classification, as well as 1D-CNN, LSTM, 2D-CNN, and other models, and found the same problem in all of them. For example, I used a for loop to train 10 models under identical conditions and then ran the following for each model: explainer = XAI(model); explanations = explainer(X_test, y_test). The resulting explanations differ from model to model.
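
If run-to-run reproducibility is what you need, pinning all the seeds before each training should make the retrained weights, and therefore the explanations, match. A minimal sketch assuming TensorFlow/Keras, where build_model is a hypothetical constructor for your network:

```python
import os
import random
import numpy as np
import tensorflow as tf

def set_seeds(seed=0):
    # pin every source of randomness that affects weight init and training
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    tf.random.set_seed(seed)

set_seeds(0)
model = build_model()  # hypothetical: however you construct your network
model.fit(X_train, y_train, epochs=10)
# on GPU, tf.config.experimental.enable_op_determinism() (TF >= 2.8)
# may also be needed for fully deterministic training
```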

@9527-ly

9527-ly commented Nov 1, 2022

My current understanding is that even though models trained on the same samples under the same conditions reach almost the same prediction accuracy, differences in their weight parameters (e.g., in a neural network) lead to this result. I don't know if my understanding is correct.

@fel-thomas
Member

fel-thomas commented Nov 2, 2022

Hi @semihcanturk and @9527-ly ! :)
Thank you very much for opening this issue. I have proposed a PR (#117), and the new version of Xplique should be available within the day (France time). Don't hesitate to reach out if you have other problems; I'm still available (here is my email in case it's urgent: thomas_fel@brown.edu).
