[BUG] - Getting started notebook fails as plot_attributions can't handle 3-dim explanations
#116
Comments
I had the same problem.
I found a strange phenomenon. For the same model architecture, the same training samples, and the same test samples, with every other step identical, the values produced by an XAI method (like Saliency) to explain the model should in theory be the same. However, when I retrain a new model, the attribution values are completely different from those of the previous model. Does anyone know why this happens? The attribution values are completely unstable and the results cannot be reproduced, unless I save the model after training and then reload its parameters, in which case the results stay the same. I have tested two types of prediction tasks, regression and classification, as well as 1D-CNN, LSTM, 2D-CNN and other models, and found the same problem everywhere. For example, I used a for loop to train 10 models under the same conditions and then ran `explainer = XAI(model)` and `explanations = explainer(X_test, y_test)` for each of the 10 models. The final results show that the explanations differ for each model (a sketch of this loop is shown below).
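A minimal sketch of that loop, assuming Xplique's `Saliency` as the XAI method; `build_model`, `X_train`, `y_train`, `X_test`, and `y_test` are placeholders standing in for the original setup:

```python
import numpy as np
import tensorflow as tf
from xplique.attributions import Saliency

runs = []
for run in range(10):
    # Each run trains a fresh model under identical conditions.
    # Without fixing the random seed, every run converges to different weights.
    model = build_model()                      # placeholder for the actual architecture
    model.fit(X_train, y_train, epochs=10, verbose=0)

    explainer = Saliency(model)
    explanations = explainer(X_test, y_test)   # attribution maps for the test set
    runs.append(np.array(explanations))

# The attributions usually differ from run to run even when test accuracy is
# almost identical, because each run ends up with a different set of weights.
print(np.std(np.stack(runs), axis=0).mean())
```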
My current understanding is that even though models trained on the same samples and under the same conditions reach almost the same prediction accuracy, the differences in their weight parameters (e.g. the neural network weights) lead to this result. I don't know if my understanding is correct.
Hi @semihcanturk and @9527-ly ! :)
Select the modules to which the bug refers:
Describe the bug
Some returned explanations are 4-dimensional (batch dim + HWC images), which does not work with the `plot_attributions` method in `xplique/plots/image.py`, as that method is suited to handle only 2D images with no channel dimension. The bug affects the following methods, which seem to return 4-dimensional explanations of shape `(6, 224, 224, 3)` as opposed to `(6, 224, 224)`: GradientInput, GuidedBackprop, IntegratedGradients, SmoothGrad, SquareGrad, VarGrad.
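For illustration, a minimal snippet of the shape mismatch; `model`, `X`, and `y` are placeholders for the objects built in the Getting Started notebook:

```python
from xplique.attributions import GradientInput
from xplique.plots import plot_attributions

explainer = GradientInput(model)            # same applies to GuidedBackprop, SmoothGrad, ...
explanations = explainer(X, y)

print(explanations.shape)                   # (6, 224, 224, 3) instead of the expected (6, 224, 224)
plot_attributions(explanations, X, cols=3)  # fails: the channel dimension is not handled
```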
Screenshots
Stack trace:
Desktop (please complete the following information):
Using the default Google Colab notebook.
To Reproduce
Simply run all cells in `Getting_started.ipynb`.
Expected behavior
`plot_attributions` should be able to handle images with multiple channels to produce the visualizations (see the sketch below for one possible way to collapse the channel dimension).

Additional context
None
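A minimal sketch of the kind of channel handling `plot_attributions` could apply before plotting; the helper below is a hypothetical illustration, not Xplique's actual code:

```python
import numpy as np

def to_2d_heatmap(explanation):
    """Hypothetical helper: collapse an (H, W, C) attribution map to (H, W)
    by averaging the absolute values over the channel axis."""
    explanation = np.array(explanation)
    if explanation.ndim == 3:                            # (H, W, C) -> (H, W)
        explanation = np.mean(np.abs(explanation), axis=-1)
    return explanation
```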