
Visualizing Effects from Previous Layers #67

Closed
xBorja042 opened this issue Jul 5, 2021 · 4 comments

@xBorja042

Hello. I have already starred your package. It seems very useful and accurate. I have a question: is it possible to select which layer of the model we want to visualize these explainability effects for, instead of just using the last one? This would be helpful for understanding what each layer is learning, e.g. layers with a lower number of filters tend to learn larger features, and vice versa. Does this make sense to you?

Thanks a lot and kind regards.
Borja

@keisen
Owner

keisen commented Jul 5, 2021

Thank you for starring the project!

I have a question: is it possible to select which layer of the model we want to visualize these explainability effects for, instead of just using the last one?

I assume that you want to know how to visualize a layer other than the last convolutional layer with Gradcam, Gradcam++, or Scorecam. (If I've misunderstood, please point it out.)
To do so, you can use the penultimate_layer option of Gradcam#__call__(), shown below.

penultimate_layer=None,

If you specify the name or index of the layer you want to visualize, the CAM corresponding to that layer will be generated.
Please see the API documentation below for details.
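
For example, a minimal sketch of what that could look like (the VGG16 model and the layer name 'block3_conv3' here are only placeholders for illustration, not something from your code):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tf_keras_vis.gradcam import Gradcam
from tf_keras_vis.utils.scores import CategoricalScore

# Any Keras model works; VGG16 is used here only as a stand-in.
model = VGG16(weights='imagenet', include_top=True)

# Replace the final softmax with a linear activation, as is usually
# recommended for gradient-based visualizations.
def model_modifier(m):
    m.layers[-1].activation = tf.keras.activations.linear

gradcam = Gradcam(model, model_modifier=model_modifier, clone=True)

score = CategoricalScore(281)  # score function for one target class
X = preprocess_input(np.random.rand(1, 224, 224, 3) * 255)  # stand-in for a real image batch

# Instead of the default (the last convolutional layer), generate the CAM for
# an earlier layer by passing its name (or index) via penultimate_layer.
cam = gradcam(score, X, penultimate_layer='block3_conv3')
```

Any convolutional layer's name or index should work there.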

Thanks!

@xBorja042
Author

Hello @keisen!

Yes, you understood me perfectly, and I have checked that your solution works well. Is it possible to do this with Saliency?
If not, why is that?

Thanks a lot and kind regards,

Borja

@keisen
Owner

keisen commented Jul 6, 2021

Is it possible to do this in Saliency? If not, why is that so?

Although both methods can locate the region of an arbitrary object in the input image, they work in different ways.

To visualize Gradcam, we need the output values of, and the gradient with respect to, an intermediate convolutional layer.
On the other hand, to visualize a saliency map, we only need the gradient with respect to the model input.
That is, Saliency does NOT need any information about intermediate layers.
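
For intuition, here is a rough sketch in plain TensorFlow (not tf-keras-vis's actual Saliency implementation) of what a vanilla saliency map computes; model, images, and class_index are placeholders:

```python
import tensorflow as tf

def vanilla_saliency(model, images, class_index):
    """Gradient of the class score with respect to the *input* pixels."""
    images = tf.convert_to_tensor(images, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(images)
        scores = model(images)[:, class_index]
    # The gradient is taken w.r.t. the input, not w.r.t. any intermediate layer.
    grads = tape.gradient(scores, images)
    # Reduce over the color channels to get one saliency value per pixel.
    return tf.reduce_max(tf.abs(grads), axis=-1)
```

So there is no intermediate layer to choose from; the saliency map is always defined at the model input.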

Thanks!

@xBorja042
Author

Thanks a lot, @keisen!
I'll close this issue.
