Visualizing Effects from Previous Layers #67
Thank you for starring the project!
I assume that you want to know how to visualize a layer other than the last convolutional layer with Gradcam, Gradcam++, or Scorecam. (If I've misunderstood, please point it out.)
tf-keras-vis/tf_keras_vis/gradcam.py, line 29 at c493e4c
If you specify the name or index of the layer you want to visualize, the CAM corresponding to that layer will be generated. Thanks!
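For reference, the Grad-CAM computation for an arbitrarily chosen layer can be sketched in plain NumPy. This is a minimal sketch of the technique, not tf-keras-vis's implementation; the `activations` and `gradients` arrays are hypothetical stand-ins for the chosen layer's real output and gradient tensors:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap for one image.

    activations: (H, W, C) feature maps of the chosen conv layer.
    gradients:   (H, W, C) gradient of the target score w.r.t. those maps.
    """
    # Channel weights: global average of the gradients over spatial positions.
    weights = gradients.mean(axis=(0, 1))                        # shape (C,)
    # Weighted sum of the feature maps, then ReLU to keep positive evidence.
    cam = np.maximum((activations * weights).sum(axis=-1), 0.0)  # shape (H, W)
    # Normalize to [0, 1] for display (guard against an all-zero map).
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy tensors standing in for a real layer's activations and gradients.
rng = np.random.default_rng(0)
acts = rng.random((7, 7, 64))
grads = rng.normal(size=(7, 7, 64))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (7, 7)
```

Whichever layer you point the tool at, the heatmap has that layer's spatial resolution, so visualizing earlier (larger) layers gives finer-grained maps.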
Hello @keisen! Yes, you understood me perfectly, and I have checked that your solution works well. Is it possible to do this with Saliency? Thanks a lot and kind regards, Borja
Although both methods can locate the region of an arbitrary object in the input image, they work in different ways. To produce a visualization, Gradcam needs the output values of, and the gradient with respect to, the chosen layer; Saliency, on the other hand, only needs the gradient with respect to the model input, so there is no intermediate layer to select. Thanks!
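To illustrate the contrast, vanilla saliency can be sketched in a few lines of NumPy. This is a sketch under the assumption that the input gradient has already been computed (e.g. by `tf.GradientTape` in a real model); `input_grad` here is a hypothetical stand-in:

```python
import numpy as np

def saliency_map(input_grad):
    """Vanilla saliency: absolute gradient of the target score w.r.t. the
    input image, reduced over the channel axis.

    input_grad: (H, W, C) gradient of the score w.r.t. the input pixels.
    """
    smap = np.abs(input_grad).max(axis=-1)  # shape (H, W)
    # Normalize to [0, 1] for display (guard against an all-zero map).
    if smap.max() > 0:
        smap /= smap.max()
    return smap

# Toy gradient standing in for a real d(score)/d(input) tensor.
rng = np.random.default_rng(0)
grad = rng.normal(size=(32, 32, 3))
smap = saliency_map(grad)
print(smap.shape)  # (32, 32)
```

Because the gradient is taken with respect to the input rather than a layer's output, the map always has the input image's resolution, which is why Saliency has no layer-selection knob the way Gradcam does.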
Thanks a lot, @keisen!
Hello. I have already starred your package; it seems very useful and accurate. I have a question: is it possible to select which layer of the model we want to visualize these explainability effects for, instead of just using the last one? This would be helpful for understanding what each layer is learning, e.g. layers with a lower number of filters tend to learn bigger features and vice versa. Does this make sense to you?
Thanks a lot and kind regards.
Borja