attn_grad #3
Dear Hila,
Thank you for your work, I really like it.
In the CLIP notebook,
'''
image_attn_blocks = list(dict(model.visual.transformer.resblocks.named_children()).values())
'''
then
'''
grad = blk.attn_grad
cam = blk.attn_probs
'''
If I understand correctly, each blk is a CLIP ResidualAttentionBlock. But there are no attn_grad or attn_probs attributes in the ResidualAttentionBlock class; are they inherited from nn.Module? I tried to google it, but I cannot find any related resource.
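(For context, here is a minimal, hypothetical sketch of how such attributes can appear on a module at runtime: the forward pass stores the attention probabilities as a plain attribute and registers a backward hook on that tensor so its gradient is stored too. The class and attribute names below mirror the notebook but this is not the repository's actual code.)

```python
import torch
import torch.nn as nn

class AttentionWithHooks(nn.Module):
    """Toy self-attention that exposes `attn_probs` and `attn_grad`.

    Hypothetical sketch: a real block can be patched the same way so that
    these become plain attributes readable after forward/backward.
    """

    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.attn_probs = None  # filled in during forward
        self.attn_grad = None   # filled in during backward, via the hook

    def save_attn_grad(self, grad):
        self.attn_grad = grad

    def forward(self, x):
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)).softmax(dim=-1)
        self.attn_probs = attn  # keep the forward activation
        if attn.requires_grad:
            # the hook fires during backward with d(loss)/d(attn)
            attn.register_hook(self.save_attn_grad)
        return attn @ v

blk = AttentionWithHooks(dim=8)
x = torch.randn(2, 5, 8, requires_grad=True)
out = blk(x)
out.sum().backward()
print(blk.attn_probs.shape, blk.attn_grad.shape)  # both torch.Size([2, 5, 5])
```

So the attributes are not defined in the class source or inherited from nn.Module; they are attached dynamically by the patched forward pass.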
Similarly, in the ViT notebook, there are
'''
grad = blk.attn.get_attn_gradients()
cam = blk.attn.get_attention_map()
'''
These functions are from here, but I still have trouble understanding how you obtain the gradients and the attention map. Sorry, I am new to PyTorch.
Could you help me understand your implementation?
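(A minimal, hypothetical sketch of the getter pattern those calls suggest: the attention module saves its map in forward, registers a backward hook for the gradient, and exposes both through getters. Names follow the notebook's calls; the combination step at the end is only one common way to mix gradients and attention, labeled as an assumption.)

```python
import torch
import torch.nn as nn

class Attention(nn.Module):
    # Hypothetical minimal attention module exposing
    # get_attn_gradients() / get_attention_map(), as in the notebook's calls.
    def __init__(self):
        super().__init__()
        self.attention_map = None
        self.attn_gradients = None

    def save_attn_gradients(self, grad):
        self.attn_gradients = grad

    def get_attn_gradients(self):
        return self.attn_gradients

    def get_attention_map(self):
        return self.attention_map

    def forward(self, attn):
        self.attention_map = attn                    # save forward activation
        attn.register_hook(self.save_attn_gradients)  # save gradient on backward
        return attn

attn_module = Attention()
scores = torch.randn(1, 2, 4, 4, requires_grad=True)  # (batch, heads, tokens, tokens)
probs = attn_module(scores.softmax(dim=-1))
probs.sum().backward()

grad = attn_module.get_attn_gradients()
cam = attn_module.get_attention_map()
# One common way to combine them (assumption, not necessarily the repo's exact code):
relevance = (grad * cam).clamp(min=0).mean(dim=1)  # average over heads
```

The key point is again that the tensors are captured by hooks during forward/backward, then read back afterwards through the getters.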
Thank you for your help.
Best Wishes,
Alex