Weights in the Attention Layer? #1
The weight matrices W_a and W_b in the attention layer are not mentioned in the paper, but they are present in the code. I am not very familiar with attention layers, so I was wondering whether I missed something in the paper or whether it is common knowledge to apply such weights in a self-attention layer?
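For reference on the question itself: applying learned weight matrices inside an attention layer is standard practice. The following is a generic, textbook-style sketch in PyTorch (the class and parameter names are illustrative and are not taken from this repository); it shows how scaled dot-product self-attention projects every input with learned matrices before the attention weights are computed.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleSelfAttention(nn.Module):
    """Textbook self-attention: learned weight matrices (w_q, w_k, w_v)
    project the inputs before the attention weights are computed."""

    def __init__(self, dim: int):
        super().__init__()
        self.w_q = nn.Linear(dim, dim, bias=False)
        self.w_k = nn.Linear(dim, dim, bias=False)
        self.w_v = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, dim)
        q, k, v = self.w_q(x), self.w_k(x), self.w_v(x)
        scores = q @ k.T / math.sqrt(x.size(-1))   # pairwise similarities
        alpha = F.softmax(scores, dim=-1)          # attention weights
        return alpha @ v                           # weighted combination of values
```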
Comments
No comment?
I believe W_a and W_b are W_att and E_g, respectively, in the paper. See equations (4) and (5): https://jcheminf.biomedcentral.com/articles/10.1186/s13321-020-0414-z
No, in the code 'att_w' refers to W_att and 'att_hidden' refers to E_g. The two weight matrices 'W_a' and 'W_b' are not referenced in the paper, maybe because it is assumed that readers know that learned weights are used in attention, but I feel this should not be assumed in a cheminformatics journal.
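For readers unsure what the two matrices do, here is a rough sketch of the reading given in the comment above: one learned matrix ('att_w', i.e. W_att in the paper) scores the hidden states, and a second one ('att_hidden', i.e. E_g) transforms them before the weighted sum. The shapes, the softmax, and the ReLU are assumptions for illustration only, not the repository's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionReadout(nn.Module):
    """Sketch of a two-matrix attention step: att_w (W_att) produces the
    attention scores, att_hidden (E_g) produces the transformed states."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.att_w = nn.Linear(hidden_dim, 1)                 # scoring matrix (W_att)
        self.att_hidden = nn.Linear(hidden_dim, hidden_dim)   # transform matrix (E_g)

    def forward(self, node_states: torch.Tensor) -> torch.Tensor:
        # node_states: (num_nodes, hidden_dim)
        alpha = F.softmax(self.att_w(node_states), dim=0)     # attention weights over nodes
        values = F.relu(self.att_hidden(node_states))         # transformed hidden states
        return (alpha * values).sum(dim=0)                    # attention-weighted readout
```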
Sorry, I was thrown off by the poor naming convention in the paper. In the code, the […]. So this hasn't enlightened much: why have […]? Why must […]?
Why (1-3) are desirable, especially 3, I'm unsure. However, I'm fairly certain […]. Cheers.
I think […]. Here […]. It looks kind of like a skip connection. I think it's not intuitively understandable, though, why a skip connection is needed.
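If the line in question does combine the attention output with the original hidden state, the skip-connection reading would amount to something like the following (the function, variable names, and the plain addition are assumptions for illustration, not code from this repository):

```python
import torch


def attention_with_skip(node_states: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
    """Hypothetical skip connection: the attention context is added back onto
    the original states instead of replacing them, so the untransformed
    features (and their gradients) still pass through directly."""
    return node_states + context
```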
Sorry for the late reply; I am busy with my graduation. I have read all of your comments. There is no conflict between our paper and the source code: the paper is just a brief description of the code, so please refer to the source code when something seems inconsistent. The code here is optimized for our prediction tasks based on my experience and intuition. If you read the code carefully, you will note that we also use a skip connection in the message passing steps. Of course, you can change or choose a different attention algorithm, as there are several published variants.
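As a point of reference for what a skip connection in a message passing step typically looks like, here is a generic sketch under assumed shapes and a GRU-style update; it is not the exact update rule used in this repository.

```python
import torch
import torch.nn as nn


class ResidualMessagePassing(nn.Module):
    """Generic message passing step with a skip connection (sketch only)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.message = nn.Linear(hidden_dim, hidden_dim)
        self.update = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, node_states: torch.Tensor, adjacency: torch.Tensor) -> torch.Tensor:
        # adjacency: (num_nodes, num_nodes) binary matrix
        messages = adjacency @ self.message(node_states)   # aggregate neighbour messages
        updated = self.update(messages, node_states)        # GRU-style state update
        return node_states + updated                        # skip connection around the update
```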
Hi, thanks for the clarification. Good luck with your graduation.