
code question #70

Open
linbeijianbaoxia opened this issue Dec 14, 2021 · 1 comment

Comments

@linbeijianbaoxia

x = torch.cat([att(x, adj) for att in self.attentions], dim=1)
Does this line compute the attention coefficients? I can't understand how it works. =.=
Sorry, I'm new to this.

@defensetongxue

I think in this line of code, x is processed by multi-head attention.
Each att(x, adj) computes one attention head. These heads are concatenated and fed to another layer whose output dimension equals the number of label classes.
In this case, that final layer plays a role similar to the MLP layer in the traditional attention algorithm.
You can search for "multi-head attention" for more information.
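
For intuition, here is a minimal, self-contained sketch of what each head and the concatenation do. The GraphAttentionLayer below is a simplified stand-in written for illustration (its names and hyperparameters are assumptions, not the repository's exact code). The attention coefficients are the alpha tensor inside each head; the questioned line only concatenates each head's aggregated output along the feature dimension:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GraphAttentionLayer(nn.Module):
        # One attention head: computes attention coefficients over the
        # edges given by adj, then aggregates neighbor features.
        def __init__(self, in_features, out_features):
            super().__init__()
            self.W = nn.Linear(in_features, out_features, bias=False)
            self.a = nn.Linear(2 * out_features, 1, bias=False)

        def forward(self, x, adj):
            h = self.W(x)                          # (N, F')
            N = h.size(0)
            # Pairwise logits e_ij = a([Wh_i || Wh_j])
            h_i = h.unsqueeze(1).expand(N, N, -1)  # (N, N, F')
            h_j = h.unsqueeze(0).expand(N, N, -1)  # (N, N, F')
            e = F.leaky_relu(self.a(torch.cat([h_i, h_j], dim=-1)).squeeze(-1))
            # Mask non-edges so softmax runs only over neighbors
            e = e.masked_fill(adj == 0, float('-inf'))
            alpha = torch.softmax(e, dim=1)        # attention coefficients
            return alpha @ h                       # (N, F') aggregated output

    # The questioned line: run several independent heads and concatenate
    # their outputs along the feature dimension (dim=1).
    N, F_in, F_hid, n_heads = 5, 16, 8, 4         # toy sizes, chosen for the demo
    x = torch.randn(N, F_in)
    adj = (torch.rand(N, N) > 0.5).float()
    adj.fill_diagonal_(1)                          # self-loops avoid empty softmax rows
    attentions = nn.ModuleList(GraphAttentionLayer(F_in, F_hid) for _ in range(n_heads))
    x = torch.cat([att(x, adj) for att in attentions], dim=1)
    print(x.shape)                                 # torch.Size([5, 32]) == (N, n_heads * F_hid)

So the concatenated x has n_heads * F_hid features per node, which a final layer then maps to the number of classes.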
