x = torch.cat([att(x, adj) for att in self.attentions], dim=1)
Does this line compute the attention coefficients? I can't understand how it works =.=
Sorry, I'm new to this.
I think in this line, x is processed by multi-head attention.
Each att(x, adj) computes one attention head's output. These head outputs are concatenated and passed to another layer whose output dimension equals the number of label classes.
That final layer plays a role similar to the MLP in a traditional attention architecture.
You can search for "multi-head attention" for more information.
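To make the concatenation concrete, here is a minimal sketch of a single graph-attention head and the multi-head `torch.cat` pattern from the question. The class name `GraphAttentionHead` and all dimensions are hypothetical, chosen for illustration; the real model's layer will differ in detail, but the structure (each head produces its own attention coefficients and aggregated features, then the heads are concatenated along the feature dimension) is the same:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionHead(nn.Module):
    """One attention head: scores every edge, normalizes the scores with
    softmax (these are the attention coefficients), and returns the
    attention-weighted sum of neighbour features."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.W = nn.Linear(in_features, out_features, bias=False)
        # 'a' scores a concatenated pair [Wh_i || Wh_j] of node features
        self.a = nn.Linear(2 * out_features, 1, bias=False)

    def forward(self, x, adj):
        h = self.W(x)                                  # (N, out_features)
        N = h.size(0)
        # All pairwise concatenations [h_i || h_j] -> raw edge scores e_ij
        h_i = h.unsqueeze(1).expand(N, N, -1)
        h_j = h.unsqueeze(0).expand(N, N, -1)
        e = F.leaky_relu(self.a(torch.cat([h_i, h_j], dim=-1)).squeeze(-1))
        # Mask non-edges so softmax gives them zero weight
        e = e.masked_fill(adj == 0, float('-inf'))
        alpha = F.softmax(e, dim=1)   # attention coefficients, rows sum to 1
        return alpha @ h              # aggregate neighbours per node

# Multi-head: each head runs independently; outputs are concatenated,
# exactly the pattern asked about in the question.
heads = [GraphAttentionHead(8, 4) for _ in range(3)]
x = torch.randn(5, 8)      # 5 nodes, 8 input features (toy sizes)
adj = torch.ones(5, 5)     # fully connected toy graph
out = torch.cat([att(x, adj) for att in heads], dim=1)
print(out.shape)           # (5, 12) = 3 heads x 4 features each
```

So the concatenated output has one block of columns per head; the attention coefficients themselves live inside each head (`alpha` above) and are not what `torch.cat` returns.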