Hi @mengliu1998 ,
Thank you for your interest in our work!
I liked your "Non-local GNNs" work, by the way.
(1) Yes. I don't have the exact numbers, but if I remember correctly, at r=5 GAT was close to 100% and the other GNNs were around 92%. Sorry I don't have the exact results, but you can easily reproduce them using the flag `--last_layer_fully_adjacent` (see the sketch after point (2) for what that layer does). The purpose of that section of the paper was to convince the reader that the problem actually exists before suggesting a solution.
(2) Yes, the reason is purely practical: because of the MLP in every layer, GIN took longer to train and required more GPU memory than the other GNNs. Since the general trend was already below 0.2 for r>6, we eventually gave up running it for r=7 and r=8. You can reproduce this as well; it just takes a long time to run.
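In case it helps other readers, here is a minimal sketch of what a fully-adjacent (FA) last layer means: every layer passes messages over the input graph except the last one, which uses a complete graph. This is not the repo's actual implementation; the mean-aggregation layer, function names, and shapes below are illustrative assumptions.

```python
import torch

def message_passing(x, adj, weight):
    # One mean-aggregation GNN layer: average neighbor features,
    # project with a weight matrix, apply ReLU.
    deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
    return torch.relu((adj @ x / deg) @ weight)

def gnn_with_fa_last_layer(x, adj, weights):
    # Run every layer on the real graph except the last, which uses a
    # fully-adjacent (complete) graph so any two nodes can exchange
    # information directly, regardless of their distance in the graph.
    n = x.size(0)
    full_adj = torch.ones(n, n)
    for w in weights[:-1]:
        x = message_passing(x, adj, w)
    return message_passing(x, full_adj, weights[-1])

# Toy usage on a random graph with 8 nodes and 16-dim features.
n, d = 8, 16
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.3).float()
weights = [torch.randn(d, d) * 0.1 for _ in range(5)]
print(gnn_with_fa_last_layer(x, adj, weights).shape)  # torch.Size([8, 16])
```

The only change relative to a standard GNN is the `full_adj` matrix in the final layer; all earlier layers still use the input graph's adjacency.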
I just noticed your reply. Sorry for the late response.
Thank you for your answers to these questions; they resolve my concerns completely.
Thank you for the kind words about our Non-local GNNs. Yes, I have read your recent GATv2 paper. The analysis of "static" versus "dynamic" attention is really insightful. Congratulations!
Hi Uri,
Thank you for this amazing and insightful work. I have two questions about the experiments and hope you can help.
(1) Did you run experiments for GNNs+FA on the NeighborsMatch dataset? If yes, could you share the results?
(2) I didn't find results for GIN on the NeighborsMatch dataset for r=7 and r=8. Is there a reason for this?
Best,
Meng