
Conversation

@vijaydwivedi75 (Member) commented Jan 22, 2021

This PR fixes the computation of positional encodings (PEs) for the full-graph attention experiments reported in the main paper (Table 1, column 'Full Graph').

Due to the bug, the PEs for the full-graph experiments were computed on the fully connected (fully adjacent) graphs rather than on the original sparse graphs.
With the correction, the PEs are always computed on the original sparse graphs, as intended: the PEs should capture the original graph structure (and hence node positions) and inject that information into the node features. A minimal sketch of this idea is shown below.
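The sketch below is not the repository's exact code; it only illustrates the point of the fix, assuming Laplacian eigenvector PEs (the function name laplacian_pe and the argument pos_enc_dim are hypothetical). The key detail is that the sparse adjacency of the original graph is passed in, not the all-ones adjacency of the fully connected graph used for attention.

```python
# Minimal sketch: Laplacian eigenvector positional encodings computed on the
# ORIGINAL sparse graph. Names here (laplacian_pe, pos_enc_dim) are
# illustrative assumptions, not identifiers from this repository.
import numpy as np
import scipy.sparse as sp


def laplacian_pe(adj_sparse: sp.spmatrix, pos_enc_dim: int) -> np.ndarray:
    """Return the first `pos_enc_dim` non-trivial eigenvectors of the
    symmetric normalized Laplacian as node positional encodings."""
    n = adj_sparse.shape[0]

    # Degree and D^{-1/2}, guarding against isolated nodes.
    deg = np.asarray(adj_sparse.sum(axis=1)).ravel().astype(float)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    D_inv_sqrt = sp.diags(d_inv_sqrt)

    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    L = sp.eye(n) - D_inv_sqrt @ adj_sparse @ D_inv_sqrt

    # Dense eigendecomposition is acceptable for the small benchmark graphs.
    eigvals, eigvecs = np.linalg.eigh(L.toarray())

    # Skip the trivial constant eigenvector; take the next pos_enc_dim.
    return eigvecs[:, 1:pos_enc_dim + 1]


# The fix in this PR amounts to always calling such a routine with the
# original sparse adjacency, even when the attention layer itself later
# operates on the fully connected graph.
```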

--

P.S. Note that full graph attention is not what the paper finds best for a graph transformer architecture, and this bug fix does not change the paper's main results, analysis, or conclusions. The updated Table 1 will appear in the next arXiv version of the paper.

Thanks to @Saro00 for pointing this out.

@vijaydwivedi75 vijaydwivedi75 merged commit 3c83b4b into main Jan 22, 2021
@vijaydwivedi75 vijaydwivedi75 deleted the pe-full-graph branch January 22, 2021 04:15
