
Peer Review #3 #3

Open

distillpub-reviewers opened this issue Jun 8, 2021 · 2 comments

@distillpub-reviewers (Collaborator)

The following peer review was solicited as part of the Distill review process.

The reviewer chose to waive anonymity. Distill offers reviewers a choice between anonymous review and offering reviews under their name. Non-anonymous review allows reviewers to get credit for the service they offer to the community.

Distill is grateful to Humza Iqbal for taking the time to review this article.


General Comments

Highly enjoyed the article! It was a great look into GNNs, various aspects of them, and the problems they are used on. My favorite part was how thorough the article was in exploring the mechanics, diving into aspects such as different pooling functions, how to batch them, and so on. The diagrams were very fun to play around with; being able to manipulate the graphs made it easy to see how they were affected by changing the different building blocks.

One thing that may be nice to add, or at least reference, is this article on the equivalence between Transformers and GNNs: https://graphdeeplearning.github.io/post/transformers-are-gnns/. I thought of it when Transformers were mentioned in the article ("This refers to the way text is represented in RNNs; other models, such as Transformers"). An aside could be added in the section where Graph Attention Networks are mentioned.
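As a rough sketch of the correspondence (a hypothetical numpy example of mine, not code from the article): single-head self-attention can be read as attention-weighted message passing on a fully connected graph over the tokens.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8                      # 5 tokens = 5 nodes of a complete graph, feature size 8
X = rng.normal(size=(n, d))      # node (token) features
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)            # pairwise "edge" scores between all nodes
attn = np.exp(scores)
attn /= attn.sum(axis=1, keepdims=True)  # softmax over each node's neighbors (here: every node)
H = attn @ V                             # each node aggregates attention-weighted messages
print(H.shape)                           # (5, 8): updated node representations
```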

It may also be good to point out that there is ongoing research in message passing on finding the optimal way to get information to flow through the graph. As an example, this paper deals with the issue of encoding global information well: https://arxiv.org/abs/2009.03717. On that note, it might be good to add a sentence on the limitations of message passing (i.e., if I increase my window size too much, I risk my node representations converging and losing the ability to update).
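As a toy illustration of that convergence (again a hypothetical numpy sketch of mine): repeated mean-aggregation message passing on a small graph drives the node representations toward one another, so after enough rounds the nodes become indistinguishable.

```python
import numpy as np

# Adjacency (with self-loops) of a 4-node path graph: 0-1-2-3
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
A /= A.sum(axis=1, keepdims=True)  # row-normalize: mean aggregation

H = np.eye(4)                      # one-hot initial node features
for step in range(1, 21):
    H = A @ H                      # each node averages over its neighborhood
    if step in (1, 5, 20):
        spread = np.ptp(H, axis=0).max()  # largest feature difference across nodes
        print(f"step {step}: max feature spread = {spread:.4f}")
# The spread shrinks toward 0: the node representations have converged.
```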


Distill employs a reviewer worksheet to help reviewers.

The first three parts of this worksheet ask reviewers to rate a submission along certain dimensions on a scale from 1 to 5. While the scale meaning is consistently "higher is better", please read the explanations for our expectations for each score—we do not expect even exceptionally good papers to receive a perfect score in every category, and expect most papers to be around a 3 in most categories.

Any concerns or conflicts of interest that you are aware of?: No known conflicts of interest
What type of contributions does this article make?: Exposition on an emerging research direction

Advancing the Dialogue Score
How significant are these contributions? 4/5
Outstanding Communication Score
Article Structure 5/5
Writing Style 4/5
Diagram & Interface Style 4/5
Impact of diagrams / interfaces / tools for thought? 4/5
Readability 4/5

Comments on Readability

The diagrams were overall quite good. One nitpick I have is that for the diagram showing the difference between max, sum, and mean pooling, it might be better to write "No pooling type can always distinguish between graph pairs such as max pooling on the left and sum / mean pooling on the right".
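To make the point with concrete numbers (hypothetical node values of mine, not the article's): for any single aggregator there is a pair of node-feature multisets it cannot separate, even though another aggregator can.

```python
# Two pairs of "graphs", each given by the multiset of its node values.
pairs = {
    "max and mean fail, sum separates": ([1, 2], [1, 1, 2, 2]),
    "sum and mean fail, max separates": ([1, 3], [2, 2]),
}
mean = lambda xs: sum(xs) / len(xs)
for label, (g1, g2) in pairs.items():
    print(label)
    for name, agg in (("sum", sum), ("max", max), ("mean", mean)):
        print(f"  {name}: {agg(g1)} vs {agg(g2)}")
```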

Some minor grammatical nitpicks:

  1. In the section on Graph Attention Networks, the LaTeX doesn't seem formatted quite right for the phrase "( f(node_i, node_j))"; perhaps there was a slight LaTeX error?

  2. The phrase "design design" appears in the section on "Learning Edge Representations", where I believe "design decision" was meant.

  3. In the section "GNN Playground", I believe 'allyl alcohol' and 'depth' were meant to be italicized.

Scientific Correctness & Integrity Score
Are claims in the article well supported? 4/5
Does the article critically evaluate its limitations? How easily would a lay person understand them? 4/5
How easy would it be to replicate (or falsify) the results? 4/5
Does the article cite relevant work? 4/5
Does the article exhibit strong intellectual honesty and scientific hygiene? 3/5

Comments on Scientific Integrity

The article talks about the limitations involved in setting up GNNs and working with them (such as the tradeoffs between different aggregation functions); however, it would have been nice to see some notes on how well GNNs work on various problems such as generative modeling or interpretability. I put the overall score for the limitations category at a 4; however, if I were to break limitations down into how well particular limitations were explained and overall limitation coverage, I would give scores of 4 and 3 respectively.

@beangoben (Collaborator)

> One thing that may be nice to add, or at least reference, is this article on the equivalence between Transformers and GNNs: https://graphdeeplearning.github.io/post/transformers-are-gnns/. I thought of it when Transformers were mentioned in the article ("This refers to the way text is represented in RNNs; other models, such as Transformers"). An aside could be added in the section where Graph Attention Networks are mentioned.

We agreed and expanded this connection in the Graph Attention Networks subsection.

> It may also be good to point out that there is ongoing research in message passing on finding the optimal way to get information to flow through the graph. As an example, this paper deals with the issue of encoding global information well: https://arxiv.org/abs/2009.03717. On that note, it might be good to add a sentence on the limitations of message passing (i.e., if I increase my window size too much, I risk my node representations converging and losing the ability to update).

We agreed and added a subsection, "Some frontiers (and limitations) with GNNs", in the "Into the Weeds" section.

@beangoben (Collaborator) commented Jul 27, 2021

We thank the reviewer for their time and attention. We have taken their comments into consideration, and we think our work is stronger because of them.

Next, we summarize most of the changes that we have made based on feedback from all reviewers:

Reviewer 1 made several points on improving the writing and presentation of ideas; this resulted in simplifying the language of several sentences, breaking down paragraphs, and expanding examples for some concepts.

Reviewer 1 also asked us to improve on the "lessons" of the GNN playground. These lessons became the subsection "Some empirical GNN design lessons", which details new interactive visualizations showing some of the larger architecture trends for the playground.

Reviewer 3 made a point about expanding on the connection between Transformers and GNNs, and also on some of the current limitations of GNNs and message passing frameworks.

All reviewers noted a few typos, LaTeX equation errors, and grammatical mistakes, which we have fixed. The bibliography has expanded slightly.

For a more detailed breakdown of the changes:

  • [Reviewer 1] Broke up the first paragraph and added more explicit examples.
  • [Reviewer 1] Changed ordering of appearance of graph attributes.
  • [Reviewer 1] Added a sentence to the first paragraph of "Graphs and where to find them" to better introduce and motivate why we look at images and text as graphs.
  • [Reviewer 1 & 3] For the "Text as graphs" section, we clarified the figure caption and made a connection to Transformers.
  • [Reviewer 1] In the "Graph-valued data in the wild" section, for the other examples table we added a Domain column for each graph to denote the area of the dataset.
  • [Reviewer 1] When introducing graphs, we added an additional visualization that showcases embeddings for different graph attributes, and clarified that while we show scalar values, in practice we expect vector values.
  • [Reviewer 1] Added an aside to each example in the "Passing messages between parts of the graph" section.
  • [Reviewer 3] Expanded caption in figure for "comparing aggregation operations" section.
  • [Reviewer 3] Expanded the connection between transformers and GNNs in the Graph Attention Networks section.
  • [Reviewer 1 & 3] Added "Some empirical GNN design lessons", which has 5 interactive plots with an insight per plot.
  • [Reviewer 1] Added a section on more detailed notes on implementing graph convolutions.
  • [Reviewer 1 & 3] Expanded on some of the current limitations of GNNs and areas for improvement on modelling.
