Model diagrams for the GNN examples #556
Comments
@AlanSwift @hugochan can you please help?
Currently, we don't provide architecture diagrams for the specific applications. But we have visualized specific graph types, such as dependency graphs, in our survey paper.
There are some differences.
@AlanSwift thanks for your response. But I can't find that much detail in the documentation. I see that you first generate the initial node embeddings using word2vec or BERT, but your statement about the step that follows is unclear to me.
@AlanSwift I am also a bit confused here. You said that you use separate attention:
But the NMT example seems to use a different setup. Thanks in advance for your help.
Just an example:
@AlanSwift this RNN encoder comes after the word2vec or BERT embedding. The document states:
I do not understand why/how the BiLSTM encoder is used to encode the whole graph. Can you please explain?
@AlanSwift this part is quite confusing. Why/how do you encode the whole graph with the BiLSTM? Also, for the decoder pipeline, you mentioned:
Has any other paper used this approach? Can you please point me to a reference paper? I would also appreciate pointers to how it is done in the code.
word2vec, BiLSTM, BERT, etc. are used to initialize the node embeddings, which enriches them with contextual information. This trick is widely used in NLP & GNN research; https://arxiv.org/pdf/1908.04942.pdf is just one example. For technical details, please refer to the implementations.
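To make that trick concrete, here is a minimal sketch (not Graph4NLP's actual API; all names, dimensions, and the tiny tanh RNN cell are illustrative stand-ins) in which graph nodes are sentence tokens: a static embedding lookup plays the role of word2vec/BERT, and a small bidirectional RNN pass adds left and right context to each node embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = {"the": 0, "cat": 1, "sat": 2}
emb_dim, hid_dim = 8, 4

# Static lookup table standing in for word2vec / BERT output.
E = rng.normal(size=(len(vocab), emb_dim))

# Toy RNN cell (a real BiLSTM would have gates): h_t = tanh(W x_t + U h_{t-1})
W = rng.normal(size=(hid_dim, emb_dim)) * 0.1
U = rng.normal(size=(hid_dim, hid_dim)) * 0.1

def rnn_pass(X):
    h = np.zeros(hid_dim)
    out = []
    for x in X:
        h = np.tanh(W @ x + U @ h)
        out.append(h)
    return np.stack(out)

def contextual_node_embeddings(tokens):
    X = E[[vocab[t] for t in tokens]]          # (n, emb_dim) static embeddings
    fwd = rnn_pass(X)                          # left-to-right context
    bwd = rnn_pass(X[::-1])[::-1]              # right-to-left context
    # Each node embedding now carries bidirectional sequential information.
    return np.concatenate([fwd, bwd], axis=1)  # (n, 2 * hid_dim)

H = contextual_node_embeddings(["the", "cat", "sat"])
print(H.shape)  # (3, 8)
```

The point is only the shape of the computation: the output is still one vector per node, ready to be handed to a GNN encoder, not a single vector for the whole graph.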
@AlanSwift I am not asking about the initialization step. As you can see, the document states:
As per the document:
I am asking about the step where the BiLSTM encoder is said to encode the whole graph.
Considering bidirectional sequential information is beneficial for most NLP tasks.
@AlanSwift got it. But why is the BiLSTM encoder said to encode the whole graph? I would think it is used to update the node embeddings. Isn't it? Is the description incorrect?
@AlanSwift so the BiLSTM encoder is used to update the initial node embeddings. It appears to me that the updated node embeddings are then fed to the GCN encoder. Is this understanding correct?
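Under that reading, the step after the BiLSTM can be sketched as a single generic GCN layer (a standard GCN update for illustration, not the repo's actual code; the graph, shapes, and weights below are all hypothetical): each node aggregates its neighbors' BiLSTM-updated embeddings through a normalized adjacency matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

n, d_in, d_out = 3, 8, 6

# BiLSTM-updated node embeddings (stand-in values).
H = rng.normal(size=(n, d_in))

# Toy dependency-style graph over the 3 nodes.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

# Standard GCN propagation: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)
A_hat = A + np.eye(n)                    # add self-loops
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(d ** -0.5)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

W = rng.normal(size=(d_in, d_out)) * 0.1
H_next = np.maximum(0, A_norm @ H @ W)   # one GCN encoder layer
print(H_next.shape)  # (3, 6)
```

So the BiLSTM supplies the input node features, and the GCN layer is where the graph structure actually enters the computation.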
@AlanSwift I understand that bidirectional sequential information is beneficial for NLP tasks. But the BiLSTM encoder updates the initial node embeddings, so I am confused when you state that it encodes the whole graph. Would you please assist me with this question?
Yes. It is correct. |
This issue will be closed. Feel free to reopen it if needed. |
❓ Questions and Help
This repo presents a couple of nice examples for GNNs.
I am particularly interested in:
Do you have the model architecture described somewhere as part of the tutorial or documentation?
Alternately do you have a canonical architecture described somewhere for these Graph2Seq based models?
Is the model the same as in Graph2Seq: A Generalized Seq2Seq Model for Graph Inputs?