
How does the model stay robust when it still uses the adjacency information implicitly? #6

Open
alvinsun724 opened this issue Nov 24, 2021 · 2 comments

Comments

@alvinsun724

Hi, your work is really inspiring and I have one question.

In the paper, you say the model is more robust when facing large-scale graph data and corrupted adjacency information because it utilizes the adjacency information implicitly, unlike GCN, which uses the adjacency information directly during the aggregation phase.

However, you still use the adjacency information (possibly even its 4th power) when computing the NContrast loss. How does this maintain robust performance under heavily corrupted adjacency information, given that the adjacency matrix is still needed for the NContrast loss during training?
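For reference, this is roughly how I understand the NContrast loss using the r-th power of the adjacency matrix (my own PyTorch sketch based on the paper, not the repo's exact code; the function name, masking details, and epsilon are my assumptions):

```python
import torch
import torch.nn.functional as F

def ncontrast_loss(z, adj_pow, tau=1.0):
    """Sketch of the NContrast loss from the Graph-MLP paper.

    z:       (N, d) node embeddings output by the MLP
    adj_pow: (N, N) r-th power of the normalized adjacency matrix,
             acting as the neighborhood weights gamma_ij
    tau:     temperature
    """
    # cosine similarity between all pairs of embeddings
    z_norm = F.normalize(z, dim=1)
    sim = torch.exp(z_norm @ z_norm.t() / tau)                # (N, N)
    # zero out self-similarity on the diagonal
    sim = sim * (1.0 - torch.eye(z.size(0), device=z.device))
    # numerator: similarity to r-hop neighbors; denominator: to all other nodes
    pos = (sim * adj_pow).sum(dim=1)
    denom = sim.sum(dim=1)
    # epsilon guards nodes with no r-hop neighbors in the batch
    return -torch.log(pos / denom + 1e-8).mean()
```

Here `adj_pow` would be something like `torch.linalg.matrix_power(adj_norm, 4)` for the 4th power mentioned above, so corruption in the adjacency matrix still enters the loss through these weights.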

Is it because the adjacency information is only needed during training, rather than in both the training and test phases? Or is there some other justification?

I am really confused about that and look forward to your reply.

Thanks a lot

@yanghu819
Owner

Hi, I think this issue helps: #5.

@alvinsun724
Author

> Hi, I think this issue helps: #5.

Thanks. Is my understanding correct that the difference in robustness between Graph-MLP and GCN is mainly due to Graph-MLP not using any adjacency information in the test phase?
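If that is right, test-time inference would be something like the following sketch (assuming the trained model is a plain MLP; `model` and `features` are placeholder names, not the repo's API):

```python
import torch

# At test time, Graph-MLP's forward pass needs only node features;
# no adjacency matrix is involved (placeholder names, not the repo's API).
model.eval()
with torch.no_grad():
    logits = model(features)       # features: (N, d) -> logits: (N, num_classes)
    preds = logits.argmax(dim=1)   # per-node class predictions
```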
