
Inductive setting #3

Open
sungeun532 opened this issue May 26, 2022 · 2 comments

Comments

@sungeun532

What is the specific training process for the inductive setting?
If a new node (unseen during training) arrives at test time, the adjacency matrix differs from the one used during training, and the LSI value of each node also changes. Wouldn't it then be impossible to reuse the MLP parameters optimized on the training graph structure?

@Doehong

Doehong commented Dec 4, 2023

Hi, I'm looking for a way to run the inductive experiment. Did you figure it out? I have the same confusion.

@zwt233
Owner

zwt233 commented Dec 4, 2023


Hi. The process of computing the LSI value of each node is non-parametric and training-free, so it can be treated as a data pre-processing step. The main motivation of LSI is to obtain good, node-adaptive smoothed features: in most graph datasets, the performance bottleneck is the feature representation rather than the downstream model. In our experiments, we found that the influence of the MLP parameters on performance is very small; you can even train an XGBoost model and get similar performance. That is why we reuse the MLP parameters optimized on the training graph structure in our experiments.
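To make the workflow concrete, here is a minimal sketch of the inductive pipeline described above. It assumes a generic symmetric-normalized feature-propagation step as the training-free pre-processing; the function names (`smooth_features`, the `mlp` usage) and the uniform hop weighting are illustrative assumptions, not the exact node-adaptive LSI weighting from the paper. The point is only that the smoothing step takes whatever graph is available (training or test) and requires no learned parameters, so the downstream model can be reused unchanged.

```python
# Sketch of the inductive workflow: training-free smoothing + reusable downstream model.
# The uniform hop weighting below is a simplification of the node-adaptive scheme.
import numpy as np
import scipy.sparse as sp

def smooth_features(adj: sp.spmatrix, feats: np.ndarray, k: int = 4) -> np.ndarray:
    """Training-free pre-processing: propagate node features over the graph.

    adj   : sparse adjacency matrix of the (train or test) graph
    feats : node feature matrix, shape [num_nodes, num_feats]
    k     : number of propagation hops
    """
    n = adj.shape[0]
    adj_hat = adj + sp.eye(n)                      # add self-loops
    deg = np.asarray(adj_hat.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(np.power(deg, -0.5))
    norm_adj = d_inv_sqrt @ adj_hat @ d_inv_sqrt   # D^-1/2 (A + I) D^-1/2

    out = feats.astype(np.float64)
    smoothed = out.copy()
    for _ in range(k):
        smoothed = norm_adj @ smoothed             # one hop of smoothing
        out = out + smoothed                       # uniform hop weights (illustrative)
    return out / (k + 1)

# Training: smooth features on the training graph, then fit any downstream model.
# Inductive test: recompute smoothed features on the *new* graph (the step is
# non-parametric, so nothing is re-trained), then reuse the fitted model.
#
#   x_train = smooth_features(adj_train, feats_train)
#   mlp.fit(x_train[train_idx], y_train)
#   x_test  = smooth_features(adj_test, feats_test)   # includes unseen nodes
#   preds   = mlp.predict(x_test[test_idx])
```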
