
A question about quantitative results. #1

Open

johannwyh opened this issue Jul 18, 2021 · 1 comment

Comments

@johannwyh

Hello!

I have a question about Table 2 in your paper, where all node classification accuracies are reported as acc (± sigma).

My question is: since "± sigma" is computed over runs with different `torch.manual_seed` values, should the dataset be split differently for each seed? In other words, when running a given method on a dataset multiple times, do all runs share the same train/val/test split?

Thank you very much!

@RuijiaW
Collaborator

RuijiaW commented Jul 21, 2021

We investigate the stability of the methods under different parameter initializations, so all methods use the same train/val/test split in Table 2. This setting also matches the standard data splits used in the baselines' original papers.
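
For reference, here is a minimal runnable sketch of that protocol on synthetic data (the toy model, feature sizes, and dataset size are placeholders, not the paper's code): the split is drawn once, and only `torch.manual_seed` varies across runs.

```python
import torch
import torch.nn as nn

# Toy stand-ins: in the paper's setting these would be the graph dataset and GNN.
X = torch.randn(200, 16)                 # node features (synthetic)
y = torch.randint(0, 3, (200,))          # node labels (synthetic)

# One fixed train/val/test split, created once and reused for every run.
g = torch.Generator().manual_seed(0)
perm = torch.randperm(200, generator=g)
train_idx, val_idx, test_idx = perm[:120], perm[120:160], perm[160:]

def run_once(seed: int) -> float:
    torch.manual_seed(seed)              # varies only the parameter initialization
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(100):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(X[train_idx]), y[train_idx])
        loss.backward()
        opt.step()
    # val_idx would drive model selection; omitted in this toy loop.
    pred = model(X[test_idx]).argmax(dim=1)
    return (pred == y[test_idx]).float().mean().item()

accs = torch.tensor([run_once(s) for s in range(10)])
print(f"acc = {accs.mean():.3f} ± {accs.std():.3f}")  # the acc (± sigma) of Table 2
```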

By the way, I think multiple random data splits or k-fold cross-validation may provide stronger evidence for model selection; a small sketch follows.
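
For illustration, a minimal sketch of that alternative using scikit-learn's `KFold` (the dataset size of 200 is a placeholder): here the partition itself is resampled, so the reported sigma would reflect split variance as well as initialization variance.

```python
import numpy as np
from sklearn.model_selection import KFold

# Five random train/test partitions of 200 nodes; training and evaluating on
# each fold varies the split itself, not only the parameter initialization.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(np.arange(200))):
    print(fold, len(train_idx), len(test_idx))
```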
