I have a question about Table 2 in your paper, where all node classification accuracies are reported as acc (± sigma).
Since "± sigma" is generated from different values of the torch manual seed, should the dataset also be split differently for each seed? In other words, when running a given method on a dataset multiple times, do all runs share the same train/val/test split?
Thank you very much!
We investigate the stability of methods under different parameter initializations, so all methods use the same train/val/test split in Table 2. This setting also matches the standard data splits from the original papers of the baselines.
By the way, I think multiple random data splits or k-fold cross-validation could provide stronger evidence for model selection.
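To make the distinction between the two protocols concrete, here is a minimal sketch. Everything here is illustrative, not from the paper: `random_split`, `train_and_test`, the split fractions, and the dummy accuracy value are all hypothetical stand-ins; in practice the second seed would be passed to `torch.manual_seed` to control parameter initialization.

```python
import random
import statistics

def random_split(num_nodes, train_frac=0.6, val_frac=0.2, seed=0):
    """Random train/val/test node split controlled by its own seed.
    Fractions here are hypothetical, not the ones used in the paper."""
    rng = random.Random(seed)
    perm = list(range(num_nodes))
    rng.shuffle(perm)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    return (perm[:n_train],
            perm[n_train:n_train + n_val],
            perm[n_train + n_val:])

def train_and_test(split, init_seed):
    """Hypothetical stand-in for a training run: init_seed would control
    only the parameter initialization (e.g. via torch.manual_seed)."""
    train_idx, val_idx, test_idx = split
    # ... train on train_idx, select on val_idx, evaluate on test_idx ...
    return 0.80 + 0.001 * (init_seed % 3)  # dummy accuracy, for illustration

# Protocol behind Table 2: ONE fixed split, several initialization seeds.
fixed_split = random_split(num_nodes=2708, seed=0)
accs = [train_and_test(fixed_split, init_seed=s) for s in range(10)]
print(f"{statistics.mean(accs):.3f} ± {statistics.pstdev(accs):.3f}")

# Alternative protocol: also resample the split on every run, so the
# reported sigma reflects split variance as well as init variance.
accs = [train_and_test(random_split(2708, seed=s), init_seed=s)
        for s in range(10)]
```

The key point is which randomness the ± sigma captures: with a fixed split it measures sensitivity to initialization only, while resampling the split (or using k-fold cross-validation) folds split variance into the estimate as well.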