
Unfair comparison with other models. #1

Closed
hengruizhang98 opened this issue Apr 2, 2021 · 1 comment

Comments

@hengruizhang98

In eval.py, the train/test split follows a random 90% / 10% scheme rather than the public split, while the baseline models (e.g. DGI) use the public split for evaluation.
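For context, the two protocols differ substantially in how much labeled data the downstream classifier sees. Here is a minimal sketch of the difference, assuming PyTorch Geometric's Planetoid loader (the loader and variable names are illustrative, not taken from eval.py):

```python
import torch
from torch_geometric.datasets import Planetoid

dataset = Planetoid(root='data/Cora', name='Cora')
data = dataset[0]

# Public (Planetoid) split: fixed boolean masks shipped with the dataset.
public_train_idx = data.train_mask.nonzero(as_tuple=False).view(-1)
public_test_idx = data.test_mask.nonzero(as_tuple=False).view(-1)

# Random 90% / 10% split, as described for eval.py: a fresh permutation
# of all nodes, with 90% used to train the downstream classifier.
perm = torch.randperm(data.num_nodes)
cut = int(0.9 * data.num_nodes)
random_train_idx, random_test_idx = perm[:cut], perm[cut:]

print(f'public train size: {public_train_idx.numel()}')  # 140 on Cora
print(f'random train size: {random_train_idx.numel()}')  # ~2437 on Cora
```

On Cora, the public split trains on only 140 labeled nodes, whereas a random 90% split trains on roughly 2,400 of the 2,708 nodes, so accuracies obtained under the two protocols are not directly comparable.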

@Linyxus
Collaborator

Linyxus commented Apr 3, 2021

Hi, Hengrui. The metrics reported for the baseline models were obtained under the identical protocol used for GRACE. Note that the result reported for DGI in our paper (e.g. 82.6 on Cora) differs from the one in the original paper (e.g. 82.3 on Cora).
