
Question about linear evaluation #2

Closed
THINK2TRY opened this issue Jan 19, 2022 · 4 comments

Comments

@THINK2TRY

Generally, only the validation set is used to select the best model during training. So I wonder: is it appropriate to use test_acc to select the model whose accuracy is reported as eval_acc?
[screenshot of the relevant model-selection code in the linear evaluation script]
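
For context, a minimal sketch of the validation-based selection protocol being asked about (the function name, hyperparameters, and mask-based data layout here are illustrative assumptions, not the repository's actual code):

```python
import torch
import torch.nn as nn

def linear_evaluation(embeddings, labels, train_mask, val_mask, test_mask,
                      n_epochs=300, lr=1e-2, weight_decay=1e-4):
    """Train a linear classifier on frozen embeddings.

    Selects the best epoch by *validation* accuracy and reports the test
    accuracy at that epoch; test_acc is never used for model selection.
    """
    n_classes = int(labels.max()) + 1
    clf = nn.Linear(embeddings.size(1), n_classes)
    opt = torch.optim.Adam(clf.parameters(), lr=lr, weight_decay=weight_decay)

    best_val_acc, test_acc_at_best_val = 0.0, 0.0
    for epoch in range(n_epochs):
        clf.train()
        opt.zero_grad()
        loss = nn.functional.cross_entropy(clf(embeddings[train_mask]),
                                           labels[train_mask])
        loss.backward()
        opt.step()

        clf.eval()
        with torch.no_grad():
            preds = clf(embeddings).argmax(dim=1)
            val_acc = (preds[val_mask] == labels[val_mask]).float().mean().item()
            test_acc = (preds[test_mask] == labels[test_mask]).float().mean().item()
        if val_acc > best_val_acc:           # selection uses val_acc only
            best_val_acc = val_acc
            test_acc_at_best_val = test_acc  # recorded, not used for selection
    return test_acc_at_best_val
```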

@hengruizhang98
Owner

Hi, thanks for your question. This might be inappropriate, but in practice it makes little difference to the final evaluation accuracy. Just make sure to use the same setting for all the methods being compared.

@THINK2TRY
Author

THINK2TRY commented Jan 19, 2022

Thanks for your response.
I tried to reproduce the results after the modification. The results on Cora and Citeseer are nearly unchanged, but the performance on PubMed actually decreases (~0.8091, averaged over 20 random initializations).
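
Continuing the sketch above, the 20-initialization average quoted here could be computed along these lines (a hypothetical driver; the actual repo's seed handling may differ, and `embeddings`, `labels`, and the masks are assumed from the earlier sketch):

```python
import numpy as np
import torch

# Repeat the linear evaluation over 20 random initializations and
# report the mean test accuracy.
accs = []
for seed in range(20):
    torch.manual_seed(seed)  # controls the classifier's random init
    accs.append(linear_evaluation(embeddings, labels,
                                  train_mask, val_mask, test_mask))
print(f"test acc: {np.mean(accs):.4f} +/- {np.std(accs):.4f}")
```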

@hengruizhang98
Owner

Thanks again for pointing out this problem. We will update this result in a future version if possible. And as you can see, our method still beats the other methods on the PubMed dataset. Feel free to use your reproduced results if you use our method as a baseline in the future.

Best

@THINK2TRY
Author

Thank you for your reply! And thanks for the excellent work!
