OGB Hyperparameters #12

Open

Timob12 opened this issue Jan 29, 2022 · 3 comments

@Timob12 commented Jan 29, 2022

Hello,

Are there any specific hyperparameters one should use when trying to reproduce the OGB node classification results? I extended the approach from the Cora, Citeseer, and Pubmed datasets to the OGB-arxiv dataset, but I only achieve ~50% accuracy rather than the ~70% stated in the paper. This is especially strange because the Cora, Citeseer, and Pubmed results all agree with the ones in the paper. Also, your link to the preprocessed files is not working ("https://drive.google.com/drive/u/0/my-drive").

@allenhaozhu (Owner)
Sorry, can you share how you train ogb-arxiv? Usually I use SSGC to get the embeddings and then train an MLP as the classifier. That is why I provide two different files.
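
For reference, a minimal sketch of that two-stage pipeline (SSGC-style propagation to precompute node embeddings, then a plain MLP on top) could look like the following. The propagation form, K, alpha, the MLP width, and the optimizer settings are illustrative assumptions, not the repo's exact script.

```python
# Minimal sketch (not the repo's exact script): SSGC-style propagation on
# ogbn-arxiv followed by an MLP classifier. K, alpha, the MLP width, and the
# optimizer settings below are illustrative assumptions.
import numpy as np
import scipy.sparse as sp
import torch
import torch.nn.functional as F
from ogb.nodeproppred import PygNodePropPredDataset, Evaluator
from torch_geometric.utils import to_scipy_sparse_matrix

dataset = PygNodePropPredDataset(name="ogbn-arxiv")
data = dataset[0]
split = dataset.get_idx_split()

# Symmetrically normalised adjacency with self-loops: T = D^{-1/2} (A + I) D^{-1/2}.
# ogbn-arxiv is a directed graph, so it is symmetrised first (an assumption).
adj = to_scipy_sparse_matrix(data.edge_index, num_nodes=data.num_nodes)
adj = ((adj + adj.T) > 0).astype(np.float64) + sp.eye(data.num_nodes)
deg = np.asarray(adj.sum(axis=1)).flatten()
d_inv_sqrt = sp.diags(np.power(deg, -0.5))
T = (d_inv_sqrt @ adj @ d_inv_sqrt).tocsr()

# SSGC-style propagation: X_bar = (1/K) * sum_{k=1..K} [(1 - alpha) T^k X + alpha X]
K, alpha = 16, 0.05
X = data.x.numpy().astype(np.float64)
prop, agg = X.copy(), np.zeros_like(X)
for _ in range(K):
    prop = T @ prop
    agg += (1.0 - alpha) * prop + alpha * X
X_bar = torch.from_numpy(agg / K).float()

# Stage 2: an ordinary MLP trained on the precomputed embeddings.
mlp = torch.nn.Sequential(
    torch.nn.Linear(X_bar.size(1), 256), torch.nn.ReLU(), torch.nn.Dropout(0.5),
    torch.nn.Linear(256, dataset.num_classes),
)
opt = torch.optim.Adam(mlp.parameters(), lr=0.01, weight_decay=1e-5)
y = data.y.squeeze()
for epoch in range(300):
    mlp.train()
    opt.zero_grad()
    loss = F.cross_entropy(mlp(X_bar[split["train"]]), y[split["train"]])
    loss.backward()
    opt.step()

# Evaluate with the official OGB evaluator.
mlp.eval()
with torch.no_grad():
    pred = mlp(X_bar[split["test"]]).argmax(dim=-1, keepdim=True)
acc = Evaluator(name="ogbn-arxiv").eval(
    {"y_true": data.y[split["test"]], "y_pred": pred}
)["acc"]
print(f"test accuracy: {acc:.4f}")
```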

@Timob12 (Author) commented Feb 2, 2022

Sorry, I made a mistake when multiplying the matrices. It is working now and I get similar results. However, the link to the preprocessed files in your document classification directory is still not accessible.

@allenhaozhu (Owner)
> Sorry, I made a mistake when multiplying the matrices. It is working now and I get similar results. However, the link to the preprocessed files in your document classification directory is still not accessible.

I guess you need to generate the intermediate files yourself because these files are too big. I follow the guidelines from TextGCN.
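
With the caveat that the actual preprocessing should follow the TextGCN authors' own scripts, a rough sketch of the graph those intermediate files describe is below: document-word edges weighted by TF-IDF and word-word edges weighted by positive PMI over sliding windows. The function name and window size are illustrative assumptions.

```python
# Rough sketch of TextGCN-style graph construction (not the repo's actual
# preprocessing): doc-word edges weighted by TF-IDF, word-word edges weighted
# by positive PMI over sliding windows. The window size is an assumed default.
import itertools
from collections import Counter
from math import log

import scipy.sparse as sp
from sklearn.feature_extraction.text import TfidfVectorizer


def build_textgcn_adj(docs, window_size=20):
    # Doc-word block: TF-IDF weights on whitespace-tokenised documents.
    vec = TfidfVectorizer(tokenizer=str.split, lowercase=False, token_pattern=None)
    tfidf = vec.fit_transform(docs)          # shape (n_docs, n_words)
    vocab = vec.vocabulary_
    n_docs, n_words = tfidf.shape

    # Word-word block: positive PMI estimated from sliding windows.
    win_count, pair_count, n_windows = Counter(), Counter(), 0
    for doc in docs:
        toks = doc.split()
        for i in range(max(1, len(toks) - window_size + 1)):
            window = sorted(set(toks[i:i + window_size]))
            n_windows += 1
            win_count.update(window)
            pair_count.update(itertools.combinations(window, 2))

    rows, cols, vals = [], [], []
    for (w1, w2), c in pair_count.items():
        if w1 in vocab and w2 in vocab:
            pmi = log(c * n_windows / (win_count[w1] * win_count[w2]))
            if pmi > 0:
                i, j = vocab[w1], vocab[w2]
                rows.extend([i, j])
                cols.extend([j, i])
                vals.extend([pmi, pmi])
    ww = sp.coo_matrix((vals, (rows, cols)), shape=(n_words, n_words))

    # Assemble the (docs + words) x (docs + words) adjacency with self-loops.
    adj = sp.bmat([[None, tfidf], [tfidf.T, ww]], format="csr")
    return adj + sp.eye(n_docs + n_words)
```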
