Reproduce results on pokec and snap-patents #1

Closed
ShichangZh opened this issue Nov 13, 2021 · 2 comments

@ShichangZh

Hello,

Thank you so much for putting these datasets together for public access. Very interesting and well-written paper as well!

I have encountered some issues when trying to reproduce the results on the pokec and snap-patents datasets. For the simplest GCN model, my results on these two datasets are ~62% and ~41% respectively, whereas the accuracies reported in the paper are ~75% and ~45%. In both cases, I used hidden_dim = 32 and searched over lr = [0.1, 0.01, 0.001]. May I ask what hyperparameters I should use to reach the accuracy reported in the paper? Also, after how many epochs did your training converge?

Appendix B1 of the paper says that the best results were found by also searching over hidden_dim = [4, 8, 16, 32]. However, my training accuracy is already similar to the validation/test accuracy, so I am not sure that reducing hidden_dim would help. Also, since these two datasets are large, running the full hyperparameter search again would be expensive. Could you please share the exact hyperparameters you used?
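For concreteness, this is roughly the search I would have to rerun (a minimal sketch; `train_eval` is a stand-in for my own training/evaluation code, not something from this repo):

```python
# Minimal sketch of the grid search described above.
# `train_eval` is a hypothetical helper that trains a GCN with the given
# hyperparameters and returns validation accuracy; it is not part of this repo.
best = None
for hidden_dim in [4, 8, 16, 32]:
    for lr in [0.1, 0.01, 0.001]:
        val_acc = train_eval(hidden_dim=hidden_dim, lr=lr)
        if best is None or val_acc > best[0]:
            best = (val_acc, hidden_dim, lr)

print(f"best val acc {best[0]:.4f} with hidden_dim={best[1]}, lr={best[2]}")
```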

By the way, my results are on the first fixed split. My other guess is that the 5 fixed splits differ substantially from each other, so the averaged result could be high if the other splits produce higher accuracies. However, if that were the case, the variance across the 5 splits would seem too high. It would be great if you could also confirm that the accuracies on the 5 splits should be similar.

I really appreciate your help.

@cptq (Collaborator) commented Nov 14, 2021

Hello,
Thanks for the comments; I'm glad you find our work useful! I am not too sure what your issue could be. I just tried some manually selected hyperparameters and was basically able to reproduce the performance (75.39% on pokec, 44.99% on snap-patents) on my first try. Here are the commands:

python main.py --dataset pokec --method gcn --num_layers 2 --hidden_channels 32 --lr 0.01 --display_step 5 --runs 1

python main.py --dataset snap-patents --method gcn --num_layers 2 --hidden_channels 32 --lr 0.01 --display_step 5 --runs 1 --directed

Could you try those and let me know if they work? I don't think there is much variance across splits: our results table shows small standard deviations, and the splits were generated by sampling nodes uniformly at random, which should not produce much variance in graphs this large.

@ShichangZh (Author)

I managed to get similar results using your hyperparameters. However, I had to add self-loops to each node before I put the graph into the model; I assume adding self-loops is legitimate? Otherwise, it does not work for my implementation. My results for snap-patents even came out higher than the paper's. Anyway, thank you for your response, and thank you for contributing this interesting benchmark!
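In case it helps anyone else, adding the self-loops amounts to something like this (a minimal sketch assuming a PyTorch Geometric `Data` object; `model` is my own GCN, not code from this repo):

```python
from torch_geometric.utils import add_self_loops

# Add a self-loop to every node before running the GCN.
# `data` is assumed to be a torch_geometric.data.Data object with
# `edge_index` and `x`; `model` is a hypothetical GCN module.
edge_index, _ = add_self_loops(data.edge_index, num_nodes=data.num_nodes)
data.edge_index = edge_index

out = model(data.x, data.edge_index)
```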
