Wrong test result on Cora #2
Hi, what do you mean by "modified the shell file"? Could you share your versions of PyTorch and PyTorch Geometric?
So if you don't modify the shell file,
And actually, if we just clone the repo without any further modification and run
So I modified
and ran
Also, if I run in your required environment (torch==1.7.1+cu110, torch-geometric==1.6.3), I get these errors:
Hi, can you try commenting out Non-Homophily-Large-Scale/dataset.py, Line 336 in eb531f3,
and running python main.py --dataset Cora --method gcn --num_layers 3 --hidden_channels 32 --lr 0.01 --rand_split --train_prop 0.48 --valid_prop 0.32 --runs 5 --weight_decay 5e-4 --no_bn ? This should reproduce our GCN results in C.4.
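To illustrate what the `--rand_split --train_prop 0.48 --valid_prop 0.32` flags imply, here is a minimal NumPy sketch of a proportional random node split (an illustration under those proportions, not the repo's actual splitting code; the test set gets the remaining 20%):

```python
import numpy as np

def rand_split(num_nodes, train_prop=0.48, valid_prop=0.32, seed=0):
    """Randomly partition node indices into train/valid/test sets
    by the given proportions; test gets the remainder."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(train_prop * num_nodes)
    n_valid = int(valid_prop * num_nodes)
    train_idx = perm[:n_train]
    valid_idx = perm[n_train:n_train + n_valid]
    test_idx = perm[n_train + n_valid:]
    return train_idx, valid_idx, test_idx

# Cora has 2708 nodes; a 48/32/20 split gives 1299/866/543 nodes.
train_idx, valid_idx, test_idx = rand_split(2708)
```

Note that this is a very different regime from the 140-node Planetoid training split, which is one reason numbers from the two setups are not directly comparable.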
I think by running the command you provided, I can get your GCN results. But my question remains that your work gets wrong test results on the original GCN data splits.
Then the Cora dataset is split into train/valid/test = 140/500/1000, which is the original Cora data split for the semi-supervised task.
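For reference, that 140/500/1000 split follows the standard Planetoid "public" setup used in the original GCN paper: 20 labeled training nodes per class, with Cora having 7 classes and 2708 nodes in total:

```python
# Standard Planetoid ("public") split for Cora, as in the original GCN setup.
num_nodes = 2708                 # total Cora nodes
num_classes = 7                  # Cora paper-topic classes
train_size = 20 * num_classes    # 20 labeled nodes per class -> 140
valid_size, test_size = 500, 1000
label_rate = train_size / num_nodes  # only ~5.2% of nodes are labeled
```

The very low label rate is what makes this the "semi-supervised" setting, and it is far sparser than a 48%/32% random split.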
I see, my fault, I had some typos earlier. @llooFlashooll, the reason you get low performance on the first run is that the first run uses a hidden layer of width 4. If you instead use a higher width, say 64, you will do better (I get 78% on the Planetoid splits doing this). Playing around with the hyperparameters some more should get it up to 81% or higher.
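The role of hidden width can be sketched with a plain-NumPy two-layer GCN forward pass (a sketch of the standard Kipf & Welling propagation, not the repo's implementation); `hidden` below plays the role of `--hidden_channels`, and width 4 vs. 64 simply changes the shape of `W1` and `W2`:

```python
import numpy as np

def gcn_forward(A, X, W1, W2):
    """Two-layer GCN forward pass: Ahat @ relu(Ahat @ X @ W1) @ W2,
    where Ahat = D^{-1/2} (A + I) D^{-1/2} (symmetric normalization
    with self-loops)."""
    A_tilde = A + np.eye(A.shape[0])            # add self-loops
    d = A_tilde.sum(axis=1)                     # degrees (always >= 1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # normalized adjacency
    H = np.maximum(A_hat @ X @ W1, 0.0)         # hidden layer, width = W1.shape[1]
    return A_hat @ H @ W2                       # class logits

rng = np.random.default_rng(0)
n, f, c = 6, 5, 3          # toy graph: 6 nodes, 5 features, 3 classes
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-loops yet
X = rng.standard_normal((n, f))

hidden = 64                # try 4 here to see the capacity bottleneck
W1 = rng.standard_normal((f, hidden))
W2 = rng.standard_normal((hidden, c))
logits = gcn_forward(A, X, W1, W2)             # shape (n, c)
```

A width-4 hidden layer forces all node representations through a 4-dimensional bottleneck, which is too small for 7-class Cora; width 64 removes that bottleneck.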
Really thankful for your timely reply! In classic tutorials like https://pytorch-geometric.readthedocs.io/en/latest/notes/introduction.html#learning-methods-on-graphs (the "Learning Methods on Graphs" section), or https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gcn.py, they build a tiny network and can easily get up to the 81% result.
I get 81.50 ± 0.62 test accuracy. It seems to be the hyperparameter choices that give low performance, which shows the necessity of using the full hyperparameter grid to test these methods!
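Such a sweep can be sketched with `itertools.product`; the grid values below are hypothetical placeholders, not the paper's actual grid, and each config would correspond to one `main.py` run whose best model is selected by validation accuracy:

```python
from itertools import product

# Hypothetical hyperparameter grid (illustrative values only).
grid = {
    "lr": [0.01, 0.001],
    "hidden_channels": [32, 64, 128],
    "weight_decay": [5e-4, 1e-3],
}

# Cartesian product: one dict per run, 2 * 3 * 2 = 12 configurations.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
```

Sweeping the grid and reporting the best-by-validation configuration is what guards against the width-4 failure mode seen above.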
Very useful for our tests! Thanks for your guidance.
When I modified the shell file for the Cora dataset and ran the command:
bash experiments/gcn_exp.sh Cora
the test results only get to around 47~48. And we all know that GCN, as a classical model, gets results around 81 on Cora. I tried hard to debug and modify the code to find out what's wrong, but I'm still confused. Could you please give me an answer or a solution?