Hello, thank you for your excellent work; I learned a lot from your paper and your code.
However, when I reproduce your experiment with python main.py --cfg configs/GPS/ogbg-ppa-GPS.yaml wandb.use False as you instructed, the training statistics look very strange: they stay very low for a great many epochs.
I checked the closed issues in this repository and found one similar to mine (this one), where the dataset involved was code2; in my case the issue is with ppa.
My dependencies are as follows:
Here is part of my training process:
It stayed very low (below 0.1117) for a long time, about 160 epochs, then jumped abruptly to 0.78 at epoch 165 (after which it no longer looks abnormal).
I don't think package versions should matter. Sorry to bother you, but could you tell me why this happens, or try to reproduce the issue and figure out why? I have tried but still cannot find what is wrong.
Hi, I had a look at the logs (you can find them here: https://github.com/rampasek/GraphGPS/blob/main/final-results.zip), and it seems to be the same behaviour as in my results. Possibly a better learning rate schedule could get rid of this "grokking" behaviour, which happens here consistently across 10 random seeds.
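For illustration, here is a minimal sketch of one such schedule adjustment: linear warmup followed by cosine decay, written in plain PyTorch. This is only an assumption about the kind of learning rate schedule that might smooth out the long plateau; the function name, placeholder model, and hyperparameters below are hypothetical and not taken from the GraphGPS configs.

```python
import math
import torch


def warmup_cosine_lambda(warmup_epochs: int, total_epochs: int):
    """Return a LambdaLR multiplier: linear warmup, then cosine decay.

    Illustrative sketch only; GraphGPS configures its scheduler via
    YAML, and the exact schedule used there may differ.
    """
    def lr_lambda(epoch: int) -> float:
        if epoch < warmup_epochs:
            # Linearly ramp the LR from near zero up to its base value.
            return (epoch + 1) / warmup_epochs
        # Cosine decay from the base value down toward zero.
        progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
        return 0.5 * (1.0 + math.cos(math.pi * progress))
    return lr_lambda


model = torch.nn.Linear(16, 2)  # hypothetical placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=warmup_cosine_lambda(warmup_epochs=10, total_epochs=200)
)

for epoch in range(200):
    # ... train one epoch here ...
    scheduler.step()
```

The idea is that a longer warmup or a slower decay keeps the effective learning rate in a range where the model can escape the low-accuracy plateau earlier, rather than sitting near-stalled for 160 epochs.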