
A question about the learning rate. #22

Open

johannwyh opened this issue May 25, 2021 · 1 comment

@johannwyh

Hello!

I have a question about the learning rate. As stated in the appendix of the GraphSAGE paper, and as adopted by many other works, the learning rate is usually set to something like 1e-2, and the input features are usually normalized.

However, in your work the learning rate is set to 0.7, which is surprisingly high, and the input features are not normalized either. When I reset the learning rate to a common value and train with normalized features, the model only converges to extremely poor performance.

This issue confuses me a lot. Could you help explain a bit?
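
For concreteness, here is a rough sketch of the two configurations I compared. The model, data shapes, and optimizer choice below are placeholders, not this repository's actual code:

```python
import torch
import torch.nn as nn

# Placeholders only -- not this repository's actual model, data, or optimizer.
features = torch.rand(2708, 1433)        # stand-in for the raw input features
model = nn.Linear(features.size(1), 7)   # stand-in for the GNN

# Setting used in this repo (as I understand it): raw features, lr = 0.7.
optimizer_repo = torch.optim.SGD(model.parameters(), lr=0.7)
x_raw = features                         # features used as-is

# Common GraphSAGE-style setting I tried instead: row-normalized features
# with lr = 1e-2 -- this is the configuration that converges very poorly.
row_sum = features.sum(dim=1, keepdim=True).clamp(min=1e-12)
x_norm = features / row_sum
optimizer_common = torch.optim.Adam(model.parameters(), lr=1e-2)
```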

@dongjinkun

@johannwyh
Hello!
Have you solved this problem?

I am currently facing this issue as well.
