Scaling to larger datasets #8
Comments
Thanks for your awesome work! I am trying to apply GRACE to larger datasets, but according to your code, training is conducted in a full-batch way, which hinders scalability. Your paper mentions that eight GPUs are used; could you kindly share how you implemented this? As far as I know, PyG only supports multi-graph distributed computation. Any other suggestions would also be much appreciated. Looking forward to your reply!
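For reference, one common way to sidestep full-batch training is neighbor sampling. Below is a minimal sketch of what a mini-batched GRACE-style step might look like with PyG's `NeighborLoader` — this is not the authors' code: the encoder, augmentation probabilities, temperature, and the simplified in-batch InfoNCE loss are all placeholders, and `data` is assumed to be a loaded `torch_geometric.data.Data` object.

```python
# Minimal sketch: mini-batched GRACE-style training with NeighborLoader.
# NOT the authors' code -- hyperparameters, augmentations, and the loss
# are simplified placeholders. `data` is assumed to be a loaded
# torch_geometric.data.Data with `x` and `edge_index`.
import torch
import torch.nn.functional as F
from torch_geometric.loader import NeighborLoader
from torch_geometric.nn import GCNConv
from torch_geometric.utils import dropout_edge

class Encoder(torch.nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, hid_dim)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

def augment(batch, p_edge=0.3, p_feat=0.2):
    # One view: randomly drop edges and mask feature columns
    # (applied to the sampled subgraph, itself a simplification).
    edge_index, _ = dropout_edge(batch.edge_index, p=p_edge)
    mask = torch.rand(batch.x.size(1), device=batch.x.device) < p_feat
    x = batch.x.clone()
    x[:, mask] = 0
    return x, edge_index

loader = NeighborLoader(data, num_neighbors=[10, 10],
                        batch_size=1024, shuffle=True)
encoder = Encoder(data.num_features, 128)
opt = torch.optim.Adam(encoder.parameters(), lr=5e-4)

for batch in loader:
    x1, ei1 = augment(batch)
    x2, ei2 = augment(batch)
    # Seed nodes come first in a NeighborLoader batch; contrast only them,
    # with the same node across the two views as the positive pair.
    z1 = F.normalize(encoder(x1, ei1)[:batch.batch_size], dim=1)
    z2 = F.normalize(encoder(x2, ei2)[:batch.batch_size], dim=1)
    logits = z1 @ z2.t() / 0.5              # temperature 0.5 (placeholder)
    labels = torch.arange(z1.size(0), device=z1.device)
    loss = F.cross_entropy(logits, labels)  # simplified in-batch InfoNCE
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that contrasting only the seed nodes against in-batch negatives is a simplification of GRACE's full NT-Xent objective, which uses both inter- and intra-view negatives over all nodes.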
Thanks for your interest in our work! We actually use 8 GPUs in parallel, but each GPU handles a single dataset. As for multi-GPU support within one dataset, I think it is fairly easy to adopt existing libraries 😄
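Since each GPU handles one dataset, the parallelism described above can be plain process-level fan-out. A minimal sketch, assuming a `train.py` entry point with a `--dataset` flag (both the flag and the dataset names are hypothetical placeholders, not necessarily this repo's interface):

```python
# Minimal sketch (an assumption, not the repo's actual tooling): launch
# one training process per GPU, one dataset each, then wait for all runs.
# `train.py` and its `--dataset` flag are hypothetical placeholders.
import os
import subprocess

datasets = ['Cora', 'CiteSeer', 'PubMed', 'DBLP']  # placeholder names

procs = []
for gpu, name in enumerate(datasets):
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))  # pin one GPU
    procs.append(subprocess.Popen(['python', 'train.py',
                                   '--dataset', name], env=env))

for p in procs:
    p.wait()  # block until every run finishes
```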
Thanks for your kind reply.
That's a good question. Theoretically, every hidden dimension could be a tunable hyperparameter, so doubling the size of the hidden vectors is acceptable in my opinion. For a fair comparison with other models, though, you need to make sure the encoder part is the same. As for the performance drop you mentioned, it may be attributable to the relatively small size of the dataset.
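As a toy illustration of why encoder parity matters here, widening the hidden vectors changes the parameter budget even when the architecture is held fixed; the dimensions below are placeholders, not values from the repo:

```python
# Toy illustration (placeholder dims, not from the repo): the same
# two-layer GCN architecture, with hidden width swept as a hyperparameter.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class Encoder(torch.nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, hid_dim)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

for hid_dim in (128, 256):  # 256 = "doubled" hidden vectors
    enc = Encoder(in_dim=1433, hid_dim=hid_dim)  # 1433: Cora's feature dim
    n_params = sum(p.numel() for p in enc.parameters())
    print(f'hid_dim={hid_dim}: {n_params:,} parameters')
```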