Out of Memory on Pubmed Dataset #3
I tried to run the released execute.py on Pubmed. However, it seems to take 19.25 GB during back-propagation. Is this the correct behavior? Is there any way to get around this problem and replicate the number reported in the paper?
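For anyone wanting to confirm the footprint on their own machine, below is a minimal sketch (not from the repository) of how one might measure peak GPU memory around a single training step using PyTorch's built-in counters; `model`, `features`, `adj`, `loss_fn`, and `optimiser` are placeholder names, not the actual objects built in execute.py.

```python
import torch

def report_peak_gpu_memory(model, features, adj, loss_fn, optimiser):
    # Placeholder objects standing in for whatever execute.py constructs for Pubmed.
    optimiser.zero_grad()
    loss = loss_fn(model(features, adj))
    loss.backward()   # the back-propagation step reported above to need ~19 GB
    optimiser.step()

    # Peak memory allocated by tensors since the start of the process
    # (torch.cuda.max_memory_allocated is available from PyTorch 1.0 onward).
    peak_gb = torch.cuda.max_memory_allocated() / 1024 ** 3
    print(f"peak GPU memory so far: {peak_gb:.2f} GB")
```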
Did you reduce the feature size to 256, as the paper reports?
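For context, the suggested fix is a one-line hyperparameter change. A hedged sketch of what it might look like, assuming the representation size is exposed as a `hid_units`-style variable in execute.py (the exact name may differ):

```python
# Sketch of the Pubmed hyperparameter change; `hid_units` is an assumed
# variable name -- check how execute.py actually exposes the representation size.
hid_units = 256   # reduced from 512, as reported in the paper, to fit Pubmed in memory

# The encoder / DGI model is then built with the smaller dimensionality, e.g.:
# model = DGI(ft_size, hid_units, nonlinearity)
```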
Sorry for being stupid; I didn't change it to 256. After changing it to 256, the result I get on Pubmed is:

Again, thank you for sharing the code! It would also be very nice if you could share the implementation of DGI on Reddit (in TensorFlow or something; whatever works).
Is your result single-run, or averaged over multiple runs? I'm not ruling anything out, but it could always be due to PyTorch versions.
This is averaged over 10 runs, and I am using PyTorch 1.0.
You could've gotten lucky -- try 50 runs, as described in the paper. But yeah, it could well be the PyTorch version; the one I used for the experiments is

In either case, thanks for taking the effort to re-run this experiment and for reporting the outcome!
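For completeness, here is a minimal sketch of averaging the downstream accuracy over several seeds, as suggested above; `run_once` is a hypothetical stand-in for the training-plus-evaluation loop in execute.py, not a function from the repository.

```python
import numpy as np
import torch

def run_once(seed):
    """Hypothetical stand-in: train DGI, fit the logistic-regression readout,
    and return the Pubmed test accuracy for one run with the given seed."""
    torch.manual_seed(seed)
    np.random.seed(seed)
    # ... the training and evaluation from execute.py would go here ...
    return float("nan")  # replace with the real test accuracy

n_runs = 50  # the paper averages over 50 runs
accs = np.array([run_once(seed) for seed in range(n_runs)])
print(f"Pubmed accuracy over {n_runs} runs: {accs.mean():.4f} +/- {accs.std():.4f}")
```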
Great! I am closing this. |