MAP around 0.4 after 300 epochs, far less than results in your paper #22
Comments
I had the same experience. I changed some code for PyTorch 0.4.0, but the result on the mammal closure had only 0.4 MAP. Could someone who got a high MAP help me?
The current implementation is written for PyTorch 0.3.0. We are working on an update for the new version of PyTorch, but haven't had time to push it out yet. In the meantime, just use PyTorch 0.3.0.
I get similar results with PyTorch 0.3.0 if I split the data into training and test sets (80% training, 20% test), so I don't think the version of PyTorch is to blame. However, if I train on the whole dataset and then test on it as well, I get numbers that are close to those reported in the paper. Are those the numbers you report? Is that your "reconstruction" setting? It seems odd to me that you would test on the training set though...
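For concreteness, the 80/20 split mentioned above could look like the sketch below. The split strategy (a uniform random shuffle over edges) and the function name `split_edges` are my assumptions for illustration, not the repository's actual preprocessing:

```python
# Hedged sketch: randomly split a graph's edge list 80/20 into
# train/test sets. How the repo (or this commenter) actually split
# the transitive closure is not specified, so this is illustrative.
import random

def split_edges(edges, train_frac=0.8, seed=0):
    """Return (train, test) edge lists split at train_frac."""
    rng = random.Random(seed)           # fixed seed for reproducibility
    shuffled = edges[:]                 # don't mutate the caller's list
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Toy complete graph on 10 nodes: 45 undirected edges.
edges = [(u, v) for u in range(10) for v in range(u + 1, 10)]
train, test = split_edges(edges)
print(len(train), len(test))  # → 36 9
```

Evaluating MAP on the held-out `test` edges measures link prediction, which is a strictly harder task than reconstruction, consistent with the lower numbers reported in this thread.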
Correct, the reconstruction setting is simply how well your embedding recovers the original input graph. There is no need to split the dataset into train/test sets.
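A minimal sketch of what this reconstruction MAP could compute: for each node, rank all other nodes by embedding distance and take the average precision of its true neighbors. The function name, the toy data, and the use of Euclidean distance (the paper uses Poincaré distance) are all simplifying assumptions, not the repository's evaluation code:

```python
# Hedged sketch of reconstruction MAP: mean average precision of
# recovering each node's true neighbors from distance rankings.
import numpy as np

def reconstruction_map(embeddings, adjacency):
    """embeddings: (n, d) array; adjacency: {node: set of neighbors}."""
    ap_scores = []
    for u, neighbors in adjacency.items():
        if not neighbors:
            continue
        # Euclidean distance for simplicity; swap in the Poincaré
        # distance for the actual hyperbolic setting.
        dists = np.linalg.norm(embeddings - embeddings[u], axis=1)
        dists[u] = np.inf               # never rank a node against itself
        ranking = np.argsort(dists)     # closest first
        hits, precisions = 0, []
        for rank, v in enumerate(ranking, start=1):
            if v in neighbors:
                hits += 1
                precisions.append(hits / rank)
        ap_scores.append(sum(precisions) / len(neighbors))
    return float(np.mean(ap_scores))

# Toy example: two tight pairs; each node's only neighbor is its partner.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
adj = {0: {1}, 1: {0}, 2: {3}, 3: {2}}
print(reconstruction_map(emb, adj))  # → 1.0 (every neighbor ranked first)
```

Since the ranking is computed over the same edges the embedding was trained on, a well-fit embedding can score near 1.0 here, which is why reconstruction numbers run much higher than held-out link-prediction numbers.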
Thanks for the quick response. So does that mean that the reported numbers (0.927 MAP) are the reconstruction numbers?
Correct |
Hi, I just trained your code on PyTorch 0.4.0, on the mammals dataset, with your default hyperparameters, and I only get an embedding with MAP around 0.4 after 300 epochs, which is far less than the 0.927 reported in your paper. BTW, I only added 6 lines of code of the form t = int(t) because of the PyTorch version; the rest of the code remains the same. I am wondering how this could happen?