Cannot reproduce best results #3
Hi, thanks for your interest in our work. The experiments in the paper were done using the TensorFlow implementation; you can find the link to it in README.md.

Regarding regularization: by default, the current version of the notebook uses a slightly different GCN implementation (called ImprovedGCN). This version doesn't use batch norm, and I empirically found it to work well with the weaker weight decay (the 1e-5 notebook default). The architecture described in the paper (and the one used in the original TF implementation) is based on vanilla GCN with batch normalization. For that model you should use the regularization strength from the paper (lambda = 1e-2).

To be completely honest, I haven't thoroughly tested the PyTorch implementation in this repository - I just created it in the process of learning PyTorch. I hope that I haven't introduced any serious bugs in the process of migrating from TF. Please let me know if you still cannot reproduce the results using the vanilla GCN, and then I will have a look into the code.
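To make the distinction concrete, here is a minimal sketch (not the repository's actual code; the layer class and dimensions are hypothetical) of a single GCN layer where batch normalization can be toggled, paired with the stronger weight decay the paper prescribes for the vanilla variant:

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: A_hat @ X @ W, optionally batch-normalized."""

    def __init__(self, in_dim, out_dim, use_batch_norm=False):
        super().__init__()
        # Bias is redundant when batch norm follows the linear map.
        self.linear = nn.Linear(in_dim, out_dim, bias=not use_batch_norm)
        self.bn = nn.BatchNorm1d(out_dim) if use_batch_norm else None

    def forward(self, adj, x):
        h = adj @ self.linear(x)  # propagate features over the graph
        if self.bn is not None:
            h = self.bn(h)
        return torch.relu(h)

# Hypothetical usage: vanilla GCN (batch norm on) trained with the paper's
# regularization strength, passed as Adam's weight_decay.
n, d = 8, 16
adj = torch.eye(n)              # stand-in for a normalized adjacency matrix
x = torch.randn(n, d)
vanilla = SimpleGCNLayer(d, 32, use_batch_norm=True)
opt = torch.optim.Adam(vanilla.parameters(), lr=1e-3, weight_decay=1e-2)
out = vanilla(adj, x)
print(out.shape)  # torch.Size([8, 32])
```

For the ImprovedGCN-style variant, the sketch would instead use `use_batch_norm=False` and `weight_decay=1e-5`.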
After fixing the bug that you mentioned in #2, everything seems to work as expected.
WOW, great!
Hi, I am trying to reproduce the best result on the mag_cs dataset, which reaches 50.2 in the paper. I use the same settings and early stopping as described in the paper, but the best result I achieve is 46.0, and the NMI score is generally around 43.

Besides, the weight_decay value in interactive.ipynb is 1e-5 by default, while in the paper the regularization strength is lambda = 1e-2 (which resulted in poor performance for me). Which one should I use to reproduce the best result on the CS dataset?

Any suggestions?