Results about transferLearning_PPI experiments #20
Hi @hyp1231, below is my previous result log.

CUDA version:

Also, I just put the environment file here. Could it be an environment issue? Let's figure it out. If there is any update, please let me know. Thanks!
@yyou1996 Thank you for the fast and comprehensive reply. I'll try to upgrade the drivers on my devices and run the experiments as soon as possible.
Please also try
Thanks for your advice! When tuning

Details of

BTW, the experiments are carried out in

All in all, thanks so much for the kind reply and instructions. Just feel free to close this issue. :D
That's great! Good luck with your future experiments!
Hi @yyou1996, I wonder how to install
Hi @ha-lins, I try
Hi @yyou1996,

Thanks for the amazing work and the released code. They are really interesting.

However, I find it hard to reproduce the results of Table 5 on the PPI dataset, where GraphCL gets 67.88 ± 0.85 ROC-AUC. I followed the instructions in the README to run the script

cd ./bio; ./finetune.sh

with two versions of PyG. The results and details of result.log from my reproduction experiments are as follows:

ROC-AUC = 63.95 ± 1.05
In this experiment I had torch_geometric == 1.0.3, torch == 1.0.1, and all the code was left as the original.

ROC-AUC = 63.43 ± 0.86
In this experiment I had torch_geometric == 1.6.3, torch == 1.7.1, and I changed the code slightly following #14.

The results of my experiments are calculated from the rightmost column of result.log, which holds the test_acc_hard_list results, following Hu et al., ICLR 2020 [1]. I just want to make sure I haven't missed any important details needed to reproduce the results reported in the literature. Looking forward to your reply, thanks!

[1] Strategies for Pre-training Graph Neural Networks. Weihua Hu, Bowen Liu et al. ICLR 2020. arXiv.
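For what it's worth, the mean ± std reported above can be computed from the rightmost column of a run log with a few lines of stdlib Python. The log lines below are purely hypothetical (the real result.log layout may differ); the parsing idea is simply "take the last whitespace-separated field of each run line":

```python
import statistics

# Hypothetical result.log lines: one run per line, with the
# test_acc_hard ROC-AUC in the rightmost column.
log_lines = [
    "run 0  val_acc 0.661  test_acc_easy 0.655  test_acc_hard 0.6395",
    "run 1  val_acc 0.658  test_acc_easy 0.649  test_acc_hard 0.6290",
    "run 2  val_acc 0.672  test_acc_easy 0.660  test_acc_hard 0.6500",
]

# Rightmost whitespace-separated field of each line.
scores = [float(line.split()[-1]) for line in log_lines]

mean = statistics.mean(scores)
std = statistics.stdev(scores)  # sample standard deviation
print(f"ROC-AUC = {mean * 100:.2f} ± {std * 100:.2f}")
# → ROC-AUC = 63.95 ± 1.05 (for these made-up numbers)
```

Note that sample vs. population standard deviation (statistics.stdev vs. statistics.pstdev) gives different "± x" values, so which one the repo uses is worth confirming when comparing against the paper.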