
Results about transferLearning_PPI experiments #20

Closed
hyp1231 opened this issue Apr 13, 2021 · 7 comments
@hyp1231

hyp1231 commented Apr 13, 2021

Hi @yyou1996,

Thanks for the amazing work and the released code; it's really interesting.

However, I find it hard to reproduce the results of Table 5 on the PPI dataset, where GraphCL gets 67.88 ± 0.85 ROC-AUC. I followed the instructions in the README and ran the script cd ./bio; ./finetune.sh under two versions of PyG.

The results and the corresponding details of result.log from my reproduction experiments are as follows:

ROC-AUC = 63.95 ± 1.05

In this experiment, I had torch_geometric == 1.0.3 and torch == 1.0.1, and all the code was unmodified.

0 0.6488228587091137 0.6437029455712953
1 0.6453625892746733 0.637906301310777
2 0.6653588896536515 0.6491496412192561
3 0.6712478235166237 0.6609652991518822
4 0.6551347790238357 0.648166151211857
5 0.6496970788807328 0.6285455615711776
6 0.6377575006466477 0.6259403911657462
7 0.6356370096455746 0.629560479239631
8 0.6442826009871101 0.6378061247726416
9 0.6421840662455275 0.6328729467542645

ROC-AUC = 63.43 ± 0.86

In this experiment, I had torch_geometric == 1.6.3 and torch == 1.7.1, and I changed the code slightly following #14.

0 0.6442808014337911 0.6321503395158243
1 0.644493673207877 0.6267153949242326
2 0.6453095524067848 0.6349091275301133
3 0.6438850211437335 0.6420234098635493
4 0.6411963663200242 0.6328391671224951
5 0.66046061449877 0.6512654813973853
6 0.6473159630814896 0.6400854279154624
7 0.6308149754938717 0.6191641371576463
8 0.6486396922222195 0.6367294139925784
9 0.6357155854879776 0.6267330172938124

The results of my experiments are calculated from the rightmost column of result.log, which holds the test_acc_hard_list values following Hu et al., ICLR 2020 [1]. I just want to make sure whether I have missed some important detail needed to reproduce the results presented in the literature. Looking forward to your reply, thanks!

[1] Weihua Hu, Bowen Liu, et al. Strategies for Pre-training Graph Neural Networks. ICLR 2020. arXiv.
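
For clarity, this is how I aggregate the numbers above (a minimal sketch; it assumes each result.log line reads "<run_idx> <test_easy> <test_hard>", as in the logs shown here):

# Aggregate ROC-AUC over the runs in result.log.
# The rightmost column is test_acc_hard_list, the one reported above.
import statistics

with open("result.log") as f:
    hard = [float(line.split()[-1]) for line in f if line.strip()]

mean = statistics.mean(hard) * 100
std = statistics.pstdev(hard) * 100  # population std over the runs
print(f"ROC-AUC = {mean:.2f} ± {std:.2f}")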

@yyou1996
Collaborator

yyou1996 commented Apr 13, 2021

Hi @hyp1231,

Below is my previous result log:

0 0.6695203812316668 0.6639126131844449
1 0.6689575398079525 0.6578154819531342
2 0.688679302806011 0.6857136163297752
3 0.6836998338581497 0.6732908715258754
4 0.69713972798318 0.6849932255983389
5 0.6776616371025179 0.6742576016349859
6 0.678085934406664 0.6687597803949993
7 0.6714889003635183 0.6592118772987136
8 0.6725745492007931 0.6691230880465453
9 0.6807927650869883 0.669925179267206

CUDA version:

NVIDIA-SMI 450.51.05 Driver Version: 450.51.05 CUDA Version: 11.0

Also, I just put the environment file here.

Could it be an environment issue? Let's figure it out. If there's any update, please let me know. Thanks!

@hyp1231
Author

hyp1231 commented Apr 13, 2021

@yyou1996 Thank you for the fast and comprehensive reply. I'll try to upgrade the driver on my devices and rerun the experiments as soon as possible.

@yyou1996
Collaborator

Please also try lr=1e-2 and lr=1e-4 here. Due to the potential environment issue, a little bit of tuning is likely to be required.
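
Something like the following could drive that sweep (a hypothetical sketch; it assumes finetune.py exposes a --lr argument and omits whatever other arguments finetune.sh normally passes):

# Hypothetical learning-rate sweep; assumes finetune.py accepts --lr.
import subprocess

for lr in ["1e-2", "1e-3", "1e-4"]:
    subprocess.run(["python", "finetune.py", "--lr", lr], check=True)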

@hyp1231
Author

hyp1231 commented Apr 14, 2021

Thanks for your advice!

With lr=1e-4, the results come to 67.34 ± 1.15, which is close to the results in the literature.

Details of result.log:

0 0.6725159200324586 0.66603653940776
1 0.6657212107373389 0.6568356122380342
2 0.6687794432662423 0.6648277830180185
3 0.6985625261307801 0.68720311294303
4 0.6759798551079974 0.6692231385248839
5 0.6765493005622117 0.6736804953394853
6 0.7022692012839785 0.6951517668704265
7 0.6763894721533189 0.6615853768109135
8 0.6853784068489189 0.683059879327427
9 0.6850071688755396 0.6764286988851322

BTW, these experiments were carried out with NVIDIA-SMI 418.67, Driver Version 418.67, CUDA Version 10.1. Since upgrading is difficult, I have not tested on CUDA 11.0 yet and still have no idea whether it's an environment issue. If I find devices available for testing in the future, I'll update the results here.

All in all, thanks so much for the kind reply and instructions. Feel free to close this issue. :D

@yyou1996
Collaborator

That's great! Good luck with your future experiments!

@ha-lins

ha-lins commented Apr 14, 2021

Hi @yyou1996,

I wonder how to install torch==1.0.1 with CUDA 11.0 or CUDA 10.1. I found that the conda route only supports CUDA 9.0 and 10.0. Thanks!

# CUDA 9.0
conda install pytorch==1.0.1 torchvision==0.2.2 cudatoolkit=9.0 -c pytorch

# CUDA 10.0
conda install pytorch==1.0.1 torchvision==0.2.2 cudatoolkit=10.0 -c pytorch

@yyou1996
Collaborator

Hi @ha-lins,

I tried torch.version.cuda and it outputs 9.0.176. I will defer this problem to others since I am no expert in environment config...
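
For anyone comparing setups, these are the standard PyTorch checks (nothing repo-specific):

# Standard PyTorch environment checks: the CUDA toolkit the wheel was
# built against versus whether the installed driver can actually run it.
import torch

print(torch.__version__)          # e.g. 1.0.1
print(torch.version.cuda)         # build-time CUDA, e.g. 9.0.176
print(torch.cuda.is_available())  # True if the driver/GPU are usable
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))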
