Bug report #4
I have fixed this bug just now; you can try again.
Thx! In addition, I have some questions and suggestions.

Q1: However, when I run […]

Q2: How is the GraphCL loss implemented in […]?

Q3: What is […]? I hope you can give a more detailed comment, thank you!

S1: For […]

S2: The installation of cortex-DIM is omitted from the required env yaml. Since […]
Hi @flyingtango, thanks for the detailed feedback. I will try to double-check things within this week.
Hi @flyingtango,

Q1. I fixed the bugs. It seems the node-dropping ratio was incorrect previously, which stands out in random2 (which samples only from dropping nodes & subgraph) compared with random4 --> value overflow --> NaN values and gradients.

Q2. @yongduosui, can you give some comments on this?

Q3. Would you mind pointing out the location of that code? Since the implementation was a division of labour, I would like to find the right person to address the question.

S1. Yes, that's right. Since we did experiments in a variety of settings, I first referred to the SOTA in each setting (see the acknowledgement part of each exp) --> then implemented our version --> so the environments of the experiments are separate. I am sorry for the inconvenience.

S2. Sorry for the mistake. It should be in the unsupervised_TU dir rather than the semisupervised_TU dir. I have already moved it to the right place.
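For Q2, while @yongduosui weighs in, here is a minimal sketch of the NT-Xent-style contrastive objective that GraphCL is built on; the function name, shapes, and default temperature are illustrative assumptions, not the repository's exact code:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # z1, z2: [N, D] embeddings of two augmented views of the same N graphs.
    # (Sketch of the NT-Xent family of losses, not the repo's implementation.)
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # [2N, D], unit norm
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))                   # exclude self-similarity
    # The positive for row i is the other view of the same graph (i + N mod 2N).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```

Minimizing this pulls the two augmented views of each graph together while pushing them away from every other graph in the batch.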
@flyingtango [1] Petar Veličković, William Fedus, William L. Hamilton, Pietro Liò, Yoshua Bengio, and R. Devon Hjelm. Deep graph infomax. arXiv preprint arXiv:1809.10341, 2018.
Hi @yyou1996, thanks for your bug fix! I also met Q1 previously. Now, after the update, is the augmentation ratio the dropping ratio or the remaining one?
@ha-lins Yes, the augmentation ratio is the dropping ratio rather than the remaining ratio.
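To make the convention concrete, a minimal NumPy sketch of a node-dropping step under that interpretation (the function and argument names are assumptions for illustration, not the repo's code); the clamp also guards against the out-of-range sampling behind the NaNs in Q1:

```python
import numpy as np

def drop_nodes(num_nodes, drop_percent=0.2, rng=None):
    # drop_percent is the fraction of nodes REMOVED (the dropping ratio),
    # not the fraction kept; returns the indices of the surviving nodes.
    rng = np.random.default_rng() if rng is None else rng
    # Clamp so a mis-set ratio can never sample more nodes than exist --
    # the kind of overflow that produced NaN values and gradients in Q1.
    num_drop = min(int(num_nodes * drop_percent), num_nodes - 1)
    dropped = rng.choice(num_nodes, size=num_drop, replace=False)
    return np.setdiff1d(np.arange(num_nodes), dropped)
```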
Hi Yuning,
There are some errors in the code.
When I run `mask` and `edge` in `unsupervised_Cora_Citeseer` using `python -u execute.py --dataset citeseer --aug_type mask --drop_percent 0.20 --seed 39 --save_name cite_best_dgi.pkl --gpu 0`, there is an unassigned-variable error as follows:

```
Traceback (most recent call last):
  File "execute.py", line 189, in <module>
    sparse, None, None, None, aug_type=aug_type)
  File "/anaconda3/envs/graph/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/GraphCL/unsupervised_Cora_Citeseer/models/dgi.py", line 51, in forward
    ret1 = self.disc(c_1, h_0, h_2, samp_bias1, samp_bias2)
UnboundLocalError: local variable 'c_1' referenced before assignment
```
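For reference, the failure pattern reduces to a few lines: `c_1` looks to be assigned only inside some `aug_type` branches of `forward`, so an `aug_type` that matches none of them reaches the discriminator call with `c_1` unbound (a hypothetical reduction, not the exact `dgi.py` code):

```python
def forward(aug_type):
    if aug_type == 'edge':
        c_1 = 'summary from edge-augmented view'
    elif aug_type == 'subgraph':
        c_1 = 'summary from subgraph view'
    # 'mask' matches no branch, so c_1 is never bound ...
    return c_1  # ... and this line raises UnboundLocalError

forward('mask')  # UnboundLocalError: local variable 'c_1' referenced before assignment
```

The usual fix is to handle every supported `aug_type` explicitly (or raise a clear error in an `else` branch) before `c_1` is used.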
Here are my env versions: `python==3.7`, `torch==1.5.0`, `torch-geometric==1.5.0`. I hope you will notice and fix this.