Thanks for pointing it out. I double-checked the original (admittedly messy) code in unsupervised_TU/aug.py and you are right: it is not entirely the same implementation as semisupervised_TU. Since unsupervised_TU was run much earlier than the others, for proof-of-concept experiments, the implementation drifted somewhat, leading to this inconsistency; the remaining semisupervised_TU, transfer_learning and adversarial_robustness experiments should be consistent. As a fix, you can replace line 343 with line 342 in https://github.com/Shen-Lab/GraphCL/blob/master/unsupervised_TU/aug.py, which makes the implementations match, and I expect the results to be similar.
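For readers following along, a minimal standalone sketch of what the corrected augmentation would look like (this paraphrases the pattern in aug.py rather than copying it; the function signature and the `rng` parameter are my own, and it operates on a bare NumPy `edge_index` array instead of a PyG `Data` object):

```python
import numpy as np

def permute_edges(edge_index, node_num, aug_ratio, rng=None):
    """Drop a fraction of edges and add the same number of random edges.

    edge_index: (2, E) integer array of endpoints; returns a new (2, E) array.
    Sketch of the 'line 342' behavior: dropped edges are replaced by idx_add.
    """
    rng = np.random.default_rng(rng)
    edge_num = edge_index.shape[1]
    permute_num = int(edge_num * aug_ratio)
    edges = edge_index.T  # (E, 2) rows of (src, dst)
    # Keep a random subset of the original edges...
    keep = rng.choice(edge_num, edge_num - permute_num, replace=False)
    # ...and add the same number of random new edges (this is the part the
    # buggy version computes but never uses).
    idx_add = rng.integers(0, node_num, size=(permute_num, 2))
    new_edges = np.concatenate((edges[keep], idx_add), axis=0)
    return new_edges.T
```

With this version the number of edges is preserved, which is what the semisupervised_TU implementation does as well.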
Dear Authors,
The permute_edges function in unsupervised_TU/aug.py does not add edges; the "idx_add" variable is computed but never used.
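To make the reported behavior concrete, here is a hypothetical minimal reproduction of the pattern (names and signature are mine, not the repo's): `idx_add` is drawn but never concatenated, so edges are only dropped and the edge count shrinks.

```python
import numpy as np

def permute_edges_drop_only(edge_index, node_num, aug_ratio, seed=0):
    """Buggy pattern: idx_add is created but unused, so no edges are added."""
    rng = np.random.default_rng(seed)
    edge_num = edge_index.shape[1]
    permute_num = int(edge_num * aug_ratio)
    idx_add = rng.integers(0, node_num, size=(permute_num, 2))  # never used!
    keep = rng.choice(edge_num, edge_num - permute_num, replace=False)
    # Edge count shrinks from E to E - permute_num instead of staying at E.
    return edge_index.T[keep].T
```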
GraphCL/unsupervised_TU/aug.py
Line 330 in aedd92a
I think your implementation in semisupervised_TU is correct, so may I ask why there are different implementations? Why not use a single set of augmentation implementations and import it in the different tasks?
GraphCL/semisupervised_TU/pre-training/tu_dataset.py
Line 211 in aedd92a