
Question about the unsupervised_TU gsimclr.py #40

Closed
scottshufe opened this issue Dec 8, 2021 · 6 comments

@scottshufe

Hi GraphCL team, thanks for your excellent work.

I have some questions about the loss function in unsupervised_TU/gsimclr.py:
In your paper, eq. (3) states that the positive and negative pairs are formed from augmented views, but in the code it looks like the original sample and one augmentation form a positive pair. I don't understand the difference between these two schemes and would appreciate any suggestions or explanation. Thank you in advance.
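
For reference, here is a minimal sketch of how I read the SimCLR-style (NT-Xent) objective; the function name, temperature value, and tensor shapes are my own assumptions, not necessarily what gsimclr.py uses. In the "single augmentation" case, z1 would be the embeddings of the original graphs and z2 the embeddings of one augmented view:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.2):
    """NT-Xent contrastive loss between two batches of graph embeddings.

    z1, z2: tensors of shape (batch_size, dim). Row i of z1 and row i of z2
    form a positive pair; every other row of z2 serves as a negative for z1[i].
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)

    # cosine-similarity matrix between the two views, scaled by temperature
    sim = torch.mm(z1, z2.t()) / temperature

    # positives lie on the diagonal; cross-entropy pulls them above the negatives
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, labels)
```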

@yyou1996
Collaborator

Hi @scottshufe,

Sorry for the late reply. Double augmentations are implemented for all experiments except unsupervised_TU, due to an implementation issue at the time (e.g. please refer to https://github.com/Shen-Lab/GraphCL/tree/master/semisupervised_TU).

@scottshufe
Author

scottshufe commented Dec 22, 2021

Hi Mr. You @yyou1996, thanks for your reply. I think I have implemented the double-augmentation version, but I still need to figure out the practical differences between a single augmentation and double augmentations... If you have any thoughts on this question, I would love to hear your opinion 😄
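
Concretely, the way I implemented it, the only difference is which two views are fed to the contrastive loss. A rough sketch (encoder and augment are placeholders for whatever model and augmentation function are used, and nt_xent is the loss sketched above):

```python
def step_single_aug(encoder, augment, data):
    # single augmentation: contrast the original graph with one augmented view
    z_orig = encoder(data)
    z_aug = encoder(augment(data))
    return nt_xent(z_orig, z_aug)

def step_double_aug(encoder, augment, data):
    # double augmentation (paper eq. (3)): contrast two independently augmented views
    z1 = encoder(augment(data))
    z2 = encoder(augment(data))
    return nt_xent(z1, z2)
```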

@CynthiaLaura6gf

Thanks for your excellent code. I have a question about the loss function: what is the difference between the loss function in the class simclr and the loss function in the class GcnInfomax?

@scottshufe
Author

> Thanks for your excellent code. I have a question about the loss function: what is the difference between the loss function in the class simclr and the loss function in the class GcnInfomax?

I think the loss in GcnInfomax is the same as in DGI (ICLR 2019), which generalizes Deep InfoMax (ICLR 2019) from images to graphs. It aims to maximize the mutual information between local patches and the global graph representation, whereas GraphCL (or simclr) aims to maximize the mutual information between the augmented views.
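
My rough understanding of the GcnInfomax/DGI-style objective in sketch form (class and function names are my own, not the repo's): instead of contrasting two views of a graph, it scores (node embedding, graph summary) pairs against corrupted ones:

```python
import torch
import torch.nn as nn

class LocalGlobalDiscriminator(nn.Module):
    """Bilinear critic scoring (node embedding, graph summary) pairs, as in DGI."""
    def __init__(self, dim):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)

    def forward(self, node_emb, graph_summary):
        # node_emb: (num_nodes, dim); graph_summary: (dim,) broadcast to each node
        summary = graph_summary.expand_as(node_emb)
        return self.bilinear(node_emb, summary).squeeze(-1)

def local_global_mi_loss(disc, node_emb, corrupted_emb, graph_summary):
    # positives: real node embeddings paired with their own graph summary
    pos = disc(node_emb, graph_summary)
    # negatives: corrupted (e.g. feature-shuffled) node embeddings with the same summary
    neg = disc(corrupted_emb, graph_summary)
    logits = torch.cat([pos, neg])
    labels = torch.cat([torch.ones_like(pos), torch.zeros_like(neg)])
    return nn.functional.binary_cross_entropy_with_logits(logits, labels)
```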

@yyou1996
Collaborator

@scottshufe I feel that on small datasets the two schemes differ little, while things might change on large-scale datasets. Whether the influence is positive or negative depends on whether the augmentation is reasonable for the downstream task.

@scottshufe
Author

Got it. Thanks again, Mr. You. Maybe I should read more GCL articles and run some experiments to better understand these questions.
