
About symmetric normalization of adjacency matrix #55

Closed
empty-id opened this issue Jan 12, 2020 · 1 comment

empty-id commented Jan 12, 2020

Hi, I noticed that in this PyTorch version of the code, the adjacency matrix is row-normalized rather than symmetrically normalized. However, its accuracy (82.5%) is higher than that of the TensorFlow version (81.6%). I also tried symmetrically normalizing the adjacency matrix in the PyTorch version, but the result dropped to 79.9%, while the TensorFlow version's result did not change after modifying the normalization. To summarize, these are the experiments I ran:

| Cora accuracy (%)       | TensorFlow | PyTorch |
| ----------------------- | ---------- | ------- |
| Symmetric normalization | 81.6       | 79.9    |
| Row normalization       | 81.6       | 82.5    |

Any idea why this happens?
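For reference, the two normalizations being compared can be sketched as follows. This is a minimal SciPy sketch, not the repo's own code; the function names are mine, and both variants add self-loops before normalizing, as in Kipf & Welling's GCN.

```python
import numpy as np
import scipy.sparse as sp

def row_normalize(adj):
    """Row normalization: D^{-1} (A + I), so each row sums to 1."""
    adj = adj + sp.eye(adj.shape[0])
    rowsum = np.asarray(adj.sum(1)).flatten()
    # Avoid division by zero for isolated nodes (no in-edges, no self-loop).
    d_inv = np.power(rowsum, -1.0, out=np.zeros_like(rowsum), where=rowsum > 0)
    return sp.diags(d_inv) @ adj

def sym_normalize(adj):
    """Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}."""
    adj = adj + sp.eye(adj.shape[0])
    rowsum = np.asarray(adj.sum(1)).flatten()
    d_inv_sqrt = np.power(rowsum, -0.5, out=np.zeros_like(rowsum), where=rowsum > 0)
    d_mat = sp.diags(d_inv_sqrt)
    return d_mat @ adj @ d_mat
```

Note that the row-normalized matrix is generally asymmetric, while the symmetric variant preserves the symmetry of the input adjacency matrix.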


empty-id commented Jan 12, 2020

I found some differences between the PyTorch and TensorFlow versions:

  • In layers.py, the PyTorch version should use Glorot initialization for the weights and zero initialization for the bias.
  • In models.py, the PyTorch version does not apply dropout (or sparse_dropout) to the input x first.

After modifying these two points, the PyTorch version now reaches ~82% with symmetric normalization.
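The two fixes above can be sketched together in a minimal PyTorch GCN. This is my own illustrative rewrite, not the repo's layers.py/models.py: it uses Glorot (Xavier) weight initialization with a zero-initialized bias, and applies dropout to the input features before the first graph convolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphConvolution(nn.Module):
    """Minimal GCN layer: adj @ x @ W + b (hypothetical, mirrors the fix described)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_features, out_features))
        nn.init.xavier_uniform_(self.weight)          # Glorot initialization for weights
        self.bias = nn.Parameter(torch.zeros(out_features))  # zero initialization for bias

    def forward(self, x, adj):
        return adj @ (x @ self.weight) + self.bias

class GCN(nn.Module):
    def __init__(self, nfeat, nhid, nclass, dropout=0.5):
        super().__init__()
        self.gc1 = GraphConvolution(nfeat, nhid)
        self.gc2 = GraphConvolution(nhid, nclass)
        self.dropout = dropout

    def forward(self, x, adj):
        # Dropout on the input x first, as in the TensorFlow version.
        x = F.dropout(x, self.dropout, training=self.training)
        x = F.relu(self.gc1(x, adj))
        x = F.dropout(x, self.dropout, training=self.training)
        return F.log_softmax(self.gc2(x, adj), dim=1)
```

Here `adj` is assumed to be the (dense or sparse) normalized adjacency matrix produced by whichever normalization is being tested.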
