Questions about HSN tutorial #130

Open · georg-bn opened this issue Jun 25, 2023 · 1 comment

@georg-bn (Contributor)

I have a couple of questions about, and bugs found in, the HSN tutorial (which might therefore also affect other tutorials in the simplicial domain).

  1. " self.layers = layers\n",
    should be self.layers = torch.nn.ModuleList(layers), so that the parameters get properly registered.
  2. " return torch.softmax(logits, dim=-1)"
    should probably not have softmax, as later binary crossentropy on logits is used:
    " loss = torch.nn.functional.binary_cross_entropy_with_logits(\n",
  3. "Since our task will be node classification, we must retrieve an input signal on the nodes. The signal will have shape $n_\\text{nodes} \\times$ in_channels, where in_channels is the dimension of each cell's feature. Here, we have in_channels = channels_nodes $ = 34$. This is because the Karate dataset encodes the identity of each of the 34 nodes as a one hot encoder."
    "Here, we have in_channels = channels_nodes $ = 34$. This is because the Karate dataset encodes the identity of each of the 34 nodes as a one hot encoder." This seems to be incorrect as we get 2 dim features:
    "There are 34 nodes with features of dimension 2.\n"
    and they are eigenvectors from the graph as defined in https://github.com/pyt-team/TopoNetX/blob/4c47ec24047a7af83d5a249a79c1945e7043ceea/toponetx/datasets/graph.py#L38 .
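
For points 1 and 2 together, here is a minimal sketch of what the corrected model class could look like, in plain torch. The class name HSNNetwork, the layer forward signature, and the final linear head are assumptions made for illustration; only the ModuleList wrapping and the raw-logit output are the fixes actually proposed above.

```python
import torch


class HSNNetwork(torch.nn.Module):
    """Hypothetical stand-in for the tutorial's model class."""

    def __init__(self, layers, channels, out_channels=1):
        super().__init__()
        # Point 1: ModuleList registers each layer's parameters, so
        # model.parameters() reaches them and the optimizer updates them.
        # A plain Python list leaves them unregistered.
        self.layers = torch.nn.ModuleList(layers)
        self.linear = torch.nn.Linear(channels, out_channels)

    def forward(self, x_0, incidence_1, adjacency_0):
        for layer in self.layers:
            x_0 = layer(x_0, incidence_1, adjacency_0)
        # Point 2: return raw logits; binary_cross_entropy_with_logits
        # applies the sigmoid internally, so a softmax here is both
        # redundant and wrong for that loss.
        return self.linear(x_0).squeeze(-1)
```

With the plain-list version, the HSN layers never appear in model.parameters(), so training runs without error while silently leaving those layers frozen.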
@georg-bn (Contributor, Author)

Some more issues with the training loop:

  1. If, as suggested in point 2 above, the softmax is removed, then the checks y_hat > 0.5 need to be replaced by y_hat > 0, since the decision boundary for raw logits sits at 0. Also, when using binary_cross_entropy_with_logits it is probably most convenient to let the model output a 1D vector of logits instead of the 2D output used currently.
  2. There's a crucial typo: y_pred[-len(y_train) :] should instead be y_pred[: len(y_train)] here (see the sketch after this list):
     accuracy = (y_pred[-len(y_train) :] == y_train).all(dim=1).float().mean().item()
