Is there any way to convert a PyTorch Tensor adjacency matrix into a pytorch_geometric Data object while allowing backprop? #1511
It is correct that you lose gradients that way. In order to backpropagate through sparse matrices, you need to compute both the edge indices and the edge weights from the dense adjacency matrix. In code, this would look as follows:

```python
edge_index = (adj > 0).nonzero().t()
row, col = edge_index
edge_weight = adj[row, col]
self.conv(x, edge_index, edge_weight)
```
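To see why this preserves gradients, here is a minimal self-contained sketch (the adjacency values are arbitrary): the indices themselves carry no gradient, but `adj[row, col]` is advanced indexing, so the extracted weights stay on the autograd graph and a loss on them backpropagates into `adj`.

```python
import torch

# Toy dense adjacency matrix with gradients enabled.
adj = torch.tensor([[0.0, 0.5],
                    [0.3, 0.0]], requires_grad=True)

edge_index = (adj > 0).nonzero().t()  # shape [2, num_edges], no gradient needed
row, col = edge_index
edge_weight = adj[row, col]           # differentiable view into adj

# Any loss computed from edge_weight backpropagates into adj.
edge_weight.sum().backward()
```

After `backward()`, `adj.grad` is 1 exactly at the nonzero entries and 0 elsewhere.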
Using batch mode, would this be done the same way? I'm having trouble understanding the sparse storage mechanisms.
Nearly:

```python
adj = torch.randint(0, 2, (num_graphs, n, n))  # random 0/1 adjacency per graph
offset, row, col = (adj > 0).nonzero().t()
edge_weight = adj[offset, row, col]
row = row + offset * n
col = col + offset * n
edge_index = torch.stack([row, col], dim=0)
x = x.view(num_graphs * n, num_feats)
batch = torch.arange(0, num_graphs).view(-1, 1).repeat(1, n).view(-1)
```

Here, we combine the node dimension and the batch dimension, so that separate graphs are represented as a single "super graph".
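As a sanity check, the batched recipe above can be run end-to-end on a tiny example (sizes are arbitrary); the final assertion verifies the "super graph" property that every edge stays inside the node block of its own graph:

```python
import torch

num_graphs, n, num_feats = 2, 3, 4
adj = torch.randint(0, 2, (num_graphs, n, n)).float()
x = torch.randn(num_graphs, n, num_feats)

offset, row, col = (adj > 0).nonzero().t()
edge_weight = adj[offset, row, col]
row = row + offset * n          # shift node indices into per-graph blocks
col = col + offset * n
edge_index = torch.stack([row, col], dim=0)

x = x.view(num_graphs * n, num_feats)
batch = torch.arange(0, num_graphs).view(-1, 1).repeat(1, n).view(-1)

# Every edge connects two nodes belonging to the same graph.
assert (batch[edge_index[0]] == batch[edge_index[1]]).all()
```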
Apparently, not all models take `edge_weight` as an argument.

Yes, not all models support `edge_weight` yet.
Can I also use the following code to generate adjacency matrices in batch format? My goal is to build a simple GNN-based GAN for graphs with the same number of nodes (90), with an MLP generator and a graph-level classifier as the discriminator. All my graphs are 90x90 symmetric adjacency matrices with a zero diagonal, so what I'm trying to do here is to generate only 4005 elements (half of a 90x90 matrix) and map them to the upper and lower triangles of another 90x90 torch tensor. Then I put a batch (n_samples_in_batch=16) of 90x90 tensors into a list and convert them into a Batch. Does this look like a reasonable way to do it? Sorry for my clumsy code.
Yes, this looks good to me. You might want to get rid of the for-loop iterating over each example in the mini-batch at some point though, as I assume it slows down your code.
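One loop-free way to do the triangle mapping is `torch.triu_indices` (variable names here are illustrative, not from the original code): scatter the 4005 generated values into the upper triangle, then mirror them by adding the transpose, which also keeps the diagonal at zero. The same index pair can be reused for a whole `[16, 90, 90]` batch tensor.

```python
import torch

n = 90
num_triu = n * (n - 1) // 2            # 4005 off-diagonal upper-triangle entries

# Stand-in for one generator output of shape [4005].
vec = torch.rand(num_triu, requires_grad=True)

idx = torch.triu_indices(n, n, offset=1)  # row/col indices strictly above diagonal
adj = torch.zeros(n, n)
adj[idx[0], idx[1]] = vec                 # fill upper triangle (differentiable)
adj = adj + adj.t()                       # mirror to lower triangle; diagonal stays 0

# Gradients flow back to vec; each value is used twice (upper + lower).
adj.sum().backward()
```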
I am interested in implementing an adversarial attack against a GATConv model I have created, using the DeepRobust library. They use a separate dense adjacency matrix that they update and add to the original adjacency matrix to create the adversarial samples. Furthermore, I am using a NEIGHBOR_SAMPLER as a data loader for my training process. Is there
(a) The
Is there a way to convert an adjacency tensor produced by an MLP into a Data object while allowing backprop, for a generative adversarial network? The GAN has an MLP generator with a pytorch_geometric-based GNN as the discriminator. I have not been able to find the answer to this question yet. Here is a simplified example of the problem.
Say I have this MLP generator:
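The original generator code did not survive the page scrape; a minimal hypothetical stand-in matching the description (a vector encoding a two-node graph, split into a 2x2 adjacency matrix and node features) might look like this. All names and layer sizes are illustrative only.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Hypothetical MLP emitting a flat vector for one 2-node graph."""
    def __init__(self, latent_dim=8, n_nodes=2, n_feats=1):
        super().__init__()
        self.n_nodes, self.n_feats = n_nodes, n_feats
        out_dim = n_nodes * n_nodes + n_nodes * n_feats
        self.mlp = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(),
                                 nn.Linear(16, out_dim))

    def forward(self, z):
        out = self.mlp(z)
        adj = out[: self.n_nodes ** 2].view(self.n_nodes, self.n_nodes)
        x = out[self.n_nodes ** 2:].view(self.n_nodes, self.n_feats)
        return adj, x

g = Generator()
adj, x = g(torch.randn(8))
```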
So, this generator returns a vector representing a graph with two nodes, which we can reshape into an adjacency matrix and a node feature vector.
Now, to convert this to a pytorch_geometric Data object, we must construct a COO edge list (the x parameter of the Data object is already given by the node features). However, if we loop through the adjacency matrix and add connections to a COO matrix one by one, backpropagation does not work from the pytorch_geometric GNN to the PyTorch MLP.
We can now construct the Data object like so:
However, when training a GAN by converting the generator output to a Data object for the GNN discriminator, backpropagation and optimization do not work (I assume because the grad_fn and grad properties are lost). Does anyone know how to convert a tensor to a pytorch_geometric Data object while allowing backprop to happen in a GAN whose MLP generator outputs an adjacency matrix/tensor and node features, and whose GNN (pytorch_geometric-based) discriminator takes a Data object as input?
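The indexing recipe from the first reply applies here as well. A minimal end-to-end sketch (names are hypothetical, and a simple sum stands in for the discriminator loss) shows gradients reaching the generator parameters; the resulting tensors are exactly what `Data(x=..., edge_index=..., edge_weight=...)` expects, so no step in the conversion detaches the graph:

```python
import torch
import torch.nn as nn

gen = nn.Linear(4, 4)                  # stand-in generator: flat 2x2 adjacency
z = torch.randn(4)

adj = gen(z).view(2, 2).sigmoid()      # dense, differentiable adjacency
edge_index = (adj > 0).nonzero().t()   # integer indices; no gradient needed
edge_weight = adj[edge_index[0], edge_index[1]]  # differentiable edge weights

# In a real setup, edge_index/edge_weight feed a Data object and a PyG GNN;
# here a sum stands in for the discriminator loss.
loss = edge_weight.sum()
loss.backward()
```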