DiffPool implementation #51
-
Hi all! I am working on graph classification and I realized I don't know how to properly implement the classic DiffPool method from Rex Ying et al. (paper). I can see we have a Pooling operation, but its output is always a head with arity 0 (no variables), so I cannot "create" the new nodes that result from pooling certain nodes in a previous level.
Any ideas? Thanks!
Replies: 2 comments
-
Hello, the issue I see with DiffPool is that the graphs (clusters and edges) at each layer are learned via another GNN layer, and their structure changes throughout training. In PyNeuraLogic, you could "generate" another graph during grounding/compile time, but this generation wouldn't be learnable. A rough sketch of that idea is below.
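To make that concrete, here is a rough sketch of what such a fixed, compile-time pooling could look like. The rule syntax loosely follows the library's README, but all predicate names, the `i % 2` clustering, and the `[3, 3]` weight shapes are purely illustrative assumptions, not an actual DiffPool implementation:

```python
from neuralogic.core import Template, R, V

template = Template()

# A fixed clustering decided at template-construction time (here a
# hypothetical "node i goes to cluster i % 2"); these are ordinary
# ground facts, so the coarser graph's structure cannot be learned.
for node in range(6):
    template += R.assign(node % 2, node)

# Standard GNN-style rule for level-1 node embeddings.
template += R.h1(V.X) <= (R.feature(V.Y)[3, 3], R.edge(V.X, V.Y))

# Pooled "super-node" embeddings: each cluster aggregates the level-1
# embeddings of its assigned nodes. Only the weights are trained; the
# assignment (i.e. the new graph) stays fixed.
template += R.cluster(V.C) <= (R.assign(V.C, V.X), R.h1(V.X)[3, 3])
```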
-
Hi, briefly looking at the paper, they probably don't really change the structure of the graph during learning. I might be wrong, but if I read it correctly, the clustering is only soft (probabilistic), i.e. each node actually belongs to every cluster, just with different weights (the figure presumably just displays the single hardest assignment from the softmax output). Hence the assignment is effectively a fully connected graph, which translates to just a series of dense matrix multiplications.

This should be possible to do, just like you would in a classic deep learning framework, but there is no logic in that, and emulating dense matrix operations with logic is generally not a good way to go. The benefit of this framework lies in complex sparse structures, not dense matrix multiplications, so I'm afraid there is zero benefit in trying to emulate this model in PyNeuraLogic.

It could start being interesting if the second-layer structure somehow followed from structural properties of the first layer, as in the various sub-graph or cellular GNNs, which have similar pictures but different principles :)
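For illustration, here is a minimal NumPy sketch of that dense formulation (toy sizes; single random linear maps standing in for the paper's two GNN branches):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy dimensions: n nodes, d features, c clusters at the next level.
n, d, c = 6, 4, 2
rng = np.random.default_rng(0)

A = rng.integers(0, 2, size=(n, n)).astype(float)  # adjacency
X = rng.normal(size=(n, d))                        # node features

# Stand-ins for the two GNN branches of a DiffPool layer; the paper
# uses full GNNs here, these are just random linear maps for brevity.
W_embed = rng.normal(size=(d, d))
W_pool = rng.normal(size=(d, c))

Z = A @ X @ W_embed          # embedding branch:  Z = GNN_embed(A, X)
S = softmax(A @ X @ W_pool)  # soft assignments:  S = softmax(GNN_pool(A, X))

X_next = S.T @ Z             # pooled features:   (c, d)
A_next = S.T @ A @ S         # pooled adjacency:  (c, c) -- dense!

print(X_next.shape, A_next.shape)
```

Note that `A_next` comes out dense regardless of how sparse `A` was, which is exactly why there is no sparse structure left for the logic/grounding machinery to exploit.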