Equivalent Code for CNN using GCNConv #2312
hariharannatesh asked this question in Q&A
I attempted to write code equivalent to a CNN using the GCNConv layer. Although I was able to write conceptually similar code with GCNConv, it does not behave the same as Conv2d on the same input data.
To compare the two, I take a 3x3 tensor generated by two Gaussian curves. I reshape the data to (1, 3, 3, 1) and pass it through a Conv2d layer with kernel size (3, 1), padding (1, 0), and bias=False. The number of input channels is 3 and the number of output channels is 2, so the number of learnable parameters is 3 x 3 x 2 (kernel size x input channels x output channels), i.e. 18.
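For concreteness, here is a minimal NumPy sketch of the Conv2d setup above. The random values are placeholders for the Gaussian data, and the loop mimics (rather than calls) nn.Conv2d, just to make the parameter count and output shape explicit:

```python
import numpy as np

# Placeholder input: 3 channels, spatial size 3x1 (stand-in for the Gaussian data).
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 3, 1))          # (in_channels, H, W)
w = rng.normal(size=(2, 3, 3, 1))       # (out_channels, in_channels, kH, kW)
print(w.size)                           # 2 * 3 * 3 * 1 = 18 learnable parameters

# Zero-pad the height dimension by 1 on each side, then slide the (3, 1) kernel.
xp = np.pad(x, ((0, 0), (1, 1), (0, 0)))
out = np.zeros((2, 3, 1))
for o in range(2):                      # output channels
    for h in range(3):                  # output positions along H
        out[o, h, 0] = np.sum(xp[:, h:h + 3, :] * w[o])
print(out.shape)                        # (2, 3, 1)
```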
In the GCNConv code, I transpose the same 3x3 tensor, giving 3 nodes with 3 input features each; the number of output features is 2. I define the edge index as [[0,0,1,1,1,2,2], [0,1,0,1,2,1,2]] (under a given kernel in Conv2d, the central node is connected to the nodes before and after it). As per the GCNConv layer, the number of learnable parameters is 3 x 2 (input features x output features), i.e. 6 (bias=False).
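A NumPy sketch of the corresponding GCN propagation rule, X' = D^{-1/2} A D^{-1/2} X W, using the dense adjacency implied by the edge index above (self-loops are already included in it). This mimics, rather than calls, torch_geometric's GCNConv, and the feature values are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 3))              # node features: (num_nodes, in_features)
W = rng.normal(size=(3, 2))              # weight: (in_features, out_features)
print(W.size)                            # 3 * 2 = 6 learnable parameters

# Dense adjacency from edge_index [[0,0,1,1,1,2,2], [0,1,0,1,2,1,2]].
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)
d = A.sum(axis=1)                        # node degrees: [2, 3, 2]
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))

# Symmetrically normalized propagation, as in the GCN paper.
out = D_inv_sqrt @ A @ D_inv_sqrt @ X @ W
print(out.shape)                         # (3, 2)
```

Note that every neighbour of a node is aggregated through the same 3x2 weight matrix, which is where the single factor of 3x2 = 6 parameters comes from.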
How do I account for the factor of 3 (the kernel size, which determines the number of distinct parameters per input channel in Conv2d) in GCNConv? I ask because for larger data this factor affects the capacity of the network, which in turn affects accuracy.
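One way to see where the factor of 3 lives (a sketch of the underlying algebra, not an official PyG recipe): a kernel-size-3 1-D convolution is the sum of three per-offset linear maps, one for the left neighbour, one for the node itself, and one for the right neighbour. Giving each offset its own (3, 2) weight matrix, instead of GCNConv's single shared one, restores 3 x 3 x 2 = 18 parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 3))              # (num_nodes, in_features), placeholder values
W_left, W_self, W_right = (rng.normal(size=(3, 2)) for _ in range(3))

# Shift operators that select the previous / next node's features,
# with a zero row at the boundary (mirroring zero padding in the conv).
S_prev = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
S_next = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)

# out[i] = X[i-1] @ W_left + X[i] @ W_self + X[i+1] @ W_right
out = S_prev @ X @ W_left + X @ W_self + S_next @ X @ W_right
print(W_left.size + W_self.size + W_right.size)   # 18, matching the Conv2d count
print(out.shape)                                  # (3, 2)
```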