GINConv use example #65
Hi, what version of TensorFlow/Keras are you using? The following works for me:

```python
import numpy as np
import scipy.sparse as sp
from spektral.layers import GINConv
from spektral.layers import ops

# Random sparse adjacency matrix, converted to a tf.SparseTensor
A = sp.rand(10, 10)
A = ops.sp_matrix_to_sp_tensor(A)

# Random node features for 10 nodes with 5 features each
X = np.random.randn(10, 5)

out = GINConv(300, activation='relu')([X, A])
```

Cheers
Thank you @danielegrattarola, it seems the problem was connected to the Keras version. After upgrading, everything works in your example. Could you just tell me how to preprocess the label vector for disjoint mode? I'd like to connect a GlobalSumPool layer to GINConv and then to a fully-connected one. After converting the A and X matrices to disjoint mode, their shapes go from (2200, 68, 68) and (2200, 68, 12) to (149600, 149600) and (149600, 12), which mismatches y, which still has length 2200. How can I use the "I" vector to resolve this problem? Thanks!
I'm glad it works. If you have your disjoint graph, it should be sufficient to pass the graph-index vector I to the pooling layer:

```python
X1 = GINConv(12, activation='relu')([X, A])
out = GlobalSumPool()([X1, I])  # Shape = (2200, 12)
```

Cheers
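For context, here is a fuller sketch of the disjoint-mode pipeline this snippet belongs to. It is a minimal sketch assuming Spektral's 0.x disjoint-mode API; the hidden sizes, output head, and variable names are illustrative, not from the thread. The key point is that GlobalSumPool with I returns one row per graph, so its output aligns with the 2200 labels in y:

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from spektral.layers import GINConv, GlobalSumPool

F = 12  # node feature dimension (X in disjoint mode is (149600, 12))

X_in = Input(shape=(F,))                  # node features, stacked over all graphs
A_in = Input(shape=(None,), sparse=True)  # block-diagonal (disjoint) adjacency
I_in = Input(shape=(), dtype=tf.int32)    # graph index of each node (the "I" vector)

X1 = GINConv(12, activation='relu')([X_in, A_in])
pooled = GlobalSumPool()([X1, I_in])          # (n_graphs, 12): one row per graph
out = Dense(1, activation='sigmoid')(pooled)  # illustrative head; match it to your task

model = Model(inputs=[X_in, A_in, I_in], outputs=out)
```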
Thank you @danielegrattarola. Is it possible to use GINConv multiple times in this setup? If so, what is the mechanism for pooling the A and I elements?
Yes, you can stack multiple layers. If you want to gradually reduce the size of the graph, you can use "standard" pooling methods like MinCutPool or TopKPool; those will return a reduced X, A, and I, and you can apply GIN again afterwards. Cheers
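To illustrate the pattern described above, a hedged sketch (the layer sizes and the 0.5 ratio are arbitrary choices, and it assumes Spektral 0.x, where TopKPool in disjoint mode returns the reduced X, A, and I):

```python
from spektral.layers import GINConv, TopKPool, GlobalSumPool

# Assumes X_in, A_in, I_in are defined as in the disjoint-mode sketch above
X1 = GINConv(64, activation='relu')([X_in, A_in])
X1, A1, I1 = TopKPool(ratio=0.5)([X1, A_in, I_in])  # keep the top 50% of nodes

X2 = GINConv(64, activation='relu')([X1, A1])       # apply GIN again on the reduced graph
out = GlobalSumPool()([X2, I1])                     # still one row per graph
```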
Thanks @danielegrattarola. I have an interesting observation for you. I was analyzing your disjoint-mode example of GraphConvSkip with TopKPool. In the train_step method I wanted to check some predictions by printing them on screen for each batch in the training loop. To make it clearer, in the fitting loop I placed something like this:
And in the train_step method:
Could you please tell me why I'm getting this type of "print report":
It looks like not every batch calls the predict method; I'm not sure if this has a negative effect on the learning process of the model. When I print the predictions' shape in the evaluate method like this:
As the training process continues, I get this kind of report:
The next epochs generate this:
Edit: I found it might be related to
That is exactly correct.
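The behavior described above is the classic symptom of `tf.function` tracing (an assumption here, since the edit is truncated): a Python `print` inside a `@tf.function`-decorated train step runs only when the function is (re)traced, not on every batch, while `tf.print` runs at graph execution time. A minimal sketch:

```python
import tensorflow as tf

@tf.function
def train_step(x):
    print('Tracing!')          # Python side effect: runs only during tracing
    tf.print('Running batch')  # Graph op: runs on every call
    return x * 2

train_step(tf.constant(1.0))  # prints "Tracing!" and "Running batch"
train_step(tf.constant(2.0))  # prints only "Running batch"
```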
Thank you for your recent help @danielegrattarola! I was analyzing another of your examples with the GIN layer (https://github.com/danielegrattarola/spektral/blob/master/examples/graph_prediction/tud_disjoint.py) and want to ask some questions:
That's because I copy-pasted the code from the QM9 regression example and forgot to change the loss :D
Currently there are no methods based on the
Also, the
OK, I understand, thanks. Glad I could be helpful in some way (the loss). :) Have a great day!
Hello @danielegrattarola, could you please provide an example of using the GINConv layer? I have a problem when passing a tensor (the output of a Keras Input layer) to this layer during model definition. It is connected with the propagate method in the MessagePassing class.
Model structure:
Line where the error occurs:
```python
self.index_i = A.indices[:, 0]
```
Error type:
```
TypeError: 'SparseTensor' object is not subscriptable.
```
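For reference, the pattern from the answer at the top of this thread (the root cause here turned out to be an outdated Keras version): convert the SciPy adjacency matrix to a tf.SparseTensor with Spektral's helper before calling the layer. A minimal sketch reusing the A and X from that answer:

```python
from spektral.layers import GINConv, ops

# scipy.sparse matrix -> tf.SparseTensor, as in the answer above
A_sp = ops.sp_matrix_to_sp_tensor(A)
out = GINConv(300, activation='relu')([X, A_sp])
```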