ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type numpy.ndarray) #162

Closed
herman-nside opened this issue Jan 16, 2021 · 4 comments

@herman-nside

I'm having trouble running my Spektral model. I am aware of this issue which might have something to do with Windows paths, but I am on a Mac, and I think this is a different issue.

My code is as follows:

import spektral
import tensorflow as tf
from dataset_class import GNN_Dataset
from spektral.data.dataset import Dataset
from spektral.layers import GraphSageConv
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import l1_l2

class MyFirstGNN(Model):

    def __init__(self):
        super().__init__()
        self.X_1 = GraphSageConv(512,
                                 activation='relu',
                                 kernel_regularizer=l1_l2(l1=0.01, l2=0.01),
                                 bias_regularizer=l1_l2(l1=0.01, l2=0.01),
                                 kernel_initializer=tf.keras.initializers.GlorotUniform(),
                                 )

        self.last = Dense(14, activation='softmax')  # (X_4)

    def call(self, inputs):
        out = self.X_1(inputs)
        out = self.last(out)
        return out

dataset = GNN_Dataset('/dataset/sn/')
batch_loader = spektral.data.loaders.BatchLoader(dataset, batch_size=2)
model = MyFirstGNN()
model.compile(optimizer='Adam', loss='binary_crossentropy')
model.fit(batch_loader.load(), steps_per_epoch=batch_loader.steps_per_epoch, epochs=10)

I get the following error:

test.py 48 <module>
model.fit(batch_loader.load(), steps_per_epoch=batch_loader.steps_per_epoch, epochs=10)

training.py 108 _method_wrapper
return method(self, *args, **kwargs)

training.py 1049 fit
data_handler = data_adapter.DataHandler(

data_adapter.py 1105 __init__
self._adapter = adapter_cls(

data_adapter.py 788 __init__
peek = _process_tensorlike(peek)

data_adapter.py 1021 _process_tensorlike
inputs = nest.map_structure(_convert_numpy_and_scipy, inputs)

nest.py 635 map_structure
structure[0], [func(*x) for x in entries],

nest.py 635 <listcomp>
structure[0], [func(*x) for x in entries],

data_adapter.py 1016 _convert_numpy_and_scipy
return ops.convert_to_tensor(x, dtype=dtype)

ops.py 1499 convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)

tensor_conversion_registry.py 52 _default_conversion_function
return constant_op.constant(value, dtype, name=name)

constant_op.py 263 constant
return _constant_impl(value, dtype, shape, name, verify_shape=False,

constant_op.py 275 _constant_impl
return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)

constant_op.py 300 _constant_eager_impl
t = convert_to_eager_tensor(value, ctx, dtype)

constant_op.py 98 convert_to_eager_tensor
return ops.EagerTensor(value, ctx.device_name, dtype)

ValueError:
Failed to convert a NumPy array to a Tensor (Unsupported object type numpy.ndarray).

My dataset looks like this:

dataset[0]
Out[2]: Graph(n_nodes=9, n_node_features=36, n_edge_features=None, n_labels=9)

Inside batch_loader, it looks like this:

for i in batch_loader:
    print(i)
    break

((array([[[0., 1., 0., ..., 0., 0., 0.],
        [1., 0., 0., ..., 0., 0., 0.],
        [1., 0., 0., ..., 0., 0., 0.],
        ...,
        [0., 1., 0., ..., 0., 0., 0.],
        [1., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.]],
       [[0., 1., 0., ..., 0., 0., 0.],
        [1., 0., 0., ..., 0., 0., 0.],
        [1., 0., 0., ..., 0., 0., 0.],
        ...,
        [0., 1., 0., ..., 0., 0., 0.],
        [0., 1., 0., ..., 0., 0., 0.],
        [1., 0., 0., ..., 0., 0., 0.]]]), array([[[0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0],
        [0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0],
        [0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
        [1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
        [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
        [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
       [[0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1],
        [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0],
        [0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0],
        [0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
        [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0],
        [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
        [1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]])), array([array([0., 1., 1., 0., 0., 1., 1., 0., 0., 1., 0., 1., 0., 0.]),
       array([1., 1., 1., 0., 0., 1., 1., 0., 1., 1., 0., 1., 1., 1., 0.])],
      dtype=object))

My first thought is that this error is not caused by Spektral, but by something I'm doing wrong. Is there any way to avoid it, or is there a way to cast the data (either A or y or both) to a tensor before fitting?

@danielegrattarola (Owner)

Hi,

the first entry in y has 14 elements, while the second has 15.
BatchLoader only supports graph-level labels (meaning that labels do not get zero-padded -- that would not make sense), so all labels should have the same shape.

In this specific case, the error occurs because the Loader tries to create a jagged array and NumPy automatically converts it to an object dtype, which is not supported by TensorFlow.
The solution is to create labels that all have the same shape.
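
For example (a minimal sketch with dummy labels, not your actual data), this is the difference the label shapes make when the Loader stacks them:

import numpy as np
import tensorflow as tf

# Labels of different lengths (14 vs. 15) can only be stacked as an
# object-dtype array of arrays...
jagged = np.array([np.zeros(14), np.zeros(15)], dtype=object)
print(jagged.dtype)  # object
# ...which is exactly what the conversion to a Tensor rejects:
# tf.convert_to_tensor(jagged)  # ValueError: Failed to convert a NumPy array to a Tensor
#                               # (Unsupported object type numpy.ndarray)

# Labels that all share the same shape stack into a regular float array
# and convert without issues.
uniform = np.array([np.zeros(15), np.zeros(15)])
print(tf.convert_to_tensor(uniform).shape)  # (2, 15)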

I also see another problem with your code: GraphSage does not support batch mode.
It seems that you're trying to do node-level prediction anyway, so BatchLoader will not be usable in your case. You should try to convert your code to use a DisjointLoader (and set node_level=True when you instantiate the loader).
This should solve your current problem AND the one you would run into right after fixing it.

Cheers,
Daniele

@herman-nside (Author) commented Jan 19, 2021

Thank you very much! Yes, most of these problems stem from me switching from the DisjointLoader to the BatchLoader. My concern with the disjoint loader is that my networks have different sizes: a fixed step size over the resulting disjoint matrix would cut some adjacency matrices off halfway, and in other steps would combine matrices from more than one graph. Is that a valid concern? Is there something else I can do to mitigate this effect?

@danielegrattarola (Owner)

It would be a valid concern if you first created one big disjoint matrix and then cut it up into mini-batches, but that is exactly the "magic" of the DisjointLoader: it creates a disjoint graph on the fly for each mini-batch, so there should be no problem.

You only need to change this line:

batch_loader = spektral.data.loaders.BatchLoader(dataset, batch_size=2)

to

disjoint_loader = spektral.data.loaders.DisjointLoader(dataset, batch_size=2, node_level=True)
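
In context (just a sketch against the script you posted above, assuming the rest stays as-is), that would look like:

# Each mini-batch is built on the fly as a disjoint union of batch_size graphs
# (no zero-padding), and node_level=True keeps the labels per node.
disjoint_loader = spektral.data.loaders.DisjointLoader(dataset, batch_size=2, node_level=True)

# The fit call stays the same, just pointed at the new loader.
model.fit(disjoint_loader.load(),
          steps_per_epoch=disjoint_loader.steps_per_epoch,
          epochs=10)

# Optional check (assuming the loader yields (inputs, labels) pairs like BatchLoader does):
# node counts vary from batch to batch, and no graph is ever split across batches.
for inputs, labels in disjoint_loader:
    print(inputs[0].shape, labels.shape)  # node features and node-level labels of this batch
    break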

Cheers

@herman-nside (Author)

Thank you very much! I understand that better now.
