For the architecture I want to implement, I need layers that work on inputs of different sizes while sharing weights.
The following code snippet from the functional API guide (the section "The concept of layer 'node'") achieves this for Convolution2D and works fine:
from keras.layers import Input, Convolution2D

a = Input(shape=(32, 32, 3))
b = Input(shape=(64, 64, 3))

conv = Convolution2D(16, 3, 3, border_mode='same')
conved_a = conv(a)

# only one input so far, the following will work:
assert conv.input_shape == (None, 32, 32, 3)

conved_b = conv(b)

# now the `.input_shape` property wouldn't work, but this does:
assert conv.get_input_shape_at(0) == (None, 32, 32, 3)
assert conv.get_input_shape_at(1) == (None, 64, 64, 3)
However, running the equivalent code for Convolution3D results in a ValueError because the input shape expected by the layer seems to be fixed to the first shape it saw.
from keras.layers import Input, Convolution3D

a = Input(shape=(32, 32, 32, 3))
b = Input(shape=(64, 64, 64, 3))

conv = Convolution3D(16, 3, 3, 3, border_mode='same')
conved_a = conv(a)

# only one input so far, the following will work:
assert conv.input_shape == (None, 32, 32, 32, 3)

conved_b = conv(b)
ValueError: Input 0 is incompatible with layer convolution3d_1: expected shape=(None, 32, 32, 32, 3), found shape=(None, 64, 64, 64, 3)
I'm using the TensorFlow backend (and dim_ordering) on GPU.
Any ideas what might cause this inconsistency between Convolution2D and Convolution3D?
gvtulder added a commit to gvtulder/keras that referenced this issue (Jan 25, 2017).
@lheinric: Convolution3D fixes its input shape when the layer is built, which happens during the first call; this fixed shape is used for some optimisations. Convolution2D does not fix the input shape. The patch in the pull request should fix your problem.
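For reference, a minimal sketch of what the shared Convolution3D usage from the report should look like once the layer no longer fixes its input shape at build time. The shapes and the get_input_shape_at calls are taken from the Convolution2D snippet above; that the second call succeeds is the expected behaviour with the patch applied, not something the unpatched code does.

from keras.layers import Input, Convolution3D

a = Input(shape=(32, 32, 32, 3))
b = Input(shape=(64, 64, 64, 3))

conv = Convolution3D(16, 3, 3, 3, border_mode='same')
conved_a = conv(a)
conved_b = conv(b)  # with the patch, this second call should no longer raise a ValueError

# as with Convolution2D, the per-node input shapes should then be queryable:
assert conv.get_input_shape_at(0) == (None, 32, 32, 32, 3)
assert conv.get_input_shape_at(1) == (None, 64, 64, 64, 3)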