When changing img_size = 64 to account for images with different width and height, i.e. also changing process_img, the definition of the discriminator, and line 391 to:

all_images.append(process_img(item, img_w, img_h))

the training crashes after the first epoch with:

ValueError: Cannot reshape a tensor with 49152 elements to shape [32,128] (4096 elements) for '{{node Reshape}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](discriminator/flatten/Reshape, Reshape/shape)' with input shapes: [32,1536], [2] and with input tensors computed as partial shapes: input[1] = [32,128].

Apparently, the problem is here:

real_first_output = tf.reshape(self.discriminator(images[i,...], training=True), (batch_size, disc_dim))

Somehow, the discriminator is not outputting the correct dimensions for the reshape. I suspect something is wrong with disc_dim, but I couldn't solve it.

I am using:

img_w = 256
img_h = 144
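A quick arithmetic check on the numbers in the traceback (all values taken from the error message above) shows the mismatch is a clean factor of 12 per sample: the discriminator's flatten layer emits 1536 features for a 256x144 input, while the reshape expects disc_dim = 128.

```python
# Values taken from the ValueError above.
batch_size = 32
disc_dim = 128    # per-sample size the reshape expects
flat_dim = 1536   # per-sample size the discriminator's flatten layer actually emits

expected_elems = batch_size * disc_dim  # 4096, what shape (32, 128) can hold
actual_elems = batch_size * flat_dim    # 49152, what the tensor really contains

# The discriminator produces 12x more flattened features for 256x144
# input than for the 64x64 input it was designed around.
print(expected_elems, actual_elems, actual_elems // expected_elems)  # 4096 49152 12
```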
The neural network's size is fixed. If you feed it larger input images, the output of the network also becomes larger, and then the shapes no longer match.

There are two options:

Manually changing the neural network, for example by adding an additional layer, since the input images are now larger.
Resizing the input to 64x64.

In general, any non-square input size is going to be difficult, because the convolutional layers are then no longer symmetric and it can be quite tedious to design them so that the shapes match everywhere.
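For the second option, a minimal sketch of a preprocessing step that forces any input back to the 64x64 square the discriminator was designed for (this process_img and the [-1, 1] scaling are assumptions for illustration, not the repository's actual code):

```python
import tensorflow as tf

def process_img(img, target_size=64):
    """Hypothetical sketch: resize an image of any width/height to a
    target_size x target_size square and scale pixels to [-1, 1],
    a common convention for GAN discriminator inputs."""
    img = tf.image.resize(img, (target_size, target_size))  # bilinear by default
    img = (tf.cast(img, tf.float32) / 127.5) - 1.0
    return img
```

With this, a 144x256 frame is squashed to 64x64 before it reaches the discriminator, so the flatten layer's output size stays exactly what the reshape to (batch_size, disc_dim) expects.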