
VGG-like convnet from Keras examples fails on custom data #2092

Closed
aaronpolhamus opened this issue Mar 26, 2016 · 2 comments

Comments

@aaronpolhamus

I'm trying to run a convnet with Keras on the TensorFlow backend. It's an adaptation of the "VGG-like" convnet from http://keras.io/examples/. I've linked a very small sample of the data to this issue report and created scripts on Gist that reproduce the error (https://gist.github.com/aaronpolhamus/39c7a71151b8560d02dd).

Basically I've adapted the website example to accept an array of greyscale, single-channel images. This seems pretty straightforward, but when I attempt to train the model after compiling it, I get the following error:

ValueError: Cannot feed value of shape (32, 128, 256) for Tensor u'Placeholder_89:0', which has shape '(?, 1, 128, 256)'

I'm puzzled: shouldn't it be no problem to generate 32 convolution filters of shape (32, 128, 256) from a series of input images with dimensions (1, 128, 256)? I'm still fairly new to this, and the error just isn't informative enough for me to tell what's going wrong or how to fix it.
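For reference, the mismatch looks roughly like this (a simplified sketch in the Keras 1.x style of the website example; the layer sizes here are illustrative, not the exact gist code):

import numpy as np
from keras.models import Sequential
from keras.layers import Convolution2D, Activation

# The VGG-like example declares a channels-first input: (channels, rows, cols).
model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=(1, 128, 256)))
model.add(Activation('relu'))
# ... rest of the stack, compile, etc. ...

# Greyscale images stacked as 2-D arrays give a 3-D array with no channel axis,
X_train = np.zeros((32, 128, 256))
print(X_train.shape)  # (32, 128, 256) -- but the model's placeholder is (?, 1, 128, 256)
# so model.fit(X_train, ...) fails with the ValueError above on the TensorFlow backend.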

Here's a link to sample data:
keras_ex_data.zip

Thanks in advance!

@aaronpolhamus
Author

Solved it. The script needs the following line to reshape the input array:

X_train = X_train.reshape(X_train.shape[0], 1, 128, 256)

Really, all we're doing here is adding a somewhat redundant channel dimension so that the array has shape (8144, 1, 128, 256) instead of (8144, 128, 256). If we were using an RGB array this wouldn't be redundant at all, since the shape would be (8144, 3, 128, 256). Bottom line: my input array was missing the channel dimension, which I thought I could get away with omitting for greyscale images. Turns out you still need to include the channel axis explicitly.
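For anyone else who hits this, the same thing can be done with numpy without hard-coding the image dimensions. Something like the following should be equivalent (the dummy array is just so the snippet stands alone; in the real script X_train is the greyscale array loaded from the data, and any test array needs the same treatment):

import numpy as np

# X_train as loaded in the script: greyscale images with no channel axis.
X_train = np.zeros((8144, 128, 256))

# Insert a channel axis right after the sample axis to match the
# channels-first layout (n_samples, 1, rows, cols) the model expects.
X_train = np.expand_dims(X_train, axis=1)
print(X_train.shape)  # (8144, 1, 128, 256)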

Great package. With that fix, the code executes as-is.

@dongzhuoyao

Thank you for sharing.
