[help wanted] training a conv2d net with data of variable sizes #8769
Comments
Hi! I think I can help you. If you want your model to handle multiple image sizes, every layer in the network must be able to handle variable-size inputs. Dense layers, however, need to know the exact shape of the incoming tensor in order to size their kernel. Long story short: for classification, I advise the pattern convolutions -> GlobalAveragePooling -> Dense; for segmentation, artifact removal, or super-resolution, use only convolutional layers. I hope it helps.
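To see why this pattern works, here is a small numpy sketch (my own illustration, not code from the thread) of what global average pooling does to the shape: averaging over the spatial axes always yields one value per channel, so the Dense head sees a fixed-length vector no matter what the input height and width were.

```python
import numpy as np

def global_average_pool(feature_map):
    """feature_map of shape (H, W, C) -> vector of shape (C,)
    by averaging over the two spatial axes."""
    return feature_map.mean(axis=(0, 1))

# Two "feature maps" with different spatial sizes but the same channel count:
small = np.random.rand(7, 9, 32)
large = np.random.rand(50, 61, 32)

print(global_average_pool(small).shape)  # (32,)
print(global_average_pool(large).shape)  # (32,)
```

Both calls produce a length-32 vector, which is why a Dense layer after the pooling no longer cares about the image size.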
Hello, thanks for the help, and a good year to you. I tried your fix, which makes my model look like this: model = Sequential(); model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(None,None,1))) but I still get the same message: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (52, 1) Do you have any other idea?
This means that there is a problem with the shape of your numpy array. The network wants your array to have the shape (num_samples, height, width, 1).
That's because I cannot have an array in this format, as my different images don't have the same size, unless there is a way to do it in Python that eluded me. I tried concatenating my "images" (which are 2D arrays, all of different sizes) into an array called dataset this way: datasets = []; each call to task.get_dataset().get_data() returns a 2D array, and in the end I get an array of 52 2D arrays. I tried to find a way to change it into a 4D array of doubles but didn't find one.
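What numpy does with such a ragged collection is worth seeing concretely. A small sketch (my own illustration, assuming arrays of mismatched shapes like the 52 datasets above): collecting them produces an array of dtype=object whose elements are themselves arrays, not the 4D numeric array a Conv2D input expects.

```python
import numpy as np

# Three 2D arrays with different shapes, standing in for the 52 datasets:
arrays = [np.zeros((4, 6)), np.zeros((3, 8)), np.zeros((5, 5))]

# np.stack(arrays) would raise ValueError because the shapes differ.
# Collecting them instead yields a 1D array of dtype=object; each element
# is a whole 2D array, which Keras cannot feed to a Conv2D input layer.
ragged = np.empty(len(arrays), dtype=object)
for i, a in enumerate(arrays):
    ragged[i] = a

print(ragged.shape)  # (3,)
print(ragged.dtype)  # object
```

This is essentially the situation behind the "(52, 1)"-style shape in the error: the outer container has one axis over the samples, and the per-sample dimensions are hidden inside the object elements.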
You have to resize your images so that they all have the same size, so that you can batch them together in a single numpy array. SciPy has a resize function if I remember correctly. But this is a classical problem of deep learning and doesn't concern the Keras API anymore. You can start by looking at this: https://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.imresize.html If that settles things on the Keras API side, please don't forget to close the issue.
Yes, in a normal case that's what I would have done, but my main problem is that I cannot resize my 2D arrays in a meaningful/neutral way (these are not "images" strictly speaking); resizing them would defeat the point of the experiment. That's why I was looking for a way to train a neural network on 2D arrays of varying size (hence the title). And it's quite strange that you are telling me it is not possible, as there are threads saying that it is, but they never fully explain how; I just got some people saying that a batch size of 1 would solve it. I guess I will have to drop it, or find another way.
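For reference, the batch-size-1 idea mentioned above hinges on feeding each sample as its own 4D batch rather than calling fit on the whole ragged collection at once. A numpy sketch of the reshaping step (my own illustration; sample_to_batch is a hypothetical helper, and the actual training call would be something like Keras's train_on_batch inside the loop):

```python
import numpy as np

def sample_to_batch(sample_2d):
    """Wrap one (H, W) array as a (1, H, W, 1) batch of a single
    grayscale image -- the 4D shape a Conv2D input layer expects."""
    return sample_2d[np.newaxis, :, :, np.newaxis]

samples = [np.random.rand(4, 6), np.random.rand(3, 8)]

for s in samples:
    batch = sample_to_batch(s)
    print(batch.shape)  # (1, 4, 6, 1), then (1, 3, 8, 1)
    # With input_shape=(None, None, 1), each of these batches could be
    # passed individually, e.g. model.train_on_batch(batch, label).
```

Each sample becomes its own well-formed batch, so the network never has to see two different sizes in the same array.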
Hi Gabartas,
Hi @gabartas, were you able to find a solution without resizing/cropping the images?
Hi,
I did not find any such feature in Keras where I can input images of various sizes, so I ended up finding the max width and height over the whole dataset (not per batch), then border-padding every image with zeros up to that size.
You can border-pad with mean pixel values or any other method; I chose zeros so they get weeded out by max pooling later.
Best wishes,
Mohammad Abuzar
…On Thu, Jun 28, 2018, 6:35 AM Pranav Rastogi ***@***.***> wrote:
Hi @gabartas <https://github.com/gabartas>
Were you able to find a solution without resizing/cropping the images?
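The approach described above can be sketched in a few lines of numpy (my own illustration, not code from the thread): find the dataset-wide maximum height and width, then place each array in the top-left corner of a zero-filled canvas of that size.

```python
import numpy as np

def pad_to_common_size(arrays):
    """Zero-pad a list of 2D arrays to the max height/width in the list,
    returning a single (N, H_max, W_max, 1) batch."""
    h_max = max(a.shape[0] for a in arrays)
    w_max = max(a.shape[1] for a in arrays)
    batch = np.zeros((len(arrays), h_max, w_max, 1))
    for i, a in enumerate(arrays):
        batch[i, :a.shape[0], :a.shape[1], 0] = a  # top-left placement
    return batch

arrays = [np.ones((4, 6)), np.ones((3, 8)), np.ones((5, 5))]
batch = pad_to_common_size(arrays)
print(batch.shape)  # (3, 5, 8, 1)
```

The resulting 4D batch can be fed to model.fit directly; whether zero padding is "neutral" enough depends on the data, which is exactly the concern raised earlier in the thread.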
Hi @mshaikh2, I am also working on a similar scenario; it would be great if you could share your implementation. Thanks!
Hello, and sorry if a similar question has already been asked, but I have reached a dead end in my search for a solution to this problem.
I am trying to train a Keras CNN on 2D datasets, but those datasets vary greatly in size, and padding them would make little sense. I understood that it is possible to train a CNN on variable-size training data using batches of size 1, but model.fit still won't work. As I am just starting with Keras, I could have missed something, but I keep getting this message:
Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (52, 1)
Here is a simplified version of my code:
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
import numpy as np
# Load dataset
dataset = np.load("openml\datasets.npy")
labels = np.load("openml\labels.npy")
Y = []
# input and output
X = dataset
Y = np_utils.to_categorical(Y, 5)
# create model
model = Sequential()
model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(None,None,1)))
model.add(Convolution2D(32, 3, 3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dense(128, activation='relu'))
model.add(Dense(5, activation='softmax'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=50, batch_size=1)
model.save('meddling_model.h5')
From what I understood, input_shape=(None,None,1) should allow the CNN to handle inputs of varying sizes. I think the problem is caused by the fact that my dataset variable is a list of arrays, with a "shape" of ([number of datasets], 1), each array having a very different size since they come from quite dissimilar 2D datasets.
I know that 52 training samples doesn't seem like enough; I was using these to check whether what I'm trying to do is possible at all before moving to my full training data.
Sorry for the long question, and thank you if you read all that through.