Help: 'Wrong number of dimensions: expected 3, got 2 with shape (32L, 60L).' in LSTM model #1641
Comments
You will need to do that, but then you shouldn't put X_train.shape as the input_shape; use X_train.shape[1:] instead.
Thank you! Reshaping the arrays and adding .shape[1:] lets it run. May I ask why the input shape needs to be .shape[1:]? Also, (off topic) the output looks like:
Is there a reason the loss is negative?
So X.shape is (samples, timesteps, dimension), but the model architecture doesn't care about how many training examples (samples) you have. Once you've built the model you can feed it a hundred million examples; it doesn't matter. So you don't pass that as a parameter when you build your model. X.shape[1:] is just (timesteps, samples), the two dimensions that matter. Incidentally, if you're on a Theano backend you also don't need to specify the number of timesteps, but you then need to pass "None" for that dimension. So instead you would pass in (None, dimension).

As to why your score is negative: there's still something a bit fishy with your model. Your LSTM has 128 output dimensions and then you're evaluating binary cross-entropy on that? Is your y really 128-dimensional binary data? Or else you probably meant to use a different objective function. Without knowing more about your data (for instance the size of your y matrix) it's hard for me to help further.
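A minimal sketch of the two ways of specifying the input shape described above (the shapes and import paths are illustrative, not the poster's actual data):

```python
import numpy as np
from keras.models import Sequential
from keras.layers.recurrent import LSTM

# Illustrative data: X.shape == (samples, timesteps, dimension)
X = np.zeros((100000, 60, 1), dtype='float32')

# Only the per-sample shape goes into the model; the sample count never does.
model = Sequential()
model.add(LSTM(128, input_shape=X.shape[1:]))    # i.e. input_shape=(60, 1)

# On the Theano backend the number of timesteps can be left unspecified:
model_variable_len = Sequential()
model_variable_len.add(LSTM(128, input_shape=(None, 1)))
```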
I'm guessing you meant (timesteps, dimension)? That makes sense, though. Thank you for the information. As for the output data, yes, a `binary_crossentropy` loss function doesn't make much sense, considering the data look like:
Where the first list contains sequences of input (which are themselves lists), and the output is a single float value. I've changed the model:

```python
import numpy as np
from keras.models import Sequential
from keras.layers.recurrent import LSTM
import t  # the poster's own data-loading module (not shown in this thread)

batch_size = 32
print('Loading data...')
(X_train, y_train), (X_test, y_test) = t.LoadData()
print(len(X_train), 'train sequences')
print(len(X_test), 'test sequences')
X_train = np.reshape(X_train, X_train.shape + (1,))
X_test = np.reshape(X_test, X_test.shape + (1,))
print('X_train shape:', X_train.shape)
print('X_test shape:', X_test.shape)
print('Build model...')
model = Sequential()
model.add(LSTM(1, input_shape=X_train.shape[1:]))
model.compile(loss='mse',
optimizer='sgd',
class_mode="categorical")
print("Train...")
model.fit(X_train, y_train, batch_size=batch_size, nb_epoch=3,
validation_data=(X_test, y_test), show_accuracy=True)
score, acc = model.evaluate(X_test, y_test,
batch_size=batch_size,
show_accuracy=True)
print('Test score:', score)
print('Test accuracy:', acc)
```

And it now produces output closer to the desired result:
I'll keep plugging away. :)
For most applications you would probably want more than 1 hidden state on your LSTM! You can put a Dense layer (or TimeDistributedDense) with an output dimension of 1 to project the hidden state down to 1 dimension on output, while still retaining more than 1 dimension of state. So something like:

```python
model.add(LSTM(128, input_shape=X_train.shape[1:]))
```
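For instance, a minimal sketch of the full model that suggestion points towards (the Dense(1) projection and the mse loss are assumptions drawn from the surrounding discussion, not part of the original comment):

```python
import numpy as np
from keras.models import Sequential
from keras.layers.recurrent import LSTM
from keras.layers.core import Dense

# Stand-in with the same (samples, timesteps, 1) layout as X_train earlier in the thread.
X_train = np.zeros((1000, 60, 1), dtype='float32')

model = Sequential()
model.add(LSTM(128, input_shape=X_train.shape[1:]))  # keep a 128-dimensional hidden state
model.add(Dense(1))                                  # project the last hidden state to a single output value
model.compile(loss='mse', optimizer='sgd')
```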
I am trying to work out an algorithm for iterative forecasting using an LSTM. There seems to be something wrong with my code; would you be kind enough to help? The error I am getting is: 'Error when checking : expected lstm_2_input to have 3 dimensions, but got array with shape (1, 46)'
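Without seeing the code it is hard to say more, but that error usually means a 2-D array is being fed to an LSTM that expects 3-D input of shape (samples, timesteps, features). A hedged sketch, assuming the 46 values are one window of 46 timesteps with a single feature each (the model and data below are stand-ins, not the poster's):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Stand-in for a trained forecasting model; its input_shape must match the reshaped window.
model = Sequential()
model.add(LSTM(32, input_shape=(46, 1)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')

window = np.arange(46, dtype='float32').reshape(1, 46)   # 2-D (1, 46): this is what triggers the error
window_3d = window.reshape((1, 46, 1))                   # 3-D: (samples=1, timesteps=46, features=1)
next_value = model.predict(window_3d)

# Iterative forecast: drop the oldest step, append the prediction, and predict again.
window_3d = np.concatenate([window_3d[:, 1:, :], next_value.reshape(1, 1, 1)], axis=1)
```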
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.
Hi, I have input data with three variables / dimensions and 4080 total samples. I am trying an RNN script (model=Sequential() ...) but I am getting this error:

ERROR: Error when checking input: expected gru_1_input to have 3 dimensions, but got array with shape (4080, 3)
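A hedged sketch of one way to get a (4080, 3) array into the 3-D shape the GRU expects; the data below is random stand-in data, and whether each row should be a single timestep or part of a longer sliding window depends on the actual dataset:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import GRU, Dense

X = np.random.random((4080, 3))            # stand-in for the poster's three-variable data
y = np.random.random((4080, 1))

# Simplest fix: treat each row as a single timestep with 3 features -> (4080, 1, 3).
X3d = X.reshape((X.shape[0], 1, X.shape[1]))

model = Sequential()
model.add(GRU(32, input_shape=(1, 3)))     # (timesteps, features); the sample count is excluded
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
model.fit(X3d, y, epochs=1, batch_size=32)

# If the 4080 rows are one long time series, building sliding windows of shape
# (n_windows, window_len, 3) is usually more appropriate than a single timestep.
```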
@wxs Don't you think there is something fishy in @DanHenry4's work? He is getting the same loss after each epoch and an accuracy that is always 1 (100%), which is close to impossible in most machine-learning predictions and especially in stock-price prediction. I am also getting a loss of 0.0, so I am confused; maybe I did something wrong. Please reply; I am using an LSTM for the first time, and seeing that accuracy makes me think I may be doing something wrong. Your guidance will be appreciated. Thanks,
If the matrix size is different between the test data and the data the model was trained on, what can I do? (Keras in R.) My results are really poor just because I have to add dummy columns to match the matrix size.
Hey everyone,
I'm trying to use custom data on the LSTM model, but it keeps giving shape errors. After reading some other issues along the same lines, I even tried reshaping the input data to size (nb_inputs, timesteps, 1), which looks approximately like (4200, 60, 1), but that returns an error saying a shape of (None, 4200, 60, 1) is no good. Any thoughts?
Output: