LSTM - Sequences with different num of time steps #85
Comments
You can, but you would have to pad the shorter sequences with zeros, since all inputs to Keras models must be tensors. Here's an example of how to do it: https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py#L46 Another solution would be to feed sequences to your model one sequence at a time (batch_size=1). Then differences in sequence lengths would be irrelevant.
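The padding step can be sketched in plain Python. This is a simplified stand-in for Keras's `pad_sequences` (which pads and truncates from the front by default), not the library's actual implementation:

```python
def pad_sequences(seqs, maxlen, value=0):
    """Left-pad (and left-truncate) each sequence to exactly maxlen,
    mimicking the default 'pre' behaviour of Keras's pad_sequences."""
    out = []
    for s in seqs:
        s = list(s)[-maxlen:]                        # keep the last maxlen steps
        out.append([value] * (maxlen - len(s)) + s)  # zero-fill the front
    return out

print(pad_sequences([[1, 2], [1, 2, 3, 4]], maxlen=3))
# [[0, 1, 2], [2, 3, 4]]
```

After this, every sequence has the same length and the batch can be packed into one tensor.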
Thank you for the quick answer. So after padding, how does the library know to ignore the padded values (and not use them in training)?
It doesn't know. But it learns to ignore them: in practice, sequence padding won't noticeably impact training. But if you're worried about it, you can always use batches of size 1.
In my own implementations I have been experimenting with output masks that set the error gradients to 0 for datapoints you don't want to train on, so those positions receive no updates. It would be a nice feature to have.
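The masking idea described in that comment can be illustrated with NumPy. `mask_gradients` is a hypothetical helper for the sketch, not a Keras API:

```python
import numpy as np

# Hypothetical sketch (not a Keras API): zero the error gradient at padded
# timesteps so those positions contribute nothing to the weight updates.
def mask_gradients(grads, mask):
    # grads: (timesteps, features); mask: (timesteps,) with 1 = real, 0 = padding
    return grads * mask[:, None]

grads = np.ones((4, 2))                 # pretend gradients for a 4-step sequence
mask = np.array([1.0, 1.0, 0.0, 0.0])   # last two steps are padding
masked = mask_gradients(grads, mask)
print(masked.sum())                     # 4.0: only the two real steps contribute
```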
There are two simple and commonly implemented ways of handling this: padding all sequences to a common length (ideally combined with masking), or bucketing sequences of similar length into separate batches.
@fchollet Actually I have a similar (maybe stupid-looking) question: when I check the imdb data, e.g. X_train[1] after applying pad_sequences(maxlen=100), I notice that the result only preserves the very last 100 words to make the sequence vector length 100. My question is: why not the first 100 words?
I faced the same problem. If I want to train with a batch size of 1, which function should I use? Thanks
@haoqi batch_size=1 in model.fit
@paipai880429 it's a matter of point of view. Usually the most salient information is stored at the end of a comment rather than at the beginning. In any case, you have to make a choice.
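Keras's `pad_sequences` exposes exactly this choice through its `truncating` argument ('pre' by default, which is why the last 100 words survive). A minimal sketch of the two behaviours:

```python
def truncate(seq, maxlen, truncating="pre"):
    # 'pre' (the Keras default) drops the beginning and keeps the last
    # maxlen elements; 'post' keeps the first maxlen elements instead.
    return seq[-maxlen:] if truncating == "pre" else seq[:maxlen]

words = [10, 11, 12, 13, 14]
print(truncate(words, 3))           # [12, 13, 14] -> the "last 100 words" case
print(truncate(words, 3, "post"))   # [10, 11, 12] -> the "first 100 words" case
```

Passing `truncating='post'` to the real `pad_sequences` keeps the first `maxlen` words instead.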
@fchollet Instead of using |
Hi, I'm planning to use the approach of "batch_size = 1" to allow for arbitrary input lengths. However, what dimensions should I use for the input_shape argument? For example:
@aabversteeg Have you figured out what to do in your case? I am also facing this problem.
I wasn't able to get anything to work, but I believe that to allow arbitrary sequence lengths you must supply None for the dimension that should be arbitrary. So in the above case you should replace maxlen with None.
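A minimal sketch of that approach, assuming the TensorFlow backend (layer sizes here are arbitrary): putting `None` in the timestep slot of `input_shape` lets every batch have its own length, as long as lengths agree within a batch.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# None in the timestep position => any sequence length is accepted
model = Sequential([
    LSTM(32, input_shape=(None, 8)),  # (timesteps=None, features=8)
    Dense(1),
])
model.compile(loss="mse", optimizer="adam")

# Batches may differ in length; lengths only have to agree within one
# batch, which batch_size=1 sidesteps entirely.
model.train_on_batch(np.random.rand(1, 5, 8), np.random.rand(1, 1))
model.train_on_batch(np.random.rand(1, 9, 8), np.random.rand(1, 1))
```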
Hi everyone. I am starting to learn about LSTMs and I have a small doubt: what is a "time step"? Is a time step the length of the sequence? I would really appreciate your answer.
A timestep is one step/element of a sequence. For example, each frame in a video is a timestep; the data for that timestep is the RGB picture at that frame.
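In shape terms (using the video example above, with hypothetical sizes):

```python
import numpy as np

# Hypothetical sizes: a batch of 2 videos, 5 frames (timesteps) each,
# where each timestep's data is a 64x64 RGB picture.
batch = np.zeros((2, 5, 64, 64, 3))
print(batch.shape[1])   # 5 -> the number of timesteps per sequence
```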
Hi everyone, I am working with video frames. My training data consists of video files of variable length, due to which the number of timesteps (i.e. the number of frames) in the video files is variable. Can you please help me with using an LSTM for such a scenario? Thanks
Hi! I think you could use sequence.pad_sequences(X_train, maxlen) from Keras. I had the same problem working with texts.
@fchollet I have images of different widths and a fixed height of 46px, 1000 samples in total. How should I define the input shape of my data for an LSTM layer using the functional API?
@Binteislam convert them to the same width and height. That will be the simplest for you.
@philipperemy Is there no other solution? If I convert them to a fixed width it destroys my data, as I am working on OCR, whereas if I pad them with black it wastes huge memory.
@Binteislam I'm not aware of a better solution. Or maybe you can feed them one by one (batch_size = 1), but that increases the computational time.
@philipperemy If I keep batch_size=1, how do I define the input shape?
@Binteislam Hi, I think you just need to set the timestep dimension to None. But I am not sure how you should formulate the training data in this case. Add each image to a list?
@Kevinpsk You cannot batch sequences of different lengths together. Other possibilities are padding everything to a common length, or feeding sequences one at a time (batch_size = 1).
@philipperemy Hi, yeah, I understand that after reading other people's posts. But in the case of video processing, each image at each timestep is 2D; how shall I specify the input shape then? Something like (batch_size, timesteps, input_dim) = (1, None, (height, width))? In that case, would I be training on video files of variable length one by one?
@Kevinpsk In your case I would advise you to have a look at Conv3D https://keras.io/layers/convolutional/ It's specifically done for handling videos the same way a regular Conv Net handles images. If you still want to stick to your LSTM, then |
@philipperemy |
@habdullah yes, it's possible and should work. The number of parameters of a recurrent network does not depend on the sequence length; it's only batching that prevents the use of different lengths together. In your case it should work well.
@philipperemy
@habdullah ok cool.
@patyork because the batch_size you pass as an argument is not the same as each bucket's maximum length?
@lemuriandezapada Hi, I'm facing the same problem and think your idea is the way to solve it. Could you kindly show how to code it? Thanks.
@LeZhengThu I'm not sure if this is still helpful or relevant, but I had the same problem, so I wrote a generic bucketed Sequence class for exactly this purpose. It is used as a generator and sped up training for me by orders of magnitude (~100x faster, since some sequences were very long but did not reflect the median sequence length).
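The bucketing idea can be sketched as a small generator. This is a simplification of the class described above, not its actual code; real implementations usually bucket by length *ranges* and pad within each bucket:

```python
from collections import defaultdict

def bucket_batches(sequences, batch_size):
    """Group sequences of equal length so that no batch needs padding."""
    buckets = defaultdict(list)
    for seq in sequences:
        buckets[len(seq)].append(seq)   # one bucket per sequence length
    for length in sorted(buckets):
        bucket = buckets[length]
        for i in range(0, len(bucket), batch_size):
            yield bucket[i:i + batch_size]

data = [[1], [2, 3], [4, 5], [6, 7, 8]]
print(list(bucket_batches(data, batch_size=2)))
# [[[1]], [[2, 3], [4, 5]], [[6, 7, 8]]]
```

Each yielded batch contains sequences of a single length, so it can be stacked into a tensor directly.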
Hi @fchollet, using batch_size=1 has some performance issues on GPU; it takes really long to train on sequences of variable length. Could you please advise? /Ashima
Sorry to bring this up again @fchollet, but I am having a problem with how to present the training data. I also want to analyse video. Let's say I have 100 videos with 5 frames each, so 500 frames total. How do I build the training data so I can feed a 5D tensor to my neural network? I suppose the input shape should be (nb_frames, nb_sequence, rows, cols, channels), where nb_frames is 500 (?) and nb_sequence is between 1 and 5 depending on the order of the frame in each video. Am I thinking correctly?
@habdullah I'm doing an LSTM encoder-decoder. If I set input shape = (None, 78), do you have any idea how to use RepeatVector(n) to make n match the actual shape[0] of the input dynamically?
How can we use a variable number of time steps per sample? The number of features for each timestep remains the same, e.g. x_train = [[[0.41668948]], [[0.65911036], ...], ...] where each sample has a different number of timesteps. People are saying to use padding, but I don't know how padding will solve this. What would the shape of the input array be then? I have tried using None in the input shape but it doesn't work.
@Deltaidiots, following your example: padding means adding a value that is not present in your data as a marker of "no data". For example, add a zero at the end of your first sequence.
This obviously works only if you can assume 0.0 is not a real value in your data. If your values cannot be negative, you could use a negative number as the padding value, or add one to all values and use zero. If there is no simple transformation that guarantees a value you can use as padding, you can increase the dimensionality of your data and use the extra dimension to create a value that is outside your data domain.
Then, Keras has a layer (Masking) to tell the network this explicitly.
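The layer meant here is `keras.layers.Masking`. What it computes can be sketched in NumPy: a timestep is skipped by downstream layers when all of its features equal `mask_value`:

```python
import numpy as np

# Sketch of what keras.layers.Masking(mask_value=0.0) computes: a timestep
# is masked out when ALL of its features equal mask_value.
def compute_mask(batch, mask_value=0.0):
    # batch: (samples, timesteps, features) -> boolean (samples, timesteps)
    return (batch != mask_value).any(axis=-1)

x = np.array([[[0.5], [0.2], [0.0], [0.0]]])   # last two steps are padding
print(compute_mask(x))   # [[ True  True False False]]
```

In a model you would place `Masking(mask_value=0.0)` right after the input, before the LSTM.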
Hello, if you are using batch_size = 1 and return_sequences = True, I think I read somewhere that the cell state is reset at every batch.
Hi,
Could you explain how this library handles sequences with different numbers of time steps? Specifically, can we have sequences with different numbers of time steps, and if so, where can one supply the length of the sequence?
Thank you!