Understanding Tensorflow/Tflearn LSTM input? #8
n_words here is your dictionary size: it means you have a total of 20000 words, so every sentence of your dataset can be parsed into a list of integers (word index ids) in [0, 20000). In NLP, an embedding is often used to turn such an example into a more meaningful representation, because these integers just represent 'words', and you have no way to compare them with each other (you can't say whether word '1' is greater than, less than, or anything else relative to word '2'). So, for example, with output_dim=3, [5, 3, 4] will be mapped to [[0.0, 1.2, 3.4], [2.5, 4.9, 0.4], [2.0, 5.2, 3.1]]. That representation is learned along with the model (so your model learns relations between words by itself). In your case, you may not need an embedding and can apply the LSTM directly to your features. You just need to shape your data as follows: [number of samples, timesteps, data_dimension], where timesteps is your sequence length.
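A minimal sketch of that mapping in TFLearn (the tiny sizes are only for illustration):

```python
import tflearn

# Each input row is one sentence: a fixed-length list of word-index ids
# drawn from a 20000-word dictionary, e.g. [5, 3, 4].
net = tflearn.input_data(shape=[None, 3])
# The embedding looks every id up in a learned 20000 x 3 table and replaces
# it with a 3-dim float vector, e.g. 5 -> [0.0, 1.2, 3.4]; the table is
# trained together with the rest of the model.
net = tflearn.embedding(net, input_dim=20000, output_dim=3)
# The tensor now has the [number of samples, timesteps, data_dimension]
# layout the LSTM expects: [None, 3, 3] here.
net = tflearn.lstm(net, 8)
```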
Thanks a lot, that cleared up a lot of my doubts. I have now built a model with the following structure, and I have two questions.
What does Y = nextbatch(train) return? X and Y should have the same number of samples, and Y should be a one-hot (binary) vector if you are using categorical_crossentropy. You can directly feed all your data X and labels Y; TFLearn will make the batches itself according to 'batch_size'. About the serialization: there was a mistake in TFLearn that is corrected now (#9).
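A sketch of that workflow with toy stand-in data (the 16 x 4096 shape is borrowed from the model discussed below; nothing here is the exact code from the thread):

```python
import numpy as np
import tflearn
from tflearn.data_utils import to_categorical

# Toy stand-ins -- in practice X and Y come from your own loader, and they
# must have the same number of rows (one label per sample).
X = np.random.rand(100, 16, 4096).astype(np.float32)
Y = np.random.randint(0, 2, 100)
Y = to_categorical(Y, nb_classes=2)  # one-hot labels for categorical_crossentropy

net = tflearn.input_data(shape=[None, 16, 4096])
net = tflearn.lstm(net, 256)
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, loss='categorical_crossentropy')

# No manual next_batch() loop is needed: hand over the whole dataset and
# TFLearn slices it into batches of `batch_size` itself.
model = tflearn.DNN(net)
model.fit(X, Y, batch_size=32, show_metric=True)
```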
Basically I have more than 3 million samples in the train set and around 500,000 samples in the test/val set. For example, in this native TensorFlow code
I see, your data is very large, so you probably can't fit it all in RAM. The best way for you is to use HDF5 to handle large datasets; it is compatible with TFLearn. Basically, it will load your data directly from disk instead of loading it all into RAM, so you can handle GBs of data without problems.
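A sketch of the usual h5py pattern (file and dataset names are placeholders; trainX, trainY, and model are assumed from the snippets above):

```python
import h5py

# One-time conversion: write the big arrays out as HDF5.
with h5py.File('train.h5', 'w') as f:
    f.create_dataset('X', data=trainX)
    f.create_dataset('Y', data=trainY)

# At training time, pass the open datasets straight to fit(); h5py reads
# slices from disk on demand instead of loading everything into RAM.
h5f = h5py.File('train.h5', 'r')
model.fit(h5f['X'], h5f['Y'], batch_size=128)
h5f.close()
```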
net = tflearn.input_data(shape=[None, 16, 4096], name='input') // 16 is time steps and 4096 is each instance vector/sample. This is not clear to me; can you provide a simple example? For instance, I have a CSV file with 100 rows and 10 columns, where the first column is the target and the other 9 columns are features. In this case, how do I replace 16 and 4096? net = tflearn.lstm(net, 256, return_seq=True) // why did you use 256?
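For what it's worth, one way to read the [samples, timesteps, data_dimension] convention for that CSV example (a sketch; whether each row is really a sequence depends on the data):

```python
import numpy as np
import tflearn

# 100 rows x 10 columns: column 0 is the target, columns 1-9 are features
# (random stand-in for the loaded CSV).
data = np.random.rand(100, 10)
Y = data[:, 0]
# One reading for this file: treat each row's 9 features as a sequence of
# 9 timesteps with 1 value per step.
X = data[:, 1:].reshape(100, 9, 1)

net = tflearn.input_data(shape=[None, 9, 1])
# The 256 in tflearn.lstm(net, 256) is just the number of hidden units in
# the LSTM cell -- a model-capacity choice, independent of the input shape.
net = tflearn.lstm(net, 256, return_seq=True)
```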
@aymericdamien How can I use the same input format [number of samples, timesteps, data_dimension] in tf.scan() for making a custom RNN? Any example of that?
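No tf.scan example appears in the thread, but the usual pattern is to flip the batch-major layout to time-major before scanning, since tf.scan iterates over axis 0. A minimal hand-rolled sketch (TF1-era API; the sizes are arbitrary):

```python
import tensorflow as tf

dim, hidden = 4096, 256
x = tf.placeholder(tf.float32, [None, 16, dim])   # [samples, 16, dim]
x_tm = tf.transpose(x, [1, 0, 2])                 # [16, samples, dim]

W = tf.Variable(tf.random_normal([dim, hidden]))
U = tf.Variable(tf.random_normal([hidden, hidden]))

def step(h_prev, x_t):
    # x_t is one timestep for the whole batch: [samples, dim]
    return tf.tanh(tf.matmul(x_t, W) + tf.matmul(h_prev, U))

h0 = tf.zeros([tf.shape(x)[0], hidden])           # initial hidden state
states = tf.scan(step, x_tm, initializer=h0)      # [16, samples, hidden]
last_state = states[-1]                           # feed this to a classifier
```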
I have some trouble understanding the LSTM. For simplicity, let's consider the example program.
I use TFLearn as a wrapper since it does all the initialization and other higher-level stuff automatically. Thank you for that. :-)
Up to line number 42,
net = tflearn.input_data([None, 200])
it's pretty clear what happens: you load a dataset into variables and pad everything to a standard length, in this case 200, for the input variables; the 2 classes present in this case are also converted to one-hot vectors. What I would like to know is how the LSTM takes the input, and across how many samples it predicts the output. Here, the lines
n_words=20000
net = tflearn.embedding(net, input_dim=20000, output_dim=128)
What do these parameters indicate?
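For context, the example program builds those lines in a pipeline roughly like this (trainX/trainY as loaded by the example; the comments give one reading of the parameters):

```python
import tflearn
from tflearn.data_utils import to_categorical, pad_sequences

# Every review is cut or padded to 200 word ids, and the 2 class labels
# become one-hot vectors.
trainX = pad_sequences(trainX, maxlen=200, value=0.)
trainY = to_categorical(trainY, nb_classes=2)

net = tflearn.input_data([None, 200])  # 200 word ids per sample
# input_dim=20000 -> one embedding row per dictionary word (n_words);
# output_dim=128  -> each word id becomes a learned 128-dim vector.
net = tflearn.embedding(net, input_dim=20000, output_dim=128)
```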
My goal is to replicate the activity recognition setup from the paper. For example, I would like to feed an fc6 4096-dim vector as input to the LSTM; the idea is to take 16 such vectors and then produce the classification result. I think the code would look like this, but I don't know how the input to the LSTM should be given.
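Something along these lines, as a sketch (the class count is a placeholder; layer sizes follow the discussion above):

```python
import tflearn

n_classes = 10  # placeholder -- set to the number of activity classes

# Each sample is a clip of 16 fc6 vectors, 4096 floats each, so the input
# X has shape [number of samples, 16, 4096]. The features are already
# dense vectors, so no embedding layer is needed.
net = tflearn.input_data(shape=[None, 16, 4096], name='input')
net = tflearn.lstm(net, 256)
net = tflearn.fully_connected(net, n_classes, activation='softmax')
net = tflearn.regression(net, loss='categorical_crossentropy')

model = tflearn.DNN(net)
# model.fit(X, Y) with X shaped [samples, 16, 4096] and Y one-hot labels
```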
Could anyone help me understand how to input data to the LSTM?