Understanding Tensorflow/Tflearn LSTM input? #8

Closed
ashwinnair14 opened this issue Apr 5, 2016 · 7 comments

Comments

@ashwinnair14

I have some trouble understanding LSTMs. For simplicity, let's consider the example program.

I use TFLearn as a wrapper since it does all the initialization and other higher-level stuff automatically. Thank you for that. :-)

Up to line 42, net = tflearn.input_data([None, 200]), it's pretty clear what happens: you load the dataset into variables and pad the input sequences to a standard length, in this case 200, and the 2 classes present are converted to one-hot vectors.

What I would like to know here is how the LSTM takes the input and across how many samples it predicts the output. Here n_words=20000 and net = tflearn.embedding(net, input_dim=20000, output_dim=128). What do these parameters indicate?

My goal is to replicate the activity recognition setup from the paper.

For example, I would like to feed an fc6 4096-dimensional vector into the LSTM, and the idea is to take 16 such vectors and then produce the classification result.

I think the code would look like this, but I don't know how the input to the LSTM should be given.
Could anyone help me understand how I feed data to the LSTM?

from __future__ import division, print_function, absolute_import

import tflearn
from tflearn.data_utils import to_categorical, pad_sequences
from tflearn.datasets import imdb

train, val = something.load_data()  # hypothetical loader for the activity dataset
trainX, trainY = train  # each X sample is a (16, 4096) float64 array
valX, valY = val        # each Y is a one-hot vector over 101 classes

net = tflearn.input_data([None, 16, 4096])
net = tflearn.embedding(net, input_dim=4096, output_dim=256)
net = tflearn.lstm(net, 256)
net = tflearn.dropout(net, 0.5)
net = tflearn.lstm(net, 256)
net = tflearn.dropout(net, 0.5)
net = tflearn.fully_connected(net, 101, activation='softmax')
net = tflearn.regression(net, optimizer='adam',
                         loss='categorical_crossentropy')

model = tflearn.DNN(net, clip_gradients=0., tensorboard_verbose=3)
model.fit(trainX, trainY, validation_set=(valX, valY), show_metric=True,
          batch_size=128, n_epoch=2, snapshot_epoch=True)
@aymericdamien
Member

What I would like to know here is how the LSTM takes the input and across how many samples it predicts the output. Here n_words=20000 and net = tflearn.embedding(net, input_dim=20000, output_dim=128). What do these parameters indicate?

n_words here is your dictionary size; it means that you have a total of 20000 words, so every sentence in your dataset can be parsed into a list of integers (word index ids) belonging to [0 : 20000].
For example: if your dictionary is ['I', 'hello', 'are', 'you', 'how'], parsing "how are you" will give [5, 3, 4]. The dictionary size here is 6 (because index 0 is reserved for parsing any word that isn't in your dictionary).

In NLP, an embedding is often used to turn such a sequence into a more meaningful representation, because these integers just represent 'words' and you have no way to compare them to each other (you can't say whether word '1' is greater than, less than, or otherwise related to word '2'). So, for example, [5, 3, 4] will be mapped (for output_dim=3) to [ [0.0, 1.2, 3.4], [2.5, 4.9, 0.4], [2.0, 5.2, 3.1] ]. That representation is learned along with the model (so your model will learn relations between words by itself).
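
As a toy illustration of that parsing (the dictionary and sentence below are made up, not part of the imdb example):

word_to_id = {'I': 1, 'hello': 2, 'are': 3, 'you': 4, 'how': 5}  # index 0 reserved for unknown words
vocab_size = len(word_to_id) + 1  # 6, as in the comment above

ids = [word_to_id.get(w, 0) for w in "how are you".split()]
print(ids)  # [5, 3, 4] -- the integer sequence that tflearn.embedding maps to a
            # [timesteps, output_dim] matrix of learned float vectors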

In your case, you may not need an embedding and can directly apply the LSTM to your features. You just need to shape your data as follows: [number of samples, timesteps, data_dimension], where timesteps is your sequence length.
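
For instance, a rough sketch (array names and sizes here are assumptions) of shaping fc6 features into that [number of samples, timesteps, data_dimension] layout:

import numpy as np

n_clips, timesteps, feat_dim = 100, 16, 4096  # assumed sizes for the activity data
frame_feats = np.random.rand(n_clips * timesteps, feat_dim).astype(np.float32)  # stand-in for real fc6 vectors

# group every 16 consecutive fc6 vectors into one sequence/sample
X = frame_feats.reshape(n_clips, timesteps, feat_dim)
print(X.shape)  # (100, 16, 4096) -> can be fed to tflearn.input_data([None, 16, 4096]) with no embedding layer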

@ashwinnair14
Author

Thanks a lot. That helped clear a lot of doubts I had.

I have now built a model with the following structure.

I have two questions.

  1. How do I input batches to the fit function? The total number of samples is in the 1 million range.
    For example, after each iteration, instead of moving along X and Y, can I do something like:

model.fit( X,Y=nextbatch(train), n_epoch=20, validation_set=(testX, testY=nextbatch(test)), show_metric=True,run_id='LSTM',snapshot_epoch=True)

  2. I think I found a problem with warnings during serialization. It is a closed issue in TensorFlow,
    and I'm already using the latest TensorFlow release. Would it be OK if I ignore these warnings?
WARNING:tensorflow:Error encountered when serializing layer_variables/LSTM/.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'NoneType' object has no attribute 'name'
WARNING:tensorflow:Error encountered when serializing layer_variables/LSTM_1/.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'NoneType' object has no attribute 'name'
WARNING:tensorflow:Error encountered when serializing layer_variables/LSTM/.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'NoneType' object has no attribute 'name'
WARNING:tensorflow:Error encountered when serializing layer_variables/LSTM_1/.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'NoneType' object has no attribute 'name'

X, Y = loadbatch_from_lists(train_files, train_classes)
# returns a batch of shape [24, 16, 4096], where 24 is the batch size, 16 is the number of time steps, and 4096 is the feature vector per sample
testX, testY = loadbatch_from_lists(test_files, test_classes)

net = tflearn.input_data(shape=[None, 16, 4096], name='input')
net = tflearn.lstm(net, 256, return_seq=True)
net = tflearn.dropout(net, 0.5)
net = tflearn.lstm(net, 256)
net = tflearn.dropout(net, 0.5)
net = tflearn.fully_connected(net, 101, activation='softmax')
net = tflearn.regression(net, optimizer='adam',
                         loss='categorical_crossentropy', name='target', learning_rate=0.001)
model = tflearn.DNN(net, tensorboard_verbose=3, tensorboard_dir=log_dir,
                    checkpoint_path='LSTM_model.tfl.ckpt')

model.fit(X, Y, n_epoch=20, validation_set=(testX, testY), show_metric=True,
          run_id='LSTM', snapshot_epoch=True)
model.save('LSTM_model.tfl')

@aymericdamien
Member

model.fit( X,Y=nextbatch(train), n_epoch=20, validation_set=(testX, testY=nextbatch(test)), show_metric=True,run_id='LSTM',snapshot_epoch=True)

What does Y=nextbatch(train) return? X and Y should have the same number of samples, and Y should be one-hot (binary) vectors if you are using categorical_crossentropy. You can directly feed all of your data X and labels Y; TFLearn will make batches itself according to batch_size.
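
A small self-contained sketch of that point, using random placeholder data rather than the real activity features:

import numpy as np
import tflearn
from tflearn.data_utils import to_categorical

X = np.random.rand(200, 16, 4096).astype(np.float32)     # the full training set (placeholder data)
Y = to_categorical(np.random.randint(0, 101, 200), 101)  # one-hot labels over 101 classes

net = tflearn.input_data(shape=[None, 16, 4096])
net = tflearn.lstm(net, 256)
net = tflearn.fully_connected(net, 101, activation='softmax')
net = tflearn.regression(net, optimizer='adam', loss='categorical_crossentropy')

model = tflearn.DNN(net)
model.fit(X, Y, n_epoch=1, batch_size=128, show_metric=True)  # TFLearn slices X/Y into 128-sample batches internally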

About the serialization: there was a mistake in TFLearn that is corrected now (#9).
You can update TFLearn:

pip uninstall tflearn
pip install git+https://github.com/tflearn/tflearn.git

@ashwinnair14
Author

Basically, I have more than 3 million samples in the train set and around 500,000 samples in the test/val set.
So I can't load the whole train/test dataset into local variables; I get a MemoryError.
I was wondering if I could call a function inside model.fit() that populates X, Y with a new batch.
For example: new_x, new_y = load_new_batch(Train)  # this function returns new_X with shape [number of samples, timesteps, data_dimension] and new_Y as the one-hot labels [batch_size, one_hot_labels]

For example, as in this native TensorFlow code:

batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# Fit training using batch data
sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys, keep_prob: dropout})

@aymericdamien
Member

I see, your data are very large, so you probably can't fit them in RAM. The best way for you is to use HDF5 to handle large datasets; it is compatible with TFLearn. Basically, it loads your data directly from disk instead of loading it into RAM, so you can handle GBs of data without problems.
You just need to save your data in HDF5 format (https://www.hdfgroup.org/HDF5/, http://www.h5py.org/).
One basic example that shows HDF5 compatibility with TFLearn: https://github.com/tflearn/tflearn/blob/master/examples/basics/use_hdf5.py (it is very easy, because you can just call fit(X, Y) and train on the whole dataset).
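
A minimal sketch of that, following the use_hdf5.py example linked above (the file and dataset names are assumptions, and the .h5 file is presumed to have been written beforehand with h5py):

import h5py
import tflearn

h5f = h5py.File('activity_dataset.h5', 'r')  # hypothetical HDF5 file created in a separate step
X, Y = h5f['X'], h5f['Y']                    # datasets of shape [n_samples, 16, 4096] and [n_samples, 101]

net = tflearn.input_data(shape=[None, 16, 4096], name='input')
net = tflearn.lstm(net, 256, return_seq=True)
net = tflearn.dropout(net, 0.5)
net = tflearn.lstm(net, 256)
net = tflearn.dropout(net, 0.5)
net = tflearn.fully_connected(net, 101, activation='softmax')
net = tflearn.regression(net, optimizer='adam', loss='categorical_crossentropy', name='target')

model = tflearn.DNN(net)
model.fit(X, Y, n_epoch=2, batch_size=128, show_metric=True)  # batches are read from disk, not held in RAM
h5f.close()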

@vinayakumarr

vinayakumarr commented May 25, 2016

net = tflearn.input_data(shape=[None, 16, 4096], name='input') // 16 is the number of time steps and 4096 is the feature vector per sample. This is not clear to me; can you provide a simple example? For example, I have a CSV file which has 100 rows and 10 columns, where each row has a target and 9 features. In this case, how do I replace 16 and 4096?

net = tflearn.lstm(net, 256, return_seq=True) // why did you use 256?
net = tflearn.dropout(net, 0.5) // why did you use 0.5?

@rajarsheem

@aymericdamien How can I use the same input format [number of samples, timesteps, data_dimension] with tf.scan() to build a custom RNN? Any example of that?
