
How to feed tf.hub output to LSTM #42

Closed
SachinIchake opened this issue Apr 26, 2018 · 6 comments

@SachinIchake

Hello There,

Since the tf.hub output is [?, 512] and an LSTM needs [Batch_Size, Time_Frame, 512], how do we feed the tf.hub output to an LSTM?

Thanks,
Sachin B. Ichake

@damienpontifex

I'd also like to do this for analysing text as a sequence. The Module google/nnlm-en-dim50-with-normalization/1 says it "preprocesses its input by removing punctuation and splitting on spaces", but the result is as described above: a single vector per input string. Are the results of the embedding lookup for each word then combined back into a single vector output for the sequence?

How can we get back the sequence of tokens with their word embeddings, rather than just the (batch_size, embedding_size) result?

@svsgoogle
Contributor

Yes, the results of the embedding lookup for each word are combined back into a single vector output for the sequence (phrase, sentence, etc.).

You can pass individual words to the module instead of phrases or sentences, and then you'll get the embedding vectors for the individual words that you can use/combine in any way you want. Some embeddings use n-grams in their vocabularies, so it's usually preferable to let the module do tokenization internally, but for the application you mention you might want to tokenize/combine yourself.
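For instance, here is a minimal sketch of the per-word approach (assuming the nnlm-en-dim50-with-normalization module from above, so each token maps to a 50-d vector; the variable names are illustrative):

import tensorflow as tf
import tensorflow_hub as hub

embed = hub.Module("https://tfhub.dev/google/nnlm-en-dim50-with-normalization/1")

# Pass a rank-1 tensor of individual tokens: the module returns one
# embedding per input string, i.e. shape (num_tokens, 50).
tokens = tf.constant("cat is on the mat".split())
token_vectors = embed(tokens)

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(token_vectors).shape)  # (5, 50)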

@damienpontifex

@svsgoogle I agree that we can preprocess ourselves, but the expected input shape is TensorShape([Dimension(None)]), which doesn't work when passing a batch of data to the module. Is there another way of passing a batched dataset?

Example

import tensorflow_hub as hub
import numpy as np
embed = hub.Module("https://tfhub.dev/google/nnlm-en-dim50-with-normalization/1")
data = np.asarray(["cat is on the mat".split(), "dog is in the fog".split()])
print(data.shape) # (2, 5) i.e. batch_size = 2, sequence_length = 5
embed(data)

Output

TypeError: Can't convert 'default': Shape TensorShape([Dimension(2), Dimension(5)]) is incompatible with TensorShape([Dimension(None)])

@vbardiovskyg
Contributor

Assuming that after preprocessing your string tensor is dense (it will need to be dense to feed into an LSTM anyway), you can reshape it to [-1] before passing it to the module, then reshape back:

import tensorflow as tf
import tensorflow_hub as hub

embed = hub.Module("https://tfhub.dev/google/nnlm-en-dim128/1")  # 128-d embeddings

words = tf.constant(["cat is on the mat".split(), "dog is in the fog".split()])
words = tf.reshape(words, [-1])           # flatten [2, 5] -> [10]
result = embed(words)                     # [10, 128]
result = tf.reshape(result, [2, 5, 128])  # the shape argument can also be built dynamically from tf.shape, tf.concat and [-1], as sketched below
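The dynamic variant mentioned in the comment could be built like this (a sketch, assuming `embed` is the module handle defined above and `words_2d` is the original [batch, seq_len] string tensor before flattening):

# Concatenate the dynamic [batch, seq_len] shape from tf.shape with [-1]
# so TensorFlow infers the embedding dimension; the reshape then works
# for any batch size and sequence length.
words_2d = tf.constant(["cat is on the mat".split(), "dog is in the fog".split()])
flat = tf.reshape(words_2d, [-1])
result = embed(flat)
result = tf.reshape(result, tf.concat([tf.shape(words_2d), [-1]], axis=0))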

@SachinIchake
Author

For now, I used the nnlm-en-dim128 embedding (https://www.tensorflow.org/hub/modules/google/nnlm-en-dim128/1), which gives me acceptable accuracy.

Thanks,
Sachin B. Ichake
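For completeness, a minimal sketch of that workaround: feeding whole sentences to nnlm-en-dim128 yields one 128-d vector per sentence, which can drive a plain feed-forward classifier without an LSTM (the example sentences are arbitrary):

import tensorflow as tf
import tensorflow_hub as hub

# The module tokenizes internally and combines the word embeddings,
# so each sentence comes back as a single 128-d vector.
embed = hub.Module("https://tfhub.dev/google/nnlm-en-dim128/1")
sentence_vectors = embed(["cat is on the mat", "dog is in the fog"])

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(sentence_vectors).shape)  # (2, 128)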

@damienpontifex

A small example of using this, based on @vbardiovskyg's code snippet:

import tensorflow as tf
import tensorflow_hub as hub

SEQ_LENGTH = 5
EMBEDDING_DIM = 50

with tf.Graph().as_default() as g:
  
  embed_layer = hub.Module(
    f"https://tfhub.dev/google/nnlm-en-dim{EMBEDDING_DIM}-with-normalization/1", 
    trainable=False, name='text_embedding')
  
  sentences = tf.placeholder(dtype=tf.string, shape=(None, SEQ_LENGTH))
  batch_size = tf.shape(sentences)[0]
  
  flat_sentences = tf.reshape(sentences, [-1])

  embeddings = embed_layer(flat_sentences)
  
  sentence_embedding = tf.reshape(embeddings, 
                                  [batch_size, SEQ_LENGTH, EMBEDDING_DIM])

  with tf.Session(graph=g) as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.tables_initializer())

    output = sess.run(sentence_embedding, feed_dict={
        sentences: [
            "cat is on the mat".split(), 
            "dog is in the fog".split(), 
            "padded sentence UNK UNK UNK".split()]
    })
    
    print(output.shape)
# (3, 5, 50)

You'd want better handling of padding/trimming sequences, but I normally do that with the tf.data API; a sketch follows.
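For example, a rough sketch of padding with tf.data (the "<PAD>" token and the generator setup are illustrative choices, not anything the module requires; sequences longer than SEQ_LENGTH would still need trimming, e.g. in a dataset.map):

SEQ_LENGTH = 5

# Pad each variable-length token sequence to SEQ_LENGTH with a sentinel
# string, then batch; padded_batch fills the shorter sequences.
token_lists = [s.split() for s in ["cat is on the mat", "dog barks"]]
dataset = tf.data.Dataset.from_generator(
    lambda: token_lists, output_types=tf.string, output_shapes=[None])
dataset = dataset.padded_batch(
    2, padded_shapes=[SEQ_LENGTH], padding_values=tf.constant("<PAD>"))
padded = dataset.make_one_shot_iterator().get_next()  # shape (2, SEQ_LENGTH)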
