How to feed tf.hub output to LSTM #42
Hello There,
Since the tf.hub output is [?, 512] and an LSTM needs [batch_size, time_frame, 512], how do we feed the tf.hub output to an LSTM?
Thanks,
Sachin B. Ichake
Comments
I'd also like to do this for analysing text as a sequence. The documentation for the module google/nnlm-en-dim50-with-normalization/1 says that it "preprocesses its input by removing punctuation and splitting on spaces", but the result is as described above. Are the results of the embedding lookup for each word then combined back into a single vector output for the sequence? How can we get back the sequence of per-token word embeddings rather than just a single combined vector?
Yes, the results of the embedding lookup for each word are combined back into a single vector output for the sequence (phrase, sentence, etc.). You can pass individual words to the module instead of phrases or sentences, and then you'll get the embedding vectors for the individual words, which you can use or combine in any way you want. Some embeddings use n-grams in their vocabularies, so it's usually preferable to let the module do tokenization internally, but for the application you mention you might want to tokenize and combine yourself.
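For example, a minimal sketch of the per-word lookup (TF1-style; the example tokens are just an illustration):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Passing individual tokens instead of whole sentences yields
# one embedding vector per word.
embed = hub.Module("https://tfhub.dev/google/nnlm-en-dim50-with-normalization/1")
word_embeddings = embed(["cat", "is", "on", "the", "mat"])

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(word_embeddings).shape)  # (5, 50)
```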
@svsgoogle I agree that we can preprocess ourselves, but the input shape the module expects is [None], a 1-D batch of strings. Example:

```python
import tensorflow_hub as hub
import numpy as np

embed = hub.Module("https://tfhub.dev/google/nnlm-en-dim50-with-normalization/1")
data = np.asarray(["cat is on the mat".split(), "dog is in the fog".split()])
print(data.shape)  # (2, 5) i.e. batch_size = 2, sequence_length = 5
embed(data)
```

The call to embed(data) fails with a shape error, because the module's signature expects a rank-1 string tensor rather than a [batch_size, sequence_length] one.
Assuming that after preprocessing your string tensor is a dense tensor (this will be needed to feed into an LSTM anyway), you can reshape it to [None] before passing it to the module, then reshape back:
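Something along these lines (a minimal sketch; the placeholder shape and the dimension constants are assumptions for illustration):

```python
import tensorflow as tf
import tensorflow_hub as hub

SEQ_LENGTH = 5
EMBEDDING_DIM = 50

embed = hub.Module("https://tfhub.dev/google/nnlm-en-dim50-with-normalization/1")
sentences = tf.placeholder(tf.string, shape=(None, SEQ_LENGTH))
batch_size = tf.shape(sentences)[0]

# Flatten to the rank-1 batch of tokens that the module expects ...
flat_tokens = tf.reshape(sentences, [-1])
flat_embeddings = embed(flat_tokens)
# ... then restore the [batch, time, features] layout for the LSTM.
sequence_embeddings = tf.reshape(
    flat_embeddings, [batch_size, SEQ_LENGTH, EMBEDDING_DIM])
```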
For now, I used the nnlm-en-dim128 embedding (https://www.tensorflow.org/hub/modules/google/nnlm-en-dim128/1), which gives me acceptable accuracy. Thanks.
A small example of using this, based on @vbardiovskyg's code snippet:

```python
import tensorflow as tf
import tensorflow_hub as hub

SEQ_LENGTH = 5
EMBEDDING_DIM = 50

with tf.Graph().as_default() as g:
    embed_layer = hub.Module(
        f"https://tfhub.dev/google/nnlm-en-dim{EMBEDDING_DIM}-with-normalization/1",
        trainable=False, name='text_embedding')
    sentences = tf.placeholder(dtype=tf.string, shape=(None, SEQ_LENGTH))
    batch_size = tf.shape(sentences)[0]
    # Flatten to the rank-1 batch of tokens the module expects.
    flat_sentences = tf.reshape(sentences, [-1])
    embeddings = embed_layer(flat_sentences)
    # Restore the [batch, time, features] shape an LSTM consumes.
    sentence_embedding = tf.reshape(
        embeddings, [batch_size, SEQ_LENGTH, EMBEDDING_DIM])

with tf.Session(graph=g) as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.tables_initializer())
    output = sess.run(sentence_embedding, feed_dict={
        sentences: [
            "cat is on the mat".split(),
            "dog is in the fog".split(),
            "padded sentence UNK UNK UNK".split()]
    })
    print(output.shape)  # (3, 5, 50)
```

You'd want better handling of padding/trimming sequences, though.
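From there, feeding the result into an LSTM is straightforward; a minimal sketch continuing from `sentence_embedding` above (these lines are an assumption, to be added inside the same graph context, and the cell size of 64 is an arbitrary choice):

```python
# sentence_embedding has shape [batch_size, SEQ_LENGTH, EMBEDDING_DIM],
# which is exactly the [batch, time, features] layout dynamic_rnn expects.
lstm_cell = tf.nn.rnn_cell.LSTMCell(num_units=64)
outputs, final_state = tf.nn.dynamic_rnn(
    lstm_cell, sentence_embedding, dtype=tf.float32)
# outputs has shape [batch_size, SEQ_LENGTH, 64], one output per time step.
```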