Hi, thanks for the example.

I'm wondering whether creating a Lambda layer makes the process less efficient, i.e. during each training epoch, does the model end up re-encoding the sentences repeatedly? Would it be more efficient to encode the data once up front? Or am I missing some benefit of the dynamic embedding approach implemented here? Maybe it matters when the embedding is trainable, but even then we could still do static encoding and use Keras's Embedding layer for training, no?
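For context, here is a minimal sketch of the two approaches being contrasted, assuming a TF2-style setup with `tensorflow_hub` and the Universal Sentence Encoder (the module URL and data are illustrative, not necessarily what the original example uses):

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Illustrative TF-Hub handle; the original example may use a different encoder.
USE_URL = "https://tfhub.dev/google/universal-sentence-encoder/4"

sentences = np.array(["a tiny example sentence", "another sentence"])
labels = np.array([0, 1])

# --- Dynamic encoding inside the graph ----------------------------------
# The encoder is part of the model, so every forward pass (every batch,
# every epoch) re-encodes the raw strings.
dynamic_model = tf.keras.Sequential([
    hub.KerasLayer(USE_URL, input_shape=[], dtype=tf.string, trainable=False),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
dynamic_model.compile(optimizer="adam", loss="binary_crossentropy")
dynamic_model.fit(sentences, labels, epochs=2)

# --- Static encoding up front --------------------------------------------
# Encode once, cache the vectors, and train only the downstream head.
embed = hub.load(USE_URL)
vectors = embed(sentences).numpy()  # shape: (num_sentences, 512)
static_model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid",
                          input_shape=(vectors.shape[1],)),
])
static_model.compile(optimizer="adam", loss="binary_crossentropy")
static_model.fit(vectors, labels, epochs=2)
```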
Never mind, I think I got it now. It really comes down to the training aspect: the Lambda layer makes sense when the embedding is a trainable part of the model graph, rather than a static lookup done up front. The outdated Keras example also contributed to the confusion.
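To illustrate that point, a hedged sketch under the same assumptions as above: when the encoder's weights are updated during training, the sentence representations change every step, so they cannot be precomputed once and the encoder has to run inside the graph.

```python
import tensorflow as tf
import tensorflow_hub as hub

USE_URL = "https://tfhub.dev/google/universal-sentence-encoder/4"  # illustrative handle

# trainable=True lets gradients flow into the encoder itself, so the
# embeddings are different after every optimizer step; a static lookup
# (or a plain Keras Embedding over precomputed vectors) cannot replace this.
finetune_model = tf.keras.Sequential([
    hub.KerasLayer(USE_URL, input_shape=[], dtype=tf.string, trainable=True),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
finetune_model.compile(optimizer="adam", loss="binary_crossentropy")
```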