
efficiency question #1

Closed

connectdotz opened this issue Jun 6, 2018 · 1 comment

connectdotz commented Jun 6, 2018

Hi, thanks for the example.

I'm wondering: does wrapping the encoder in a Lambda layer make the process less efficient? That is, during each training epoch, does the model end up re-encoding the sentences repeatedly? Would it be more efficient to encode the data once up front? Or maybe I'm missing some benefit of the dynamic embedding approach implemented here... perhaps it matters when the embedding is trainable? But even then, couldn't we still do static encoding and use Keras's Embedding layer for training, no?
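For concreteness, the "encode once up front" alternative I have in mind would look roughly like this (just a sketch, assuming the TF1-era tensorflow_hub API; the ELMo module URL and `train_sentences` are illustrative, not necessarily what this repo uses):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Illustrative module; any sentence encoder from TF Hub would do here.
elmo = hub.Module("https://tfhub.dev/google/elmo/2", trainable=False)

# Hypothetical training data, raw strings.
train_sentences = ["this movie was great", "terrible plot and acting"]

# Build the encoding op once, outside any Keras model.
embeddings = elmo(train_sentences, signature="default", as_dict=True)["default"]

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    # Encode once; every subsequent epoch trains on these fixed vectors.
    X_train = sess.run(embeddings)

# X_train can now feed a plain Keras model, but the encoder is frozen:
# no gradients ever reach the embedding weights.
```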

connectdotz (Author) commented

Never mind, I think I got it now. It really comes down to the training aspect: using the Lambda layer makes sense for a trainable model within the graph, as opposed to a static lookup encoding. The outdated Keras example also contributed to the confusion...
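For anyone who finds this later, the in-graph pattern I was asking about looks roughly like this (just a sketch, assuming the TF1-era tensorflow_hub API; the ELMo URL and the layer sizes are illustrative):

```python
import tensorflow as tf
import tensorflow_hub as hub
from keras import backend as K
from keras.layers import Input, Lambda, Dense
from keras.models import Model

sess = tf.Session()
K.set_session(sess)

# trainable=True puts the module's tunable weights into the graph,
# which is the whole point of the dynamic approach.
elmo = hub.Module("https://tfhub.dev/google/elmo/2", trainable=True)

def elmo_embedding(x):
    # Runs on every forward pass, so each batch is re-encoded
    # (the efficiency cost asked about above), but gradients can
    # now flow back into the encoder.
    return elmo(tf.squeeze(tf.cast(x, tf.string), axis=1),
                signature="default", as_dict=True)["default"]

input_text = Input(shape=(1,), dtype="string")
embedding = Lambda(elmo_embedding, output_shape=(1024,))(input_text)
dense = Dense(256, activation="relu")(embedding)
pred = Dense(1, activation="sigmoid")(dense)

model = Model(inputs=input_text, outputs=pred)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# The hub module's variables and lookup tables must be initialized
# before calling model.fit.
sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
```

One caveat I'd add: as far as I know, a Lambda layer does not register the module's variables as trainable weights of the Keras model, so truly fine-tuning the encoder end to end may require a custom Layer subclass. The sketch above mainly illustrates the per-batch re-encoding trade-off.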
