The documentation for `use_one_hot_embeddings` says: "Logical; whether to use one-hot word embeddings or `tf.embedding_lookup()` for the word embeddings."

I can get most of that from the name of the parameter. What's the difference between those two options?

Also, please update `model_fn_builder_EF` to inherit this param from `extract_features` (or vice versa?) so it's documented the same way in both places. Actually, `BertModel` also uses it, and it looks like its "home" is `embedding_lookup`. The documentation there is slightly different, but neither really helps me grok what the difference is, or when I'd want it to be `TRUE`.
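For context on what the two options actually compute: they produce the same embeddings; the difference is how the lookup is implemented (a gather vs. a one-hot matmul), which matters for performance on different hardware (the one-hot matmul route is the one typically preferred on TPUs). A minimal numpy sketch, not the actual BERT code, illustrating that the two paths are equivalent:

```python
import numpy as np

vocab_size, embed_dim = 8, 4
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(vocab_size, embed_dim))
ids = np.array([3, 1, 3, 7])  # token ids for a toy sequence

# Gather-style lookup: index directly into the table.
# This is what tf.embedding_lookup() effectively does.
gathered = embedding_table[ids]

# One-hot style: encode each id as a one-hot row, then matmul
# with the embedding table. Same result, different kernel.
one_hot = np.eye(vocab_size)[ids]          # shape (4, vocab_size)
via_matmul = one_hot @ embedding_table     # shape (4, embed_dim)

assert np.allclose(gathered, via_matmul)
```

So the parameter is a hardware/performance knob rather than a modeling choice; the outputs should match either way.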