[feature]support ragged tensor in all2all #428
Conversation
force-pushed from f4bb954 to c0a0e38
force-pushed from 090c093 to f44f0fd
```diff
# ...
def _get_processor(v):
  """The processor of v."""
  # ...
-  if isinstance(v, resource_variable_ops.TrainableWrapper):
+  if isinstance(v, tensorflow_recommenders_addons.dynamic_embedding.python.ops.embedding_variable.TrainableWrapper):
```
I have no more concerns except the naming here @jq
fixed
force-pushed from d130050 to 4157f74
force-pushed from 3e981d4 to f6d09a6
force-pushed from 8fe1de1 to 26c6ddf
LGTM
Description
- Support ragged tensors in the all2all `HvdAllToAllEmbedding` lookup.
- Refactor the embedding variable and introduce an `embedding_weights` interface, so that dynamic embedding variables, shadow variables, and hvd variables can all be supported easily in lookup and `safe_embedding_lookup_sparse`.
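The core idea behind exchanging a ragged tensor in an all-to-all is that variable-length rows can be decomposed into a flat value buffer plus per-row lengths, sent as two regular tensors, and reassembled on the receiver. A hedged pure-Python sketch of that decomposition (function names are illustrative, not the TFRA or Horovod API):

```python
# Hypothetical sketch: how ragged rows can be flattened into
# (values, row_lengths) for an all-to-all exchange and rebuilt
# afterwards. This mirrors RaggedTensor's values/row-partition
# representation conceptually; it is not the real implementation.

def ragged_to_flat(rows):
    """Flatten variable-length rows into (values, row_lengths)."""
    values = [x for row in rows for x in row]
    row_lengths = [len(row) for row in rows]
    return values, row_lengths

def flat_to_ragged(values, row_lengths):
    """Rebuild variable-length rows from flat values and row lengths."""
    rows, start = [], 0
    for n in row_lengths:
        rows.append(values[start:start + n])
        start += n
    return rows
```

In the actual lookup, each worker would exchange both buffers with peers, so the receiving side can reconstruct each sender's ragged structure before the embedding gather.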
Brief Description of the PR:
Fixes # (issue)
Type of change
Checklist:
How Has This Been Tested?
If you're adding a bugfix or new feature, please describe the tests that you ran to verify your changes: