
Support Keras APIs and fix save-restore issue in TF2 #119

Merged · 1 commit · Aug 19, 2021
Conversation

@Lifann (Member) commented Jul 17, 2021

Description

This PR makes the Keras APIs compatible with TFRA, and also fixes the save/restore issue in TensorFlow 2.x.

Thanks to @ccsquare for the suggestions on save/restore.

Main Features

1. Support object-based programming (see the sketch after this list).

2. Add a demo on the Amazon digital video games dataset using the object-based APIs.

3. Provide a fine-grained API to access non-explicitly defined `dynamic_embedding.Variable`s, and deprecate `dynamic_embedding.GraphKeys`.

4. Fix related bugs.
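Taken together, these features let a `de.Variable` live inside a Keras object and be saved and restored through TF2 object-based checkpoints. Below is a minimal sketch of that usage pattern, assuming TFRA's public `tfra.dynamic_embedding` API (`get_variable`, `embedding_lookup`); the layer and variable names are illustrative, not the code added by this PR.

```python
import tensorflow as tf
import tensorflow_recommenders_addons as tfra

de = tfra.dynamic_embedding


class DynamicEmbeddingLayer(tf.keras.layers.Layer):
  """Hypothetical layer wrapping a de.Variable as a tracked Keras object."""

  def __init__(self, dim, **kwargs):
    super().__init__(**kwargs)
    # An explicit name is given, as this PR requires in eager mode.
    self.params = de.get_variable(name="user_embeddings", dim=dim)

  def call(self, ids):
    return de.embedding_lookup(self.params, ids, name="user_lookup")


layer = DynamicEmbeddingLayer(dim=8)
_ = layer(tf.constant([1, 2, 3], dtype=tf.int64))

# Object-based save/restore through tf.train.Checkpoint, with no
# dependence on the deprecated GraphKeys collections.
ckpt = tf.train.Checkpoint(layer=layer)
path = ckpt.save("/tmp/de_ckpt")
ckpt.restore(path)
```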

@Lifann Lifann requested review from ccsquare and rhdong July 17, 2021 13:04
@Lifann Lifann changed the title Lifann/fix recreation in eager mode Support Keras APIs and fix save-restore issue in TF2 Jul 17, 2021
@rhdong rhdong requested a review from Mr-Nineteen July 20, 2021 11:42
@Lifann (Member, Author) commented Aug 3, 2021

Related issues: #44 #47 #115

@@ -771,7 +816,8 @@ def safe_embedding_lookup_sparse(
entry. Currently "mean", "sqrtn" and "sum" are supported, with "mean" the
default.
default_id: The id to use for an entry with no features.
name: A name for this operation (optional).
name: A name for this operation. Name is optional in graph mode and required
  in eager mode.
Member:

How to confirm the name is set in eager mode?


# TODO(Lifann) Use a more reasonable name.
def _create_trainable(trainable_name):
return de.TrainableWrapper(params,
Member:

Please check here: there is an assumption that 'TrainableWrapper' always appears in the TW name, and that the placer won't place the TW onto PSs, for multi-thread safety (device placement).

Member (Author):

Accepted. Will use ops.colocate_with to force the TW to reside with the ids.
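For reference, a minimal sketch of the colocation the author describes, applied to the snippet above. `ops.colocate_with` is TensorFlow's internal placement API; `params`, `ids`, `max_norm`, and `de` are assumed to come from the enclosing scope, and the completed argument list is an assumption rather than the PR's exact code.

```python
from tensorflow.python.framework import ops

def _create_trainable(trainable_name):
  # Keep the TrainableWrapper colocated with `ids` so the placer does not
  # put it onto a parameter server (multi-thread safety).
  with ops.colocate_with(ids, ignore_existing=True):
    return de.TrainableWrapper(params,
                               ids,
                               max_norm=max_norm,
                               name=trainable_name)
```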

@@ -81,10 +82,11 @@ def __init__(self, params, ids, max_norm, *args, **kwargs):
self.prefetch_values_op = None
self.model_mode = kwargs.get("model_mode")
kwargs.pop("model_mode")
self._slots_track = []
Member:

_slots_tracked or _tracked_slots may be better.

Member (Author):

Agree

@@ -443,7 +464,8 @@ def embedding_lookup(
params: A dynamic_embedding.Variable instance.
ids: A tensor with any shape as same dtype of params.key_dtype.
partition_strategy: Not used; kept for API compatibility with `nn.embedding_lookup`.
name: A name for the operation (optional).
name: A name for the operation. Name is optional in graph mode and required
in eager mode.
Member:

I think a check is needed to make sure the name is set in eager mode.

Member (Author):

It's checked when creating the TW. I've moved the check to the front of the function entry now.
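A sketch of the kind of entry-point guard described here; the helper name and error message are hypothetical, and only `tf.executing_eagerly()` is assumed from TensorFlow's public API.

```python
import tensorflow as tf

def _check_eager_name(name):
  # Hypothetical guard: eager mode has no graph to auto-generate a stable,
  # unique name, so an explicit `name` is required for reliable save/restore.
  if tf.executing_eagerly() and name is None:
    raise ValueError("`name` must be specified when running in eager mode.")
```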
