[Task] - Incorporate embeddings (pre-trained and encoder-based) into Merlin Training and Inference #471
Comments
Removing the milestone 22.09. This is not in the POR yet.
@benfred, @jperez999,
Edit: sorry, this comment is unrelated to this issue (Transformers4Rec); it relates to Merlin Models (TensorFlow). For small embedding tables (that fit in GPU memory), using a tensor initializer is the currently supported functionality in Merlin Models for pre-trained embedding tables. An example is in this notebook:
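To illustrate the tensor-initializer idea mentioned above, here is a minimal framework-agnostic sketch in plain NumPy: an embedding table whose weights are initialized from a pre-trained tensor, with lookup by item id. The class and variable names are illustrative, not Merlin Models API.

```python
import numpy as np

# Hypothetical pre-trained item embeddings, e.g. exported from another model.
# Shape: (vocab_size, embedding_dim) -- here 5 items with 4-dim vectors.
pretrained = np.arange(20, dtype=np.float32).reshape(5, 4)

class FrozenEmbedding:
    """Embedding table initialized from a pre-trained tensor (illustrative)."""

    def __init__(self, weights: np.ndarray, trainable: bool = False):
        # Copy so the original pre-trained tensor is never mutated.
        self.weights = weights.copy()
        self.trainable = trainable

    def __call__(self, item_ids: np.ndarray) -> np.ndarray:
        # Embedding lookup is just row indexing into the table.
        return self.weights[item_ids]

emb = FrozenEmbedding(pretrained)
batch = np.array([0, 3, 3])
out = emb(batch)
print(out.shape)  # (3, 4)
```

In a real TensorFlow model the same effect is achieved by passing the pre-trained tensor through an initializer when building the embedding layer, which is the path the notebook referenced above demonstrates.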
@MarkMoTrin When you say "encoder-based embeddings as additional features," does that imply encoding an item attribute (like "text description") using an external model (like BERT) and supplying the resulting embeddings as features for each item in a session? We're not talking about adding a single embedding as a session-level feature; we're talking about adding a list of embeddings as item-level features, yeah?
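The distinction in the question above can be made concrete with a small NumPy sketch: each item in a session gets its own encoded vector, so the feature is a list of embeddings of shape (session_length, embedding_dim), not one session-level vector. The encoder here is a deterministic stand-in (a hash-seeded projection), purely to show the data layout; a real setup would call an external model such as BERT.

```python
import zlib
import numpy as np

EMB_DIM = 8

def encode_text(description: str) -> np.ndarray:
    """Stand-in for an external encoder like BERT: a deterministic
    hash-seeded random projection, used only to illustrate shapes."""
    seed = zlib.crc32(description.encode("utf-8"))
    rng = np.random.default_rng(seed)
    return rng.standard_normal(EMB_DIM).astype(np.float32)

# A session is a sequence of item interactions; each item carries its own
# encoded attribute embedding, so the session feature is a *list* of
# vectors, one per item (item-level), not a single session-level vector.
session_items = ["red running shoes", "wool hiking socks", "trail backpack"]
item_embeddings = np.stack([encode_text(d) for d in session_items])
print(item_embeddings.shape)  # (3, 8): (session_length, embedding_dim)
```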
When the embedding tables are not huge and fit in GPU memory, the new
Problem:
Two or more customer teams would like to leverage encoder-based embeddings as additional features in Transformers4Rec. Currently, there is no clear example or path for passing in extra embeddings. This feature gap prevents teams from adopting Transformers4Rec in production.
Goal:
Constraints:
Starting Point: