User Story
As a recommender system engineer, I want to read large batches of tabular data from Parquet files efficiently, so that training performance of large deep recommenders can be improved.
Detailed requirements
It should integrate easily with existing Dataset-based data pipelines.
It should be optimized for extra-large batch sizes and take advantage of Parquet format features such as column selection, batch reading, and row group filtering.
It should be compatible with vanilla TensorFlow >= 1.14, < 2.0.
API Compatibility
Only new APIs should be introduced.
Willing to contribute
Yes