
Performance issue in /models/recommendation/tensorflow (by P3) #90

Open
DLPerf opened this issue Aug 29, 2021 · 2 comments

DLPerf commented Aug 29, 2021

Hello! I've found a performance issue in /wide_deep/inference/fp32/wide_deep_inference.py: dataset.batch(batch_size) (line 192) should be called before dataset.map(parse_csv, num_parallel_calls=5) (line 187), which could make your program more efficient.
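
For illustration, here is a minimal sketch of the suggested reordering, assuming a tf.data pipeline built from tf.data.TextLineDataset as in the script (the function and variable names are simplified, not copied from the repository):

import tensorflow as tf

def input_fn(data_file, batch_size, parse_csv):
    dataset = tf.data.TextLineDataset(data_file)
    # Current order: map each line individually, then batch.
    #   dataset = dataset.map(parse_csv, num_parallel_calls=5)
    #   dataset = dataset.batch(batch_size)
    # Suggested order: batch first, then run parse_csv once per batch.
    dataset = dataset.batch(batch_size)
    dataset = dataset.map(parse_csv, num_parallel_calls=5)
    return dataset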

Here is the TensorFlow documentation that supports this.

Besides, you need to check whether the function parse_csv called in dataset.map(parse_csv, num_parallel_calls=5) is affected, so that the changed code still works properly. For example, if parse_csv expects input with shape (x, y, z) before the fix, it would receive data with shape (batch_size, x, y, z) after the fix.
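
To make that point concrete, here is a hedged sketch of a parse function that already copes with batched input, assuming it is based on tf.io.decode_csv (the column names and defaults below are hypothetical; the real ones live in wide_deep_inference.py):

import tensorflow as tf

# Hypothetical columns and defaults, for illustration only.
CSV_COLUMNS = ["feature_a", "feature_b", "label"]
CSV_DEFAULTS = [[0.0], [0.0], [0]]

def parse_csv(value):
    # tf.io.decode_csv accepts either a single line (shape ()) or a batch of
    # lines (shape (batch_size,)), so the same function keeps working after
    # batch() is moved before map(); each decoded column then has shape
    # (batch_size,) instead of being a scalar.
    columns = tf.io.decode_csv(value, record_defaults=CSV_DEFAULTS)
    features = dict(zip(CSV_COLUMNS, columns))
    labels = features.pop("label")
    return features, labels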

Looking forward to your reply. By the way, I would be glad to create a PR to fix it if you are too busy.

dmsuehir (Contributor) commented

@DLPerf Thanks for bringing up the issue. If you'd like to create a PR, that would be great.

ashahba added a commit that referenced this issue Oct 16, 2021
Signed-off-by: Abolfazl Shahbazi <abolfazl.shahbazi@intel.com>
ashahba added a commit that referenced this issue Apr 1, 2022
Signed-off-by: Abolfazl Shahbazi <abolfazl.shahbazi@intel.com>
sramakintel (Contributor) commented

@DLPerf do you still need assistance with this issue?
