
parse_example can be _much_ faster than parse_single_example #390

Closed
skearnes opened this issue Dec 1, 2015 · 2 comments
Labels: stat:contribution welcome, type:docs-bug

Comments


skearnes commented Dec 1, 2015

FYI: I am working with Example protos for model input, and I am learning that using tf.parse_example to parse a (shuffled) batch of serialized examples is much faster than using tf.parse_single_example prior to batching. For my particular dataset, using parse_single_example lets me create feed_dicts with batch size 128 at about 100/min; batching the serialized Example protos and then using parse_example runs at around 3000/min.

You may want to update the documentation to suggest using tf.parse_example everywhere, as is suggested when using sparse input data.
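The contrast described above can be sketched as follows. This is a minimal illustration, not code from the issue; it uses the modern `tf.io` names (`tf.io.parse_example` / `tf.io.parse_single_example`), which correspond to the `tf.parse_example` / `tf.parse_single_example` discussed here, and an assumed single float feature named `"x"`.

```python
import tensorflow as tf

def make_serialized_example(value):
    # Build one serialized tf.train.Example with a single float feature "x".
    feature = {"x": tf.train.Feature(
        float_list=tf.train.FloatList(value=[value]))}
    return tf.train.Example(
        features=tf.train.Features(feature=feature)).SerializeToString()

# A batch of 128 serialized Example protos (batch size from the issue).
serialized = [make_serialized_example(float(i)) for i in range(128)]
feature_spec = {"x": tf.io.FixedLenFeature([1], tf.float32)}

# Slow path: parse each record individually, then assemble the batch.
slow = tf.stack(
    [tf.io.parse_single_example(s, feature_spec)["x"] for s in serialized])

# Fast path: hand the whole batch of serialized protos to parse_example,
# which parses them in one vectorized op.
fast = tf.io.parse_example(tf.constant(serialized), feature_spec)["x"]
```

In a tf.data input pipeline the same idea is `dataset.batch(batch_size)` followed by a `map` that calls `tf.io.parse_example` on the batched serialized strings, rather than mapping `tf.io.parse_single_example` over individual records before batching.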

@martinwicke added the type:docs-bug label Dec 2, 2015

vrv commented Dec 23, 2015

Want to send us a PR to fix?


hkxIron commented Nov 28, 2017

I agree that parse_example is faster than parse_single_example. In my case, throughput went from 25000 samples/second to 32000 samples/second.

darkbuck pushed a commit to darkbuck/tensorflow that referenced this issue Jan 23, 2020
5 participants