parse_example can be _much_ faster than parse_single_example #390

skearnes opened this Issue Dec 1, 2015 · 2 comments

skearnes commented Dec 1, 2015

FYI: I am working with Example protos for model input, and I am finding that using tf.parse_example to parse a (shuffled) batch of serialized examples is much faster than using tf.parse_single_example prior to batching. For my particular dataset, using parse_single_example lets me create feed_dicts with batch size 128 at about 100/min; batching the serialized Example protos and then using parse_example runs at around 3000/min.
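The speedup comes from amortizing per-call overhead: parsing one record per op invocation pays that fixed cost N times, while one batched call pays it once. A minimal pure-Python sketch of the same principle (this is an analogy using a toy fixed-width `struct` schema, not TensorFlow code; the real comparison is tf.parse_single_example vs tf.parse_example on serialized Example protos):

```python
import struct

# Hypothetical toy schema standing in for an Example proto:
# one int32 feature and one float32 feature per record.
RECORD = struct.Struct("<if")

def parse_single(buf):
    """Parse one serialized record -- per-record call overhead."""
    return RECORD.unpack(buf)

def parse_batch(batch_buf):
    """Parse a whole batch of concatenated records in one call."""
    return list(RECORD.iter_unpack(batch_buf))

# Serialize a batch of 128 toy records.
records = [RECORD.pack(i, i * 0.5) for i in range(128)]

one_at_a_time = [parse_single(r) for r in records]  # N parse calls
batched = parse_batch(b"".join(records))            # 1 parse call

assert one_at_a_time == batched  # same result, far fewer calls
```

In the TensorFlow pipeline this corresponds to batching the serialized strings first and handing the whole batch to tf.parse_example, rather than parsing each proto individually before batching.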

You may want to update the documentation to suggest using tf.parse_example everywhere, as is already suggested for sparse input data.

vrv commented Dec 23, 2015

Want to send us a PR to fix?



hkxIron commented Nov 28, 2017

I agree that parse_example is much faster than parse_single_example. In my case, throughput went from 25,000 samples/second to 32,000 samples/second.
