in-place transcode when output size is known #134
Closed
Proposal for #132
This is an initial proposal to tackle in-place decoding by providing a correctly-sized buffer.
It would be hugely beneficial for the Arrow format, which is the workhorse of modern data analytics (even Pandas is moving to it as a backend). I've done a quick benchmark and Arrow.jl is among the slowest parsers, especially with compressed files. Based on profiling, the time spent resizing buffers in `transcode()` is roughly the same as the decoding itself! Thanks to the Arrow IPC file specification, we always know the output size of a field; however, at the moment we cannot take advantage of it.
This PR defines a new function `transcode!` that writes into a user-provided buffer. Benefits are quantified below in the code snippet (e.g., 3x faster processing with large texts).
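For illustration, here is a minimal sketch of how the call site might look. The exact signature of `transcode!` and the use of `TranscodingStreams.Buffer` are assumptions for this example rather than settled API, and CodecZlib is used only as a concrete codec:

```julia
using TranscodingStreams, CodecZlib

data = rand(UInt8, 10_000)
compressed = transcode(ZlibCompressor, data)

# In the Arrow IPC case, the uncompressed length of a field is stored in
# the file metadata, so we can allocate the exact output size up front.
known_size = length(data)
output = TranscodingStreams.Buffer(Vector{UInt8}(undef, known_size))

# Proposed call (signature assumed): decode straight into the pre-sized
# buffer, avoiding the repeated resize!/copy cycles that the plain
# `transcode` path goes through while growing its output.
transcode!(output, ZlibDecompressor(), compressed)
```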
I haven't yet explored the failure modes; I just wanted to start the discussion.