I was unable to find where this is discussed, but I think we do not mention that 8-byte padding must not be applied when a buffer is compressed, as padding causes us to lose the size of the compressed buffer.
For example
```python
import pyarrow
import pyarrow.ipc

data = [
    pyarrow.array([1, 2, 3, 4, 5], type="int32"),
]
batch = pyarrow.record_batch(data, names=["f0"])
options = pyarrow.ipc.IpcWriteOptions(compression="zstd")
with pyarrow.OSFile("test1.arrow", "wb") as sink:
    with pyarrow.ipc.new_file(sink, batch.schema, options=options) as writer:
        writer.write(batch)
```
outputs a single data buffer of 37 bytes (padding would require 40 bytes).
My understanding is that we do not pad because doing so would make it impossible to recover the original size of the compressed data, and padding offers no advantage since users cannot mmap compressed data anyway.
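To illustrate why the compressed size must stay recoverable: per the Arrow IPC format, each compressed buffer is framed as an 8-byte little-endian uncompressed length followed by the compressed bytes, with no trailing padding, so the compressed payload is exactly `buffer_length - 8` bytes. The sketch below demonstrates that framing; `zlib` stands in for zstd/lz4 purely for illustration, and `frame`/`unframe` are hypothetical helper names, not Arrow APIs.

```python
import struct
import zlib

def frame(raw: bytes) -> bytes:
    # 8-byte little-endian uncompressed length, then the compressed bytes.
    # No padding is appended: padding would make the compressed size
    # ambiguous, since the buffer length would no longer equal 8 + payload.
    return struct.pack("<q", len(raw)) + zlib.compress(raw)

def unframe(buf: bytes) -> bytes:
    # The compressed payload is everything after the 8-byte prefix,
    # i.e. exactly len(buf) - 8 bytes.
    (uncompressed_len,) = struct.unpack("<q", buf[:8])
    out = zlib.decompress(buf[8:])
    assert len(out) == uncompressed_len
    return out

values = struct.pack("<5i", 1, 2, 3, 4, 5)  # 20 bytes of int32 data
framed = frame(values)
assert unframe(framed) == values
```

Under this framing, a reader can always recover both the compressed size (from the buffer length) and the uncompressed size (from the prefix), which is exactly what 8-byte padding would break.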
Reporter: Jorge Leitão / @jorgecarleitao
Note: This issue was originally created as ARROW-15687. Please see the migration documentation for further details.