Currently, when ByteString.readBytes is called, the data takes the path InputStream -> StreamDecoder's 4 kB buffer -> copy from StreamDecoder's buffer into a byte array on the heap. Can we change the path to read from the InputStream directly into the byte array on the heap? Also, instead of a byte array on the heap, can we use an off-heap byte array?
As the image shows, the arraycopy in StreamDecoder.readBytesSlowPath costs a lot of CPU. I want to improve this by avoiding the arraycopy. If there are other suggestions, I would appreciate them, thank you very much.
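The two read paths can be sketched with plain java.io (a simplified model of the behavior being described — StreamDecoder and readBytesSlowPath are internal protobuf classes and are not reproduced here):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class ReadPaths {
    // Current path (simplified): InputStream -> 4 kB intermediate buffer -> arraycopy to result.
    static byte[] readViaIntermediateBuffer(InputStream in, int size) throws IOException {
        byte[] buffer = new byte[4096];   // StreamDecoder-style scratch buffer
        byte[] result = new byte[size];
        int copied = 0;
        while (copied < size) {
            int n = in.read(buffer, 0, Math.min(buffer.length, size - copied));
            if (n < 0) throw new IOException("unexpected EOF");
            System.arraycopy(buffer, 0, result, copied, n); // the extra copy in question
            copied += n;
        }
        return result;
    }

    // Proposed path: read from the InputStream straight into the destination array.
    static byte[] readDirect(InputStream in, int size) throws IOException {
        byte[] result = new byte[size];
        int copied = 0;
        while (copied < size) {
            int n = in.read(result, copied, size - copied); // no intermediate copy
            if (n < 0) throw new IOException("unexpected EOF");
            copied += n;
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[10_000];
        Arrays.fill(data, (byte) 7);
        byte[] a = readViaIntermediateBuffer(new ByteArrayInputStream(data), data.length);
        byte[] b = readDirect(new ByteArrayInputStream(data), data.length);
        System.out.println(Arrays.equals(a, data) && Arrays.equals(b, data)); // prints "true"
    }
}
```

Both paths produce the same bytes; the second simply skips one pass over the data, which is why the arraycopy shows up as pure overhead in the profile.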
runzhiwang changed the title from "Zero copy when decode the message." to "How to zero copy when decode the message." and then to "How to zero copy when decode the message ?" on Sep 21, 2020, and finally to "How to support zero copy when decode the message ?" on Sep 22, 2020.
As explained in the comment you linked, it is unfortunately not possible to return the internal byte array directly: it must be copied to guarantee the ByteString's immutability.
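The immutability point can be seen with plain ByteBuffer: wrapping an array without copying lets later writes by the caller leak through, which is exactly what ByteString must prevent. (For callers who accept that risk, protobuf-java does provide escape hatches such as UnsafeByteOperations.unsafeWrap and CodedInputStream.enableAliasing, availability depending on your protobuf version; this sketch uses only the JDK.)

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class WhyCopy {
    public static void main(String[] args) {
        byte[] source = {1, 2, 3};

        // Zero-copy view over the caller's array vs. a defensive copy.
        ByteBuffer aliased = ByteBuffer.wrap(source);
        ByteBuffer copied = ByteBuffer.wrap(Arrays.copyOf(source, source.length));

        // The caller mutates the array after handing it over.
        source[0] = 99;

        System.out.println(aliased.get(0)); // 99: the "zero-copy" view silently changed
        System.out.println(copied.get(0));  // 1: the copy stayed immutable
    }
}
```

This is why the copy cannot simply be dropped: without it, a ByteString would no longer be a reliable immutable value.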
What language does this apply to?
proto3, java
Describe the problem you are trying to solve.
My request is similar to https://groups.google.com/g/protobuf/c/ZaDigptdcHM