Stop Allocating Buffers in CopyBytesSocketChannel #49825
Merged: original-brownbear merged 1 commit into elastic:master from original-brownbear:netty-smarter-buffering on Dec 4, 2019
Conversation
The way things currently work, we read up to 1M from the channel and then potentially force all of it into the `ByteBuf` passed by Netty. Since that `ByteBuf` tends to be `64k` in size by default, large reads will force the buffer to grow, completely circumventing the logic of `allocHandle`. This seems like it could break `io.netty.channel.RecvByteBufAllocator.Handle#continueReading`, since that method for the fixed-size allocator checks whether the last read was equal to the attempted read size. So if we set `64k` because that's what the buffer size is, then write `1M` to the buffer, we will stop reading on the IO loop even though the channel may still have bytes that we can read right away. More importantly, this can lead to running OOM quite easily under IO pressure, as we are forcing the heap buffers passed to the read to `reallocate`.

Closes elastic#49699
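A minimal Java sketch of the buffering change described above, under simplified assumptions: the real logic lives in Elasticsearch's `CopyBytesSocketChannel`, and the class, field, and method names here are hypothetical illustrations, not the actual patch.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

import io.netty.buffer.ByteBuf;

// Hypothetical illustration of the change; not the actual CopyBytesSocketChannel code.
final class CopyBytesReadSketch {

    // Reusable direct buffer for the actual socket read (up to 1M, as described above).
    private static final ByteBuffer IO_BUFFER = ByteBuffer.allocateDirect(1024 * 1024);

    /**
     * Reads from the socket into {@code byteBuf} without ever growing it.
     * Previously, a full 1M read could be forced into the 64k ByteBuf,
     * triggering a reallocation; capping the read at writableBytes() avoids that.
     */
    static int readFromChannel(SocketChannel channel, ByteBuf byteBuf) throws IOException {
        ByteBuffer ioBuffer = IO_BUFFER.duplicate();
        // Key idea: never attempt to read more than the destination can hold.
        ioBuffer.limit(Math.min(ioBuffer.capacity(), byteBuf.writableBytes()));
        int bytesRead = channel.read(ioBuffer);
        if (bytesRead > 0) {
            ioBuffer.flip();
            byteBuf.writeBytes(ioBuffer); // fits by construction, no reallocation
        }
        return bytesRead;
    }
}
```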
original-brownbear added the :Distributed/Network (Http and internode communication implementations) and team-discuss labels on Dec 4, 2019
Pinging @elastic/es-distributed (:Distributed/Network)
Tim-Brooks approved these changes on Dec 4, 2019
LGTM
Thanks Tim!
original-brownbear added a commit to original-brownbear/elasticsearch that referenced this pull request on Dec 4, 2019
original-brownbear added a commit that referenced this pull request on Dec 4, 2019
Stop Allocating Buffers in CopyBytesSocketChannel (#49825)
SivagurunathanV pushed a commit to SivagurunathanV/elasticsearch that referenced this pull request on Jan 23, 2020
Labels
:Distributed/Network (Http and internode communication implementations)
>non-issue
v7.6.0
v8.0.0-alpha1
Marked as draft because this is rather an illustration of the issue than the fix I'd like to see here.

The problem with this fix is that it effectively moves us to a `64k` read buffer instead of a `1M` read buffer (with default settings). This may be a price we can pay for the positive memory effects of reading in smaller chunks without allocations on the IO loop, but it's not great.

The way things currently work, we read up to 1M from the channel and then potentially force all of it into the `ByteBuf` passed by Netty. Since that `ByteBuf` tends to be `64k` in size by default, large reads will force the buffer to grow, completely circumventing the logic of `allocHandle` and causing two problems:

1. (This one could simply be fixed by setting the number of bytes in `ioBuffer` as the attempted read size if it's larger than the capacity of the `ByteBuf` ...) This seems like it could break `io.netty.channel.RecvByteBufAllocator.Handle#continueReading`, since that method for the fixed-size allocator checks whether the last read was equal to the attempted read size. So if we set `64k` because that's what the buffer size is, then write `1M` to the buffer, we will stop reading on the IO loop even though the channel may still have bytes that we can read right away. See the sketch of this check after this list.
2. More importantly, this can lead to running OOM quite easily under IO pressure, as we are forcing the heap buffers passed to the read to `reallocate`. With the current default chunk size of 16M, that means potentially allocating `16M` of buffers without the circuit breaker knowing about it, simply to read ahead from the channel (when reading messages one-by-one would have cost zero additions to the pool).

Relates #49699
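For reference, a paraphrased sketch of the stop condition in point 1: this is a simplification of how a fixed-size `RecvByteBufAllocator` handle in Netty decides whether to keep reading, not its exact upstream source, and the class and parameter names are assumptions.

```java
// Simplified, hypothetical rendering of the continueReading() decision for a
// fixed-size RecvByteBufAllocator handle (not Netty's exact source).
final class FixedAllocatorSketch {
    static boolean continueReading(int attemptedBytesRead, int lastBytesRead,
                                   int totalMessages, int maxMessagesPerRead) {
        // Reading continues only while each read filled the attempted size.
        // If attemptedBytesRead was recorded as 64k (the ByteBuf's capacity)
        // but the buffer was then grown and 1M written into it, lastBytesRead
        // no longer equals attemptedBytesRead and the IO loop stops reading,
        // even though the channel may still have bytes immediately available.
        return lastBytesRead == attemptedBytesRead && totalMessages < maxMessagesPerRead;
    }
}
```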