Chunk direct buffer usage by networking layer #7811
Conversation
Today, due to how netty works (on both the http and transport layers), even though the buffers handed to netty are paged (CompositeChannelBuffer), netty ends up re-copying the whole buffer into another heap buffer (bad), and then hands it directly to sun.nio, which allocates a full thread-local direct buffer to send it (and may do so repeatedly if the whole message is not sent in one write). This is problematic for very large messages: aside from the temporary extra heap usage, the large direct buffers stay around and are not released by the JVM. This change forces the use of gathering when building a CompositeChannelBuffer, which makes netty use the sun.nio write method that accepts an array of ByteBuffer (so no extra heap copy), and also reduces the amount of direct memory allocated for large messages. See the doc on NettyUtils#DEFAULT_GATHERING for more info. Closes elastic#7811
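To illustrate what a gathering write is, here is a minimal, self-contained sketch using plain java.nio (not netty): a `GatheringByteChannel` accepts an array of `ByteBuffer`s in one call, so separate pages never need to be copied into a single contiguous buffer first. The class name and the two-page setup are illustrative, not taken from the PR.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class GatheringWriteDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("gather", ".bin");
        // Two separate "pages", loosely analogous to the pages inside a
        // CompositeChannelBuffer.
        ByteBuffer[] pages = {
            ByteBuffer.wrap("hello ".getBytes()),
            ByteBuffer.wrap("world".getBytes()),
        };
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            // FileChannel implements GatheringByteChannel: one write call over
            // the whole array, with no intermediate heap copy by the caller.
            long written = ch.write(pages);
            System.out.println("wrote " + written + " bytes");
        }
        System.out.println(new String(Files.readAllBytes(tmp)));
        Files.delete(tmp);
    }
}
```

The same `write(ByteBuffer[])` shape exists on socket channels, which is the path this change steers netty onto for large composite buffers.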
Force-pushed from 88914ec to ecf8cee
 * <p/>
 * Note, on the read size of netty, it uses a single direct buffer that is defined in both the transport
 * and http configuration (based on the direct memory available), and the upstream handlers (SizeHeaderFrameDecoder,
 * or more specifically the FrameDecoder base class) makes sure to use a cumolation buffer and not copy it
s/cumolation/accumulation/ ?
it's actually named cumulation buffer in netty, will do the s/o/u
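For context on what that cumulation buffer does: a FrameDecoder-style handler appends each partial network read to one accumulating buffer until a complete length-prefixed frame is available. The sketch below is a hypothetical stand-in (a 4-byte size header plus payload, illustrative names), not netty's actual FrameDecoder API.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CumulationSketch {
    private ByteArrayOutputStream cumulation = new ByteArrayOutputStream();

    // Feed one chunk as it arrives off the wire; return any complete frames.
    List<String> feed(byte[] chunk) {
        cumulation.write(chunk, 0, chunk.length);
        List<String> frames = new ArrayList<>();
        ByteBuffer buf = ByteBuffer.wrap(cumulation.toByteArray());
        while (buf.remaining() >= 4) {
            buf.mark();
            int len = buf.getInt();
            if (buf.remaining() < len) {
                buf.reset();          // frame incomplete: wait for more bytes
                break;
            }
            byte[] payload = new byte[len];
            buf.get(payload);
            frames.add(new String(payload));
        }
        // Keep only the undecoded tail as the new cumulation buffer.
        byte[] rest = new byte[buf.remaining()];
        buf.get(rest);
        cumulation = new ByteArrayOutputStream();
        cumulation.write(rest, 0, rest.length);
        return frames;
    }

    public static void main(String[] args) {
        CumulationSketch d = new CumulationSketch();
        // One frame (size header 5 + "hello"), delivered in two partial reads.
        byte[] full = ByteBuffer.allocate(9).putInt(5).put("hello".getBytes()).array();
        System.out.println(d.feed(Arrays.copyOfRange(full, 0, 6)));  // incomplete
        System.out.println(d.feed(Arrays.copyOfRange(full, 6, 9)));  // frame done
    }
}
```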
@@ -53,25 +95,12 @@ public InternalLogger newInstance(String name) {
        });

        ThreadRenamingRunnable.setThreadNameDeterminer(ES_THREAD_NAME_DETERMINER);

        DEFAULT_GATHERING = Booleans.parseBoolean(System.getProperty("es.netty.gathering"), true);
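As a rough sketch of the pattern in that diff line: read an escape-hatch flag from a JVM system property, defaulting to enabled when unset. This stand-in uses `java.lang.Boolean` semantics and is not the actual Elasticsearch `Booleans` helper.

```java
public class GatheringFlag {
    // Parse a boolean property value, falling back to a default when unset.
    static boolean parseBoolean(String value, boolean defaultValue) {
        if (value == null) {
            return defaultValue;  // property not set -> use the default
        }
        return Boolean.parseBoolean(value);
    }

    public static void main(String[] args) {
        // Unset property defaults to true, mirroring es.netty.gathering.
        System.out.println(parseBoolean(System.getProperty("es.netty.gathering"), true));
        // An explicit value overrides the default.
        System.out.println(parseBoolean("false", true));
    }
}
```

Running with `-Des.netty.gathering=false` would flip the first result, which is what makes the property a usable kill switch for the new behavior.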
any chance we can randomize this?
I left some comments but in general LGTM
LGTM - ran tests on java 8 and java 7, +1 to push