
Chunk direct buffer usage by networking layer #7811

Closed
wants to merge 3 commits into from

Conversation

@kimchy (Member) commented Sep 21, 2014

Today, due to how netty works (on both the http and transport layers), and even though the buffers handed to netty are paged (CompositeChannelBuffer), netty ends up re-copying the whole buffer into another heap buffer (bad), and then hands it directly to sun.nio, which allocates a thread-local direct buffer of the full message size to send it (and this can be repeated if the whole message is not sent in one write).
This is problematic for very large messages: aside from the extra temporary heap usage, the large direct buffers stay around and are not released by the JVM.
This change forces the use of gathering when building a CompositeChannelBuffer, which makes netty use the sun.nio write method that accepts an array of ByteBuffer (so no extra heap copying), and also reduces the amount of direct memory allocated for large messages.
See the doc on NettyUtils#DEFAULT_GATHERING for more info.
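For readers unfamiliar with gathering writes: the JDK's GatheringByteChannel.write(ByteBuffer[]) sends an array of buffers in one operation, with no intermediate copy into a single large buffer. A minimal illustration using a FileChannel (plain java.nio, not netty or elasticsearch code; GatheringWriteDemo and gatherToFile are names invented for this sketch):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class GatheringWriteDemo {
    // Gathering write: hand the channel an array of buffers instead of
    // copying them all into one big buffer first.
    public static void gatherToFile(Path path, byte[]... parts) throws IOException {
        ByteBuffer[] srcs = new ByteBuffer[parts.length];
        for (int i = 0; i < parts.length; i++) {
            srcs[i] = ByteBuffer.wrap(parts[i]); // wraps the existing arrays, no copy
        }
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.CREATE,
                StandardOpenOption.WRITE, StandardOpenOption.TRUNCATE_EXISTING)) {
            long remaining = 0;
            for (ByteBuffer b : srcs) remaining += b.remaining();
            // A single write may be partial, so loop until every buffer is drained.
            while (remaining > 0) {
                remaining -= ch.write(srcs); // GatheringByteChannel.write(ByteBuffer[])
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("gather", ".bin");
        gatherToFile(tmp,
                "hello ".getBytes(StandardCharsets.UTF_8),
                "gathering ".getBytes(StandardCharsets.UTF_8),
                "world".getBytes(StandardCharsets.UTF_8));
        String out = new String(Files.readAllBytes(tmp), StandardCharsets.UTF_8);
        if (!out.equals("hello gathering world")) {
            throw new AssertionError("unexpected contents: " + out);
        }
        Files.delete(tmp);
        System.out.println("ok");
    }
}
```

The same shape is what the PR pushes netty toward: the paged components of a CompositeChannelBuffer become the ByteBuffer[] array, so no component ever has to be merged into one large heap or direct buffer.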

Chunk direct buffer usage by networking layer
closes #7811
* <p/>
* Note, on the read side of netty, it uses a single direct buffer that is defined in both the transport
* and http configuration (based on the direct memory available), and the upstream handlers (SizeHeaderFrameDecoder,
* or more specifically the FrameDecoder base class) makes sure to use a cumolation buffer and not copy it

@nik9000 (Contributor) Sep 22, 2014
s/cumolation/accumulation/ ?

@kimchy (Author, Member) Sep 22, 2014
it's actually named cumulation buffer in netty, will do the s/o/u

@@ -53,25 +95,12 @@ public InternalLogger newInstance(String name) {
});

ThreadRenamingRunnable.setThreadNameDeterminer(ES_THREAD_NAME_DETERMINER);

DEFAULT_GATHERING = Booleans.parseBoolean(System.getProperty("es.netty.gathering"), true);

@s1monw (Contributor) Sep 22, 2014
any chance we can randomize this?
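One way the randomization suggested above could look: DEFAULT_GATHERING is read from the es.netty.gathering system property with a default of true, so a test could set that property to a random value before the networking layer initializes. A sketch under those assumptions (GatheringFlag is an invented class name, and parseBoolean here only mirrors the assumed null-means-default behaviour of elasticsearch's Booleans.parseBoolean):

```java
import java.util.Random;

public class GatheringFlag {
    // Mirrors the assumed behaviour of Booleans.parseBoolean(value, defaultValue):
    // an absent (null) property falls back to the default, which the diff sets to true.
    public static boolean parseBoolean(String value, boolean defaultValue) {
        return value == null ? defaultValue : Boolean.parseBoolean(value.trim());
    }

    public static void main(String[] args) {
        // Unset property: gathering defaults to on.
        boolean byDefault = parseBoolean(System.getProperty("es.netty.gathering"), true);
        System.out.println("default gathering=" + byDefault);

        // Randomizing the flag for a test run, as suggested in review:
        boolean randomized = new Random().nextBoolean();
        System.setProperty("es.netty.gathering", String.valueOf(randomized));
        boolean gathering = parseBoolean(System.getProperty("es.netty.gathering"), true);
        if (gathering != randomized) {
            throw new AssertionError("flag did not round-trip");
        }
        System.out.println("randomized gathering=" + gathering);
    }
}
```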

* <p/>
* When using the socket or file channel API to write or read using heap ByteBuffer, the sun.nio
* package will convert it to a direct buffer before doing the actual operation. The direct buffer is
* cached on an array of buffers under the nio.ch.Util$BufferCache un a thread local.

@s1monw (Contributor) Sep 22, 2014
s/un a thread/in a thread/

* SocketSendBufferPool#DEFAULT_PREALLOCATION_SIZE (64kb), it will just convert the ChannelBuffer
* to a ByteBuffer and send it. The problem is, that then same size DirectByteBuffer will be
* allocated (or reused) and kept around on a thread local in the sun.nio BufferCache. If very
* large buffer is sent, imagine a 10mb one, then a 10mb direct buffer will be allocated as an

@s1monw (Contributor) Sep 22, 2014
what could possibly be wrong with this :)
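The scenario that doc excerpt describes: a single 10mb ChannelBuffer turns into a 10mb cached direct buffer per writer thread. Keeping each component of the composite buffer small caps the size of every individual write, and therefore of any direct buffer the JDK caches for it. A sketch of that chunking idea (ChunkedBuffers and chunk are invented names, not the PR's actual code; the 64kb bound echoes SocketSendBufferPool#DEFAULT_PREALLOCATION_SIZE from the excerpt):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class ChunkedBuffers {
    // Split one large buffer into views no bigger than maxChunk bytes.
    // Each view shares the same backing memory, so no payload bytes are copied;
    // any direct buffer the JDK caches per write is then capped at maxChunk.
    public static List<ByteBuffer> chunk(ByteBuffer src, int maxChunk) {
        List<ByteBuffer> chunks = new ArrayList<>();
        ByteBuffer dup = src.duplicate(); // independent position/limit, shared contents
        while (dup.hasRemaining()) {
            int len = Math.min(maxChunk, dup.remaining());
            ByteBuffer slice = dup.slice();
            slice.limit(len);
            chunks.add(slice);
            dup.position(dup.position() + len);
        }
        return chunks;
    }

    public static void main(String[] args) {
        int sixtyFourKb = 64 * 1024;
        ByteBuffer big = ByteBuffer.allocate(10 * 1024 * 1024); // the "10mb message"
        List<ByteBuffer> chunks = chunk(big, sixtyFourKb);
        long total = chunks.stream().mapToLong(ByteBuffer::remaining).sum();
        if (total != big.capacity()) throw new AssertionError("bytes lost in chunking");
        for (ByteBuffer b : chunks) {
            if (b.remaining() > sixtyFourKb) throw new AssertionError("chunk too large");
        }
        System.out.println(chunks.size() + " chunks"); // 160 chunks of 64kb each
    }
}
```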

* channel buffer is composite, it will use the correct gathering flag. See more
* at {@link NettyUtils#DEFAULT_GATHERING}.
*/
public class XHttpResponseEncoder extends HttpResponseEncoder {

@s1monw (Contributor) Sep 22, 2014
is this class intended to go away ? if not I'd call it EsHttpResponseEncoder

@kimchy (Author, Member) Sep 23, 2014
will change

* Validates that all the thread local allocated ByteBuffer in sun.nio under the Util$BufferCache
* are not greater than 1mb.
*/
private void validateNoLargeDirectBufferAllocated() throws Exception {

@s1monw (Contributor) Sep 22, 2014
nice!

@s1monw (Contributor) commented Sep 22, 2014

I left some comments but in general LGTM

@s1monw (Contributor) commented Sep 23, 2014

LGTM - ran tests on java 8 and java 7, +1 to push

@s1monw s1monw removed the review label Sep 23, 2014

@kimchy kimchy closed this in d4d77cd Sep 23, 2014

kimchy added a commit that referenced this pull request Sep 23, 2014

Chunk direct buffer usage by networking layer
closes #7811

kimchy added a commit that referenced this pull request Sep 23, 2014

Chunk direct buffer usage by networking layer
closes #7811

kimchy added a commit that referenced this pull request Sep 23, 2014

Chunk direct buffer usage by networking layer
closes #7811

@kimchy kimchy deleted the kimchy:netty_gathering branch Sep 23, 2014

@clintongormley clintongormley changed the title Chunk direct buffer usage by networking layer Internal: Chunk direct buffer usage by networking layer Sep 26, 2014

@bleskes bleskes referenced this pull request Nov 3, 2014

Closed

Shard UNASSIGNED #8326

@clintongormley clintongormley changed the title Internal: Chunk direct buffer usage by networking layer Chunk direct buffer usage by networking layer Jun 7, 2015

mute pushed a commit to mute/elasticsearch that referenced this pull request Jul 29, 2015

Chunk direct buffer usage by networking layer
closes elastic#7811

mute pushed a commit to mute/elasticsearch that referenced this pull request Jul 29, 2015

Chunk direct buffer usage by networking layer
closes elastic#7811