OutOfMemoryError: Direct buffer memory when using EPOLL transport #4275

Closed
myroch opened this issue Sep 25, 2015 · 9 comments

myroch commented Sep 25, 2015

Netty version: 4.0.31.Final

Context:
I encountered this exception when using the EPOLL transport on my server. Other transports are not affected. Removing the MaxDirectMemorySize JVM option does not resolve the issue (it just takes longer to hit it). I'm using a simple ByteToMessageDecoder subclass which just checks for enough bytes and creates my proprietary object from them. Another handler in the pipeline consumes this object. So I'm basically not dealing with ByteBuf allocations/releases; I'm only reading from the buffer in my ByteToMessageDecoder. The server only reads, it doesn't write anything back.

io.netty.handler.codec.DecoderException: java.lang.OutOfMemoryError: Direct buffer memory
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:234)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
        at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:803)
        at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:346)
        at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:254)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Direct buffer memory
        at java.nio.Bits.reserveMemory(Bits.java:658)
        at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
        at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
        at io.netty.buffer.UnpooledUnsafeDirectByteBuf.allocateDirect(UnpooledUnsafeDirectByteBuf.java:108)
        at io.netty.buffer.UnpooledUnsafeDirectByteBuf.capacity(UnpooledUnsafeDirectByteBuf.java:157)
        at io.netty.buffer.AbstractByteBuf.ensureWritable(AbstractByteBuf.java:251)
        at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:849)
        at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:841)
        at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:831)
        at io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:92)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:228)
        ... 9 more

Steps to reproduce:

  1. Write a simple ByteToMessageDecoder subclass that only reads from the ByteBuf once enough bytes are available
  2. Set up an epoll-based server with this handler
  3. Stream a single very large file into this server using netcat (for example, as sketched below)
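A netcat invocation along these lines should do it (host, port, and file name are placeholders, not from the original report):

$ nc localhost 8080 < very-big-file.bin
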
$ java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)

Operating system: Centos 6 64-bit

$ uname -a
Linux infinity 2.6.32-504.16.2.el6.x86_64 #1 SMP Wed Apr 22 06:48:29 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
$ echo $JAVA_OPTS
JAVA_OPTS="-Xms2g -Xmx2g -XX:MaxDirectMemorySize=512m -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -XX:+UseParallelGC -XX:+AggressiveOpts -XX:+UseFastAccessorMethods -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=15445 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -XX:+HeapDumpOnOutOfMemoryError"

My system has IPv6 disabled.

@normanmaurer
Member

@myroch to make it easy, can you please share your ByteToMessageDecoder implementation and the ChannelPipeline setup?

@myroch
Author

myroch commented Sep 25, 2015

Also tested with JDK 8; it doesn't resolve the issue:

java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)

@myroch
Author

myroch commented Sep 25, 2015

    private class MyDecoder extends ByteToMessageDecoder {

        private static final char SEPARATOR = ',' as char

        @Override
        protected void decode(ChannelHandlerContext ctx, ByteBuf buf, List<Object> out) {
            // we need at least one byte to read the length of the record
            int readable = buf.readableBytes()
            if (readable < 1) {
                return
            }
            byte length = buf.getByte(buf.readerIndex())

            if (readable < length + 1) {
                // wait for more data
                return
            }
            // there are enough bytes in the buffer - read them (and skip the length byte, as we have already read it)
            buf.skipBytes(1)

            StringBuilder result = new StringBuilder(length + 10)
            result.append(length).append(SEPARATOR)
            int a = buf.readByte()
            result.append(a).append(SEPARATOR)

            long b = buf.readLong()
            result.append(b).append(SEPARATOR)

            int c = buf.readUnsignedShort()
            result.append(c).append(SEPARATOR)

            byte[] bytes = new byte[16]
            buf.readBytes(bytes, 0, 4)

            int d = buf.readUnsignedShort()
            result.append(d).append(SEPARATOR)

            buf.readBytes(bytes, 0, 16)

            out.add(result.toString())
        }
    }

@normanmaurer
Member

@myroch can you do me a favour and test if using LT mode will fix this:

ServerBootstrap sb = ...
sb.childOption(EpollChannelOption.EPOLL_MODE, EpollMode.LEVEL_TRIGGERED);
....
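For reference, a fuller sketch of wiring that option into a bootstrap might look like the following (group sizes, port, and handler names are placeholders, and the Epoll* classes live in io.netty.channel.epoll):

EventLoopGroup bossGroup = new EpollEventLoopGroup(1);
EventLoopGroup workerGroup = new EpollEventLoopGroup();

ServerBootstrap sb = new ServerBootstrap();
sb.group(bossGroup, workerGroup)
  .channel(EpollServerSocketChannel.class)
  // switch child channels from the default edge-triggered mode to level-triggered mode
  .childOption(EpollChannelOption.EPOLL_MODE, EpollMode.LEVEL_TRIGGERED)
  .childHandler(new ChannelInitializer<SocketChannel>() {
      @Override
      protected void initChannel(SocketChannel ch) {
          ch.pipeline().addLast("myDecoder", new MyDecoder());
          ch.pipeline().addLast("myConsumer", new MyConsumer());
      }
  });
sb.bind(8080);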

@myroch
Author

myroch commented Sep 25, 2015

pipeline is very simple:

ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast("myDecoder", new MyDecoder());
pipeline.addLast("myConsumer", new MyConsumer());

MyConsumer can just be a ChannelInboundHandlerAdapter with an empty channelRead method (see the sketch below).
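
A minimal sketch of such a consumer (the decoder above emits plain Strings, so there is nothing to release here):

public class MyConsumer extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // intentionally empty: the decoded String is simply dropped
    }
}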

@myroch
Author

myroch commented Sep 25, 2015

@normanmaurer it seems to be stable like never before 👍

@normanmaurer
Member

@myroch awesome... I think this gives me a good idea... Stay tuned.

@normanmaurer normanmaurer self-assigned this Sep 25, 2015
@normanmaurer normanmaurer added this to the 4.0.32.Final milestone Sep 25, 2015
normanmaurer added a commit that referenced this issue Sep 25, 2015
Motivation:

If a remote peer writes fast enough it may take a long time before fireChannelReadComplete(...) is triggered. Because of this we need to take special care and ensure we try to discard some bytes if channelRead(...) is called too often in ByteToMessageDecoder.

Modifications:

- Add ByteToMessageDecoder.setDiscardAfterReads(...) which allows setting the number of reads after which we try to discard the already-read bytes
- Use a default value of 16 for max reads.

Result:

No risk of OOME.
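
Once this lands, the threshold should be tunable on the decoder instance itself; a minimal sketch, with 4 chosen purely for illustration:

MyDecoder decoder = new MyDecoder();
// try to discard already-read bytes from the cumulation buffer after every 4 channelRead(...) calls
// instead of the default of 16
decoder.setDiscardAfterReads(4);
pipeline.addLast("myDecoder", decoder);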
@normanmaurer
Member

@myroch can you check #4281 and see if it fixes it when using epoll without level-triggered mode?

@myroch
Author

myroch commented Sep 27, 2015

@normanmaurer confirmed, works like a charm, thx!

@myroch myroch closed this as completed Sep 28, 2015
normanmaurer added a commit that referenced this issue Sep 29, 2015
normanmaurer added a commit that referenced this issue Sep 29, 2015
normanmaurer added a commit that referenced this issue Sep 29, 2015
pulllock pushed a commit to pulllock/netty that referenced this issue Oct 19, 2023