OutOfMemoryError: Direct buffer memory when using EPOLL transport #4275
Comments
@myroch to make it easy, can you please share your ByteToMessageDecoder implementation and the ChannelPipeline setup?
Tested also with JDK 8; it doesn't resolve the issue.
private class MyDecoder extends ByteToMessageDecoder {
    private static final char SEPARATOR = ',' as char

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf buf, List<Object> out) {
        // we need at least one byte to read the actual type with its length
        int readable = buf.readableBytes()
        if (readable < 1) {
            return
        }
        byte length = buf.getByte(buf.readerIndex())
        if (readable < length + 1) {
            // wait for more data
            return
        }
        // there are enough bytes in the buffer - read them (and skip the length byte, as we have already read it)
        buf.skipBytes(1)
        StringBuilder result = new StringBuilder(length + 10)
        result.append(length).append(SEPARATOR)
        int a = buf.readByte()
        result.append(a).append(SEPARATOR)
        long b = buf.readLong()
        result.append(b).append(SEPARATOR)
        int c = buf.readUnsignedShort()
        result.append(c).append(SEPARATOR)
        byte[] bytes = new byte[16]
        buf.readBytes(bytes, 0, 4)
        int d = buf.readUnsignedShort()
        result.append(d).append(SEPARATOR)
        buf.readBytes(bytes, 0, 16)
        out.add(result.toString())
    }
}
@myroch can you do me a favour and test whether using LT (level-triggered) mode fixes this:

ServerBootstrap sb = ...
sb.childOption(EpollChannelOption.EPOLL_MODE, EpollMode.LEVEL_TRIGGERED);
....
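For reference, a fuller sketch of where that option would go, assuming the usual epoll server bootstrap. The class name EpollLtServer, the port 8080 and the group sizes are illustrative only; MyDecoder and MyConsumer are the handlers from this thread, assumed here to be accessible as top-level classes.

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.epoll.EpollChannelOption;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollMode;
import io.netty.channel.epoll.EpollServerSocketChannel;
import io.netty.channel.socket.SocketChannel;

public class EpollLtServer {
    public static void main(String[] args) throws Exception {
        EventLoopGroup bossGroup = new EpollEventLoopGroup(1);
        EventLoopGroup workerGroup = new EpollEventLoopGroup();
        try {
            ServerBootstrap sb = new ServerBootstrap();
            sb.group(bossGroup, workerGroup)
              .channel(EpollServerSocketChannel.class)
              // switch child channels from the default edge-triggered mode to level-triggered polling
              .childOption(EpollChannelOption.EPOLL_MODE, EpollMode.LEVEL_TRIGGERED)
              .childHandler(new ChannelInitializer<SocketChannel>() {
                  @Override
                  protected void initChannel(SocketChannel ch) {
                      ch.pipeline().addLast("myDecoder", new MyDecoder());
                      ch.pipeline().addLast("myConsumer", new MyConsumer());
                  }
              });
            sb.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}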
The pipeline is very simple:

ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast("myDecoder", new MyDecoder());
pipeline.addLast("myConsumer", new MyConsumer());

MyConsumer can just be a ChannelInboundHandlerAdapter with an empty channelRead method (a sketch follows below).
@normanmaurer it seems to be stable like never before 👍
@myroch awesome... I think this gives me a good idea... Stay tuned.
Motivation:
If a remote peer writes fast enough, it may take a long time for fireChannelReadComplete(...) to be triggered. Because of this we need to take special care and ensure we try to discard some bytes if channelRead(...) is called too often in ByteToMessageDecoder.

Modifications:
- Add ByteToMessageDecoder.setDiscardAfterReads(...), which allows setting the number of reads after which we try to discard the read bytes.
- Use a default value of 16 for max reads.

Result:
No risk of OOME.
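A sketch of how the new setting could be applied to the decoder from this issue, inside the ChannelInitializer of the reproduction; the value 4 is illustrative, the default stays 16.

@Override
protected void initChannel(SocketChannel ch) {
    MyDecoder decoder = new MyDecoder();
    // try to reclaim already-read bytes after every 4 channelRead(...) calls instead of waiting
    // for channelReadComplete(...), which can be delayed while the peer keeps writing
    decoder.setDiscardAfterReads(4);
    ch.pipeline().addLast("myDecoder", decoder);
    ch.pipeline().addLast("myConsumer", new MyConsumer());
}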
@normanmaurer confirmed, works like a charm, thx!
Netty version: 4.0.31.Final
Context:
I encountered an exception when using the EPOLL transport on my server. Other transports are not affected. Removing the MaxDirectMemorySize JVM option doesn't resolve the issue (it just takes longer to hit it). I'm using a simple extended ByteToMessageDecoder which just checks for enough bytes and creates my proprietary object from them. Another handler in the flow consumes this object. So I'm basically not dealing with ByteBuf allocations/releases; I'm just reading from the buffer in my ByteToMessageDecoder. The server only reads, it doesn't write anything back.
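Since the symptom is direct buffer memory growing until -XX:MaxDirectMemorySize (or the process limit) is hit, a small helper along these lines can be started inside the server JVM to watch direct-buffer usage while reproducing. This is only a sketch using the standard BufferPoolMXBean API; the class name DirectBufferWatcher and the 1-second interval are illustrative.

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectBufferWatcher {
    public static void start() {
        Thread t = new Thread(new Runnable() {
            @Override
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    // the "direct" pool tracks all direct ByteBuffers allocated in this JVM
                    for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
                        if ("direct".equals(pool.getName())) {
                            System.out.printf("direct buffers: count=%d, used=%d bytes, capacity=%d bytes%n",
                                    pool.getCount(), pool.getMemoryUsed(), pool.getTotalCapacity());
                        }
                    }
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        }, "direct-buffer-watcher");
        t.setDaemon(true);
        t.start();
    }
}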
Steps to reproduce:
Operating system: CentOS 6 64-bit
My system has IPv6 disabled.