OutOfDirectMemoryError/large number of PooledUnsafeDirectByteBuf allocated #6343
Comments
Please update to the latest Netty (4.1.8) and report back if you can still reproduce. We retain the objects from the
You can disable the recycler with
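(The property name was cut off above; for reference, a hedged sketch assuming it is `io.netty.recycler.maxCapacityPerThread` — it has to take effect before any Netty class is loaded, so a JVM flag is the safest place:)

```java
// Hedged sketch: ASSUMES the recycler property referred to above is
// io.netty.recycler.maxCapacityPerThread (setting it to 0 disables object
// pooling in the Recycler). Safest is to pass it on the command line:
//   java -Dio.netty.recycler.maxCapacityPerThread=0 ...
public final class DisableRecycler {
    public static void main(String[] args) {
        // Alternatively, set it programmatically before any Netty code runs.
        System.setProperty("io.netty.recycler.maxCapacityPerThread", "0");
        // ... bootstrap the Netty server afterwards ...
    }
}
```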
I can observe the same behaviour using the latest Netty (4.1.8). I also tested the Recycler options, but they had no noticeable effect on my problem. To reproduce the behaviour I created a small test project. Running the "testMultipleClients" test and using some well-placed breakpoints (e.g. io.netty.util.internal.PlatformDependent.incrementMemoryCounter(int)), I can determine that for each new client connection a new PooledUnsafeDirectByteBuf is reserved, up to a limit of 9 on my local machine. Rifling through the Netty code I couldn't quite figure out which factors/properties lead to that particular number (9) of PooledUnsafeDirectByteBuf.

In any case, in our real application scenario the number of reserved PooledUnsafeDirectByteBuf apparently grows beyond 64, although none of the clients put any particular load onto the server (after all, the error happens while the system is just starting up). Due to that rather high number of buffers, we run into the OutOfDirectMemoryError originally reported. With 1 GB of MaxDirectMemory, 64 buffers of 16 MB each just fit, hence the conclusion that more than 64 must have been reserved (and none freed). What I can't figure out:
To add to the last point: once everything has started normally, I can put quite some pressure onto the server application (for instance using tcpkali) with a sizable number of clients (> 100) sending a sizable number of messages/second (> 100) without encountering this particular issue (or any issue at all).
thanks for reporting back and the reproducer @lkoe ... we will investigate and get back to you
@lkoe this is most likely because of the default configuration of
Hope this helps.
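(For reference, a rough sketch of what the default configuration implies — this is my reading of the Netty 4.1 defaults, not an authoritative statement: the pooled allocator creates on the order of 2 * availableProcessors direct arenas, and each arena allocates whole chunks of pageSize * 2^maxOrder bytes, i.e. 8192 * 2^11 = 16 MiB by default. That would explain the ~63 chunk-sized buffers reported below on a 32-core machine. The arena count and chunk size can also be reduced explicitly when constructing the allocator:)

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class SmallFootprintAllocator {
    public static void main(String[] args) {
        // Defaults in Netty 4.1: pageSize = 8192, maxOrder = 11
        // => chunkSize = 8192 << 11 = 16 MiB per chunk, per arena.
        // nDirectArena defaults to roughly 2 * availableProcessors, so a
        // 32-core box can end up holding ~64 x 16 MiB of direct memory.
        PooledByteBufAllocator alloc = new PooledByteBufAllocator(
                true,   // preferDirect
                0,      // nHeapArena
                4,      // nDirectArena (instead of ~2 * cores)
                8192,   // pageSize
                9);     // maxOrder => chunkSize = 8192 << 9 = 4 MiB

        ByteBuf buf = alloc.directBuffer(1024);
        try {
            // ... use the buffer ...
        } finally {
            buf.release();
        }
        // The allocator would then be passed to the bootstrap via
        // childOption(ChannelOption.ALLOCATOR, alloc).
    }
}
```

The same effect should be achievable without code changes via the `io.netty.allocator.numDirectArenas`, `io.netty.allocator.pageSize` and `io.netty.allocator.maxOrder` system properties.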
So, what is the conclusion? I encountered the same problem. @lkoe
@Viyond no real conclusion. In the end we tweaked some parameters to make the immediate exception go away in our environments. We set these JVM params:
Is there a tool which can print everything that is allocated in direct memory? In my case, with 1 G of direct memory and the above settings, I am getting a similar exception:
io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 8388608 byte(s) of direct memory (used: 1023410176, max: 1029177344)
    at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:506)
    at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:460)
    at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:701)
    at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:690)
My machine has 32 cores.
@normanmaurer Could this issue have something to do with an unreleased slice of that incoming byteBuf?
@konsultaner actually you would need to call
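(The exact call got cut off above; based on the context it is presumably about reference counting. A minimal, hedged sketch of how slices interact with the reference count in Netty 4.1 — the handler name is just illustrative:)

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class SliceForwardingHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf in = (ByteBuf) msg;
        try {
            // slice() does NOT increase the reference count; the slice is only
            // valid while the parent buffer is alive. retainedSlice() bumps the
            // count, so the parent can be released here independently.
            ByteBuf payload = in.retainedSlice(0, Math.min(16, in.readableBytes()));
            ctx.fireChannelRead(payload); // a downstream handler must release it
        } finally {
            in.release(); // always release the original incoming buffer
        }
    }
}
```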
We have had the same issue for months now, together with the async MongoDB driver and with SSL activated.
finally helped. Thank you very much! I attached my stack trace. In case you're interested, I can provide you with a sample project.
Netty version 4.1.22.Final
We're using Netty via Apache Camel (camel-netty4). The component acts as server and is only supposed to read messages from clients.
When starting our system, a number of clients (around 20-25) connect to the TCP server. During that process we started to experience issues with the available direct memory being exhausted quickly.
io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 469762048, max: 477626368)
    at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:624)
    at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:578)
    at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:686)
    at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:675)
    at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:237)
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:221)
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:141)
    at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:262)
    at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)
    at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:170)
    at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:131)
    at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:73)
    at io.netty.channel.socket.nio.NioDatagramChannel.doReadMessages(NioDatagramChannel.java:242)
    at io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:75)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:610)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:551)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:465)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:437)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
    at java.lang.Thread.run(Thread.java:745)
After tinkering with JVM and Netty parameters (e.g. "-XX:MaxDirectMemorySize=1G -Dio.netty.allocator.pageSize=8192 -Dio.netty.allocator.maxOrder=10"), the whole shebang can start successfully.
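(For context, a quick back-of-the-envelope calculation of why these flags help, assuming the usual chunk-size formula chunkSize = pageSize * 2^maxOrder:)

```java
public class ChunkMath {
    public static void main(String[] args) {
        long defaultChunk = 8192L << 11; // 16 MiB -> matches "failed to allocate 16777216 byte(s)"
        long tunedChunk   = 8192L << 10; // 8 MiB  -> with -Dio.netty.allocator.maxOrder=10
        long budget       = 1L << 30;    // 1 GiB  -> -XX:MaxDirectMemorySize=1G
        System.out.println(budget / defaultChunk); // 64: only ~64 default-sized chunks fit
        System.out.println(budget / tunedChunk);   // 128: halving the chunk size doubles the headroom
    }
}
```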
However, I am very sceptical about the amount of direct memory Netty allocates. When pulling a heap dump of the server application, I can see that there are 63 (!) instances of PooledUnsafeDirectByteBuf held in memory.
Even after extended no-load periods, none of those PooledUnsafeDirectByteBuf instances ever seem to be released.
Activating leak detection ("-Dio.netty.leakDetectionLevel=paranoid") yielded nothing.
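(Side note: leak detection can also be enabled programmatically; a small sketch using the standard ResourceLeakDetector API. It only reports ByteBufs that were garbage-collected without a release() call, so chunks still cached by the pooled allocator are not counted as leaks, which may be why it yielded nothing here.)

```java
import io.netty.util.ResourceLeakDetector;

public class EnableLeakDetection {
    public static void main(String[] args) {
        // Equivalent to -Dio.netty.leakDetectionLevel=paranoid, set before any
        // pooled buffers are allocated. Only buffers GC'd without release()
        // are reported; buffers still referenced by an arena won't show up.
        ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);
        // ... start the Camel/Netty endpoint afterwards ...
    }
}
```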
Am I missing something very obvious? Is this behaviour expected? What is the recommended way to cope with this?
Netty version
4.1.5
JVM version (e.g. java -version)
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)

OS version (e.g. uname -a)
Linux *** 3.12.59-60.45-default #1 SMP Sat Jun 25 06:19:03 UTC 2016 (396c69d) x86_64 x86_64 x86_64 GNU/Linux