Huge Memory Usage - DotNetty.Buffers.HeapArena #174
Comments
There are a number of parameters that affect the pooled buffer allocator's behavior. Basically, there are a few knobs you can tweak: the number of heap arenas and the size of each arena (controlled by DEFAULT_MAX_ORDER and DEFAULT_PAGE_SIZE). Try adjusting these to lower the maximum memory consumption.
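For anyone landing here later: a pooled arena chunk is pageSize * 2^maxOrder bytes, so the defaults (8192 * 2^11) work out to 16 MB per chunk, which matches the ~16 million bytes mentioned in the next comment. Below is a minimal sketch of constructing a smaller allocator and passing it to a bootstrap; the exact PooledByteBufferAllocator constructor overloads vary between DotNetty versions, so the parameter order shown here (heap arenas, direct arenas, page size, max order, mirroring Netty) is an assumption to adapt, not verified API for this release.

```csharp
using DotNetty.Buffers;
using DotNetty.Transport.Bootstrapping;
using DotNetty.Transport.Channels;
using DotNetty.Transport.Channels.Sockets;

// Assumption: constructor parameters follow Netty's ordering
// (nHeapArena, nDirectArena, pageSize, maxOrder). Chunk size is
// pageSize * 2^maxOrder, so 8192 * 2^7 = 1 MB chunks instead of the
// default 8192 * 2^11 = 16 MB.
var allocator = new PooledByteBufferAllocator(
    2,      // heap arenas: fewer arenas -> smaller worst-case footprint
    2,      // direct arenas
    8192,   // page size
    7);     // max order

var bootstrap = new ServerBootstrap()
    .Group(new MultithreadEventLoopGroup(1), new MultithreadEventLoopGroup())
    .Channel<TcpServerSocketChannel>()
    .Option(ChannelOption.Allocator, allocator)       // server channel
    .ChildOption(ChannelOption.Allocator, allocator)  // accepted child channels
    .ChildHandler(new ActionChannelInitializer<ISocketChannel>(channel =>
    {
        // add your pipeline handlers here
    }));
```

The same allocator instance can be set on a client Bootstrap via `.Option(ChannelOption.Allocator, allocator)`.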
Awesome, thanks Nayato. I'm trying to find some examples of adjusting those knobs. Do you have any references that could point to a path forward? **Edit:** I am comparing the two memory snapshots, and the unpooled one is way lower, but still pretty high up there. After sending multiple files it does seem to recycle this memory, but after sending a file I still have a large amount of data sitting in the HeapArena (16 million bytes).
Closing. Feel free to reopen if need be.
I'm currently struggling with the same issues as described by aconite33. I set up a test TCP server with basic handlers and a test client sending a few thousand messages of different sizes (around 100-500 KB each). After a short time I run into an out-of-memory exception thrown at "DotNetty.Buffers.HeapArena.NewChunk(Int32 pageSize, Int32 maxOrder, Int32 pageShifts, Int32 chunkSize)". I also switched over to the UnpooledByteBufferAllocator, but the behaviour is always the same.
@Joooooooooogi one option is that you're genuinely exhausting memory. Are you releasing buffers? Are those buffers sitting somewhere in a queue (e.g. the channel's outbound buffer)? Your best bet is to take a process dump and trace where the buffers are referenced to understand where memory is ultimately not being freed.
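To make the release point concrete, here is a minimal sketch of a hypothetical inbound handler (the handler name and processing step are illustrative, not from this thread) that returns the incoming buffer to the pool once it is done with it. A buffer that is neither released nor passed further down the pipeline stays checked out of the arena indefinitely.

```csharp
using DotNetty.Buffers;
using DotNetty.Common.Utilities;
using DotNetty.Transport.Channels;

// Hypothetical handler illustrating the release rule: whichever handler
// consumes a message last must release it, otherwise the pooled buffer is
// never returned to the arena and memory appears to "leak".
public class FileChunkHandler : ChannelHandlerAdapter
{
    public override void ChannelRead(IChannelHandlerContext context, object message)
    {
        var buffer = message as IByteBuffer;
        if (buffer == null)
        {
            context.FireChannelRead(message); // not ours; pass it along unreleased
            return;
        }

        try
        {
            // ... process the buffer contents here ...
        }
        finally
        {
            // Return the buffer to the pool; skip this only if you forward the
            // message with context.FireChannelRead(message) instead.
            ReferenceCountUtil.Release(buffer);
        }
    }
}
```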
@nayato Thanks for your fast response. I think the high frequency of messages was also one of the issues I ran into. Using the unpooled buffer and slowing down the sending part, the memory consumption seems to be stable. I think the out-of-memory exception is also caused by the multiple threads I use, which all access the same channel via asyncWriteFlush. It seems like the server side sometimes gets kind of "out of sync"; in that case my ReplayingDecoder gets stuck in one of the decoding states and memory fills up. I already put a lock on the client's write-to-channel method, but I think things still sometimes get messed up right there. Will I need to implement my own protocol that provides an ACK signal telling the client to continue with the next message, or is this handled inside Netty in some way?
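Netty/DotNetty does not provide an application-level ACK; if the receiver must confirm each message, that has to be part of your own protocol. What the channel does give you is sender-side flow control. A rough sketch, assuming a helper of your own (ThrottledSender and SendChunksAsync are hypothetical names), that awaits each write and respects the channel's writability flag instead of firing writes from many threads:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using DotNetty.Buffers;
using DotNetty.Transport.Channels;

// Sketch of sender-side backpressure: await each WriteAndFlushAsync so the
// outbound buffer cannot grow without bound, and pause while the channel
// reports that it is not writable.
public static class ThrottledSender
{
    public static async Task SendChunksAsync(IChannel channel, IEnumerable<IByteBuffer> chunks)
    {
        foreach (IByteBuffer chunk in chunks)
        {
            // Crude backpressure: wait until the outbound buffer drains
            // below the low water mark before queueing more data.
            while (!channel.IsWritable)
            {
                await Task.Delay(10);
            }

            // Awaiting the task means the previous chunk has been flushed to
            // the socket before the next one is queued.
            await channel.WriteAndFlushAsync(chunk);
        }
    }
}
```

Note that awaiting WriteAndFlushAsync only tells you the bytes were flushed to the local socket, not that the peer processed them, so an ACK in your own protocol is still needed if you require end-to-end confirmation.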
When transferring a large amount of data (e.g., sending a file), I'm seeing that DotNetty consumes a huge amount of memory. After the data is sent, the buffers don't seem to release the memory, and I'm left with an application with a huge memory footprint.
Taking a snapshot of my program after I've transferred a file results in the screenshot below.
Is this an issue with DotNetty, or am I doing something wrong by not releasing resources appropriately?