Huge Memory Usage - DotNetty.Buffers.HeapArena #174

Closed
aconite33 opened this issue Nov 16, 2016 · 6 comments

@aconite33

When transferring a large amount of data (e.g., sending a file), I'm seeing that DotNetty consumes a huge amount of memory. After the data is sent, the buffers don't seem to release the memory, and I'm left with an application that has a huge memory presence.

Taking a snapshot of my program after I've transferred a file results in the screenshot below.
Is this an issue with DotNetty or am I doing something where I'm not releasing resources appropriately?
[screenshot: memory snapshot taken 2016-11-16 at 8:54:53 am]

nayato (Member) commented Nov 16, 2016

There are a number of parameters that affect the pooled buffer allocator's behavior. Basically, there are a few knobs you can tweak: the number of heap arenas and the size of each arena (controlled by DEFAULT_MAX_ORDER and DEFAULT_PAGE_SIZE). Try adjusting these to lower the maximum memory consumption.
An easier option would be to switch to UnpooledByteBufferAllocator. It is a natural choice for client applications, but I wouldn't recommend it for high-performance scenarios.
Another option is to request buffers that by definition won't be cached (due to their large size), if you can predict such a situation.
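
For illustration, that tuning might look like the sketch below. It assumes DotNetty's PooledByteBufferAllocator has a constructor mirroring Netty's (preferDirect, nHeapArena, nDirectArena, pageSize, maxOrder) overload and an already-created bootstrap; verify the exact signature against your DotNetty version.

```csharp
// Sketch: shrink the pooled allocator's footprint. The constructor shape
// here mirrors Netty's; check your DotNetty version for the exact overload.
using DotNetty.Buffers;
using DotNetty.Transport.Channels;

// Each arena chunk is pageSize << maxOrder bytes: 8192 << 7 = 1 MiB here,
// versus the default 8192 << 11 = 16 MiB.
var pooled = new PooledByteBufferAllocator(
    false, // preferDirect: keep buffers on the managed heap
    2,     // nHeapArena: fewer arenas => less standing memory
    2,     // nDirectArena
    8192,  // pageSize
    7);    // maxOrder

bootstrap.Option(ChannelOption.Allocator, pooled);

// Or bypass pooling entirely (simpler, but slower under heavy load):
// bootstrap.Option(ChannelOption.Allocator, UnpooledByteBufferAllocator.Default);
```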

aconite33 (Author) commented Nov 17, 2016

Awesome. Thanks Nayato. I'm trying to find some examples of adjusting those tweaks. Do you have any references that could provide a path forward?
Also, I changed it over to UnpooledByteBufferAllocator:
bootstrap.Option(ChannelOption.Allocator, UnpooledByteBufferAllocator.Default);
But it is still exploding my memory. I also notice that when I call WriteAndFlushAsync, it doesn't send immediately; it seems to take a while before sending (in my case, it reads the entire file before sending it, instead of sending in chunks).

Another edit: I am comparing the two memory snapshots, and the unpooled allocator's usage is way lower, but still pretty high up there. After sending multiple files it does seem to recycle this memory, but after sending a file I still see a large amount of data held in the HeapArena (16 million bytes).
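
(For reference: 16 million bytes matches one default-sized pool chunk, pageSize 8192 << maxOrder 11 = 16,777,216 bytes, which suggests the arena is holding on to a single chunk for reuse.)

To avoid reading the whole file before sending, one approach is to stream it in fixed-size chunks and await each write, so that at most one chunk sits in the channel's outbound buffer at a time. A sketch (SendFileChunkedAsync and the 64 KB chunk size are illustrative; it assumes IChannel exposes an Allocator property, as in Netty):

```csharp
// Sketch: stream a file in fixed-size chunks instead of loading it whole.
// `channel` is an already-connected IChannel; the method name and the
// 64 KB chunk size are illustrative, not DotNetty API.
using System.IO;
using System.Threading.Tasks;
using DotNetty.Buffers;
using DotNetty.Transport.Channels;

static async Task SendFileChunkedAsync(IChannel channel, string path)
{
    const int ChunkSize = 64 * 1024;
    using (var stream = File.OpenRead(path))
    {
        var chunk = new byte[ChunkSize];
        int read;
        while ((read = await stream.ReadAsync(chunk, 0, ChunkSize)) > 0)
        {
            IByteBuffer buffer = channel.Allocator.Buffer(read);
            buffer.WriteBytes(chunk, 0, read);
            // Awaiting here keeps at most one chunk in the outbound buffer.
            await channel.WriteAndFlushAsync(buffer);
        }
    }
}
```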

nayato (Member) commented Nov 24, 2016

Closing. Feel free to reopen if need be.

nayato closed this as completed Nov 24, 2016
@Joooooooooogi

I'm currently struggling with the same issues as described by aconite33.

I set up a test TCP server with basic handlers and a test client sending a few thousand messages of different sizes (around 100–500 KB each). After a short time I run into an out-of-memory exception, which is thrown at "DotNetty.Buffers.HeapArena.NewChunk(Int32 pageSize, Int32 maxOrder, Int32 pageShifts, Int32 chunkSize)".

I also changed it over to UnpooledByteBufferAllocator, but the behaviour is always the same.
What was the solution in this case?

nayato (Member) commented Mar 15, 2017

@Joooooooooogi one option is that you're genuinely exhausting memory. Are you releasing buffers? Are those buffers sitting somewhere in a queue (e.g. the channel's outbound buffer)? Your best bet is to take a process dump and trace where the buffers are referenced, to understand where memory is ultimately not being freed.
I can easily get an OOM exception if I try to send 600K messages of 8 KB each without throttling the sending side, i.e. all the messages get queued up for sending, which lags behind, and ultimately the buffers sit in the channel's outbound buffer.
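
For illustration, throttling can be as simple as bounding the number of outstanding writes, so everything past the cap waits before entering the channel at all. A sketch using only WriteAndFlushAsync (SendThrottledAsync and maxInFlight are illustrative names, not DotNetty API):

```csharp
// Sketch: bound the number of writes in flight so the channel's outbound
// buffer cannot grow without limit. maxInFlight is an illustrative knob.
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using DotNetty.Buffers;
using DotNetty.Transport.Channels;

static async Task SendThrottledAsync(
    IChannel channel, IEnumerable<IByteBuffer> messages, int maxInFlight = 64)
{
    using (var gate = new SemaphoreSlim(maxInFlight))
    {
        var pending = new List<Task>();
        foreach (var message in messages)
        {
            await gate.WaitAsync(); // stall once maxInFlight writes are queued
            pending.Add(WriteAndReleaseAsync(channel, message, gate));
        }
        await Task.WhenAll(pending); // also surfaces any write failures
    }
}

static async Task WriteAndReleaseAsync(
    IChannel channel, IByteBuffer message, SemaphoreSlim gate)
{
    try
    {
        await channel.WriteAndFlushAsync(message);
    }
    finally
    {
        gate.Release();
    }
}
```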

@Joooooooooogi

@nayato Thanks for your fast response - I think the high frequency of messages was also one of the issues I ran into. Using the unpooled buffer and slowing down the sending side, the memory consumption seems to be stable. I think the out-of-memory exception is also caused by the multiple threads I use, which all write to the same channel via WriteAndFlushAsync. It seems like the server side sometimes gets kind of "out of sync"... in that case my ReplayingDecoder gets stuck at one of the decoding states and memory fills up. I already put a lock on the client's write-to-channel method, but I think things sometimes still get messed up right there.

Will I need to implement my own kind of protocol that provides an ACK signal to the client before it continues sending the next message, or is this handled in some way inside Netty?
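
As far as I know, Netty/DotNetty gives you per-connection flow control (the outbound buffer and its watermarks) but no application-level ACKs; a request/ACK exchange would be part of your own protocol. On the locking point: a C# lock statement cannot contain an await, so it can only serialize starting a write, not its completion. An async gate serializes whole sends; a sketch (names illustrative):

```csharp
// Sketch: serialize sends from multiple threads with an async gate.
// A C# lock cannot span an await, so it would only serialize starting the
// write, not its completion; SemaphoreSlim(1, 1) orders whole sends.
using System.Threading;
using System.Threading.Tasks;
using DotNetty.Buffers;
using DotNetty.Transport.Channels;

static class SerializedWriter
{
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(1, 1);

    public static async Task WriteSerializedAsync(IChannel channel, IByteBuffer message)
    {
        await Gate.WaitAsync();
        try
        {
            await channel.WriteAndFlushAsync(message);
        }
        finally
        {
            Gate.Release();
        }
    }
}
```

Note that DotNetty marshals each write onto the channel's event loop, so individual buffers are not interleaved; a gate like this matters mainly when one logical message spans multiple writes.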
