Deflater/Inflater memory leak with WebSocket permessage-deflate extension #293

Closed

jmcc0nn3ll opened this issue Feb 16, 2016 · 3 comments
Labels: Bug (for general bugs on Jetty side)

@jmcc0nn3ll (Contributor)

migrated from Bugzilla #487197
status ASSIGNED severity major in component websocket for 9.3.x
Reported in version 9.3.7 on platform PC
Assigned to: Joakim Erdfelt

Original attachment names and IDs: attachment 259559 (heap usage during the leak)

On 2016-02-04 04:44:22 -0500, Rafael Campos wrote:

Created attachment 259559
Heap usage during the leak

It seems that there is a non-heap memory leak when a WebSocket connection is established using the permessage-deflate extension. I've tried connecting and disconnecting 1000 websockets again and again, without sending or receiving anything, and the RSS size of the process grows from 3.8G (I'm using -Xmx3072m -Xms3072m -XX:+UseConcMarkSweepGC) up to all the memory available in the system (I've seen up to 7.3G), while the heap remains at around 1G or 1.5G (see attached file).

If I repeat the same test without enabling compression in the client (not sending the permessage-deflate extension header), there is no such leak and the memory remains stable.

With compression enabled, each batch of 1k sockets consumes around 70-80M of extra memory that is not freed after they are disconnected, even after forcing GC with JConsole or calling System.runFinalization() to ensure that the finalize() methods of the deflaters/inflaters are called.

I think the offending lines are in the CompressExtension class, because if I replace the following lines in the constructor:

    deflater = new Deflater(Deflater.DEFAULT_COMPRESSION,NOWRAP);
    inflater = new Inflater(NOWRAP);

with:

    deflater = null;
    inflater = null;

There is no such leak in the connection, but obviously I can't send/receive anything, as I will only get NPEs.
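
As a minimal sketch of the effect described above (the class name and loop count are illustrative, not from the report): java.util.zip.Deflater allocates its zlib buffers in native memory at construction, so instances that are never end()ed strand memory outside the Java heap until the finalizer runs, if it ever does.

    import java.util.zip.Deflater;

    // Hypothetical standalone repro sketch: each Deflater allocates
    // native zlib buffers at construction. Without an explicit end(),
    // the process RSS grows while the Java heap stays flat.
    public class DeflaterLeakRepro
    {
        public static void main(String[] args)
        {
            for (int i = 0; i < 100_000; i++)
            {
                Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
                // end() is intentionally NOT called here, mirroring the
                // behavior under discussion; uncommenting the next line
                // keeps the native memory usage stable.
                // deflater.end();
                if (i % 10_000 == 0)
                    System.out.println("allocated " + i + " deflaters");
            }
        }
    }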

On 2016-02-04 12:20:48 -0500, Joakim Erdfelt wrote:

I've been analyzing this via jmap, dumped hprofs, and Eclipse Memory Analyzer.

The WebSocket session and extensions are being cleaned up.
However, it seems the Deflater and Inflater instances are getting hung up in their own java.lang.ref.Finalizer (boggle)?

Going to try to either be explicit with .end() or possibly introduce Deflater and Inflater object pools (grumble)

On 2016-02-04 12:40:12 -0500, Rafael Campos wrote:

Thanks Joakim.

I am working on a workaround with a pool of Deflaters/Inflaters, and the initial tests look promising. At least the RSS memory is not increasing forever. If you want, I can attach a patch tomorrow so you can test and evaluate it.

The only issue that concerns me is that right now I have it "capped", so at most X instances are created. That works for us, as we know the number of users of our servers, but I don't know if it is reasonable as a general solution.
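
A minimal sketch of such a pool, assuming nothing about Rafael's actual patch (his version caps the total number of instances created; this variant instead caps the idle set and ends any overflow):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.zip.Deflater;

    // Illustrative capped pool, not the actual workaround patch.
    // Idle instances are bounded; any overflow has end() called so
    // its native zlib memory is freed deterministically.
    public class CappedDeflaterPool
    {
        private final BlockingQueue<Deflater> idle;

        public CappedDeflaterPool(int capacity)
        {
            this.idle = new ArrayBlockingQueue<>(capacity);
        }

        public Deflater acquire()
        {
            Deflater deflater = idle.poll();
            return (deflater != null)
                ? deflater
                : new Deflater(Deflater.DEFAULT_COMPRESSION, true);
        }

        public void release(Deflater deflater)
        {
            deflater.reset(); // make it reusable for the next message
            if (!idle.offer(deflater))
                deflater.end(); // pool full: free the native memory now
        }
    }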

On 2016-02-04 13:04:18 -0500, Joakim Erdfelt wrote:

Seems we are getting caught up in 2 JVM bugs.

http://bugs.java.com/view_bug.do?bug_id=4797189
http://bugs.java.com/view_bug.do?bug_id=6293787

The recommendations are to call .end() on each Deflater/Inflater that is instantiated.

I've implemented that in the [jetty-9.3.x] branch just now.
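
Reduced to a self-contained sketch, the pattern looks roughly like this (the class is hypothetical, not the actual CompressExtension change):

    import java.util.zip.Deflater;
    import java.util.zip.Inflater;

    // Illustrative sketch of deterministic cleanup. Calling end()
    // releases the native zlib buffers immediately instead of waiting
    // for finalize() (see JDK bugs 4797189 and 6293787 above).
    public class CompressingEndpoint implements AutoCloseable
    {
        private final Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
        private final Inflater inflater = new Inflater(true);

        @Override
        public void close()
        {
            deflater.end();
            inflater.end();
        }
    }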

On 2016-02-04 13:05:56 -0500, Joakim Erdfelt wrote:

See commit [jetty-9.3.x cfe823a].

Please give it a try.

On 2016-02-04 13:07:12 -0500, Joakim Erdfelt wrote:

Link to commit

http://git.eclipse.org/c/jetty/org.eclipse.jetty.project.git/commit/?h=jetty-9.3.x&id=cfe823a

On 2016-02-04 14:16:18 -0500, Rafael Campos wrote:

I will test that tomorrow ASAP and will report the results, as right now I have no access to the code.

BTW, I was debugging and it seems that the garbage collector invokes finalize() at some point, which leads to a call to end(), but the leak was still there. Maybe by that time the Deflater/Inflater has lost some internal references and that is causing the leak, and calling end() "manually" instead of waiting for the GC to finalize the object fixes the issue.

On 2016-02-05 06:43:16 -0500, Rafael Campos wrote:

I have been testing this but it doesn't seem to work. The leak is not so obvious now, as it doesn't start until we start sending/receiving data (I think because the inflater/deflater instantiation is now delayed), but again I've been able to raise the RSS size of the JVM up to 5G with just 3G of heap.

It seems that the JVM itself doesn't know about this extra memory: after adding the option -XX:NativeMemoryTracking=detail to the JVM and setting a baseline, the reported memory growth is not consistent with the increase in the actual size of the JVM process in the system.

Also, taking a look at /proc/{pid}/smaps, I can see several blocks like the following one:

    7f1aa0000000-7f1aa4000000 rw-p 00000000 00:00 0
    Size:              65536 kB
    Rss:               64136 kB
    Pss:               64136 kB
    Shared_Clean:          0 kB
    Shared_Dirty:          0 kB
    Private_Clean:         0 kB
    Private_Dirty:     64136 kB
    Referenced:        64136 kB
    Anonymous:         64136 kB
    AnonHugePages:         0 kB
    Swap:                  0 kB
    KernelPageSize:        4 kB
    MMUPageSize:           4 kB
    Locked:                0 kB

And taking a look at the memory in that region (gdb --batch --pid {pid} -ex "dump memory test 0x7f1aa0000000 0x7f1aa4000000"), I can see with a hexadecimal editor the messages that have been sent through the websocket, so I guess this is the memory leaked by the inflaters/deflaters.

jmcc0nn3ll added the "Bug" label Feb 16, 2016
joakime self-assigned this Feb 16, 2016
joakime (Contributor) commented Feb 17, 2016

Bug #300 was opened to address the Deflater/Inflater memory leak issue.
We'll keep this one open until that object pool has been integrated and proves successful.

hitenny commented Aug 18, 2017

Is there any update on this issue? We are also facing massive memory leaks due to this. We disabled websocket compression by unregistering the permessage-deflate extension, which solved the memory leaks, but it caused slowness in sending data to the websocket remote endpoint and resulted in production issues.

gregw (Contributor) commented Jul 18, 2018

closing in favour of #300
