neilprosser changed the title from "Possible memory leak while serializing dates?" to "Possible memory issue while serializing dates?" on Jan 14, 2016
Sorry, I amended the issue title because I don't think this is a leak in the strict sense. It looks like the created instances would be garbage collected if they became unreachable, but they're always referenced by something.
I'm having problems with a memory issue while using Logstash which I've tracked down to a large number of `com.fasterxml.jackson.databind.module.SimpleSerializers` instances hanging around in memory.

I've grabbed object allocation reports and it looks like JrJackson is creating these objects in https://github.com/guyboertje/jrjackson/blob/master/src/main/java/com/jrjackson/RubyJacksonModule.java#L56, called from https://github.com/guyboertje/jrjackson/blob/master/src/main/java/com/jrjackson/JrJacksonBase.java#L72. I've put the allocation call tree in a gist (https://gist.github.com/neilprosser/3298ada15ea5637a01bf) (apologies for the size) so you can see where the calls are coming from; from what I can see it's the JSON serialization for the Elasticsearch output and the Redis output.
Looking at the code in `JrJacksonBase.java` (and assuming I don't have a custom date format, since I haven't explicitly set one anywhere; I'm just using Logstash with no date format mentioned in config or environment variables), does the code need to create a new provider every time if we're using the marker instance? Couldn't it cache that provider and reuse it whenever we're going to use `RDF`?

Hopefully you don't mind me creating the issue here. I figured since it's JSON serialization, and it's come from two places, it would be worth starting here; I can close this issue and move it somewhere else if we find it shouldn't be here.
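To make the suggestion concrete, here's roughly the shape of change I had in mind. This is a minimal sketch in plain Java, not JrJackson's actual code: `Provider`, `ProviderCache`, and `forFormat` are placeholder names standing in for the real provider construction. The idea is simply to build the provider once for the common default-format (marker) case and reuse it, only allocating a fresh one when a custom date format is actually supplied.

```java
import java.text.SimpleDateFormat;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the caching idea. Provider is a stand-in for the real
// serializer provider; it is NOT JrJackson's or Jackson's actual API.
final class ProviderCache {

    static final class Provider {
        final SimpleDateFormat format; // null means "use the default/marker format"
        Provider(SimpleDateFormat format) { this.format = format; }
    }

    // One shared provider for the common case (no custom date format).
    private static final AtomicReference<Provider> DEFAULT = new AtomicReference<>();

    static Provider forFormat(SimpleDateFormat custom) {
        if (custom != null) {
            // A custom format still gets its own provider, as today.
            return new Provider(custom);
        }
        // Default case: create once, then reuse, instead of allocating a new
        // provider (and its SimpleSerializers) on every serialize call.
        Provider cached = DEFAULT.get();
        if (cached == null) {
            DEFAULT.compareAndSet(null, new Provider(null));
            cached = DEFAULT.get();
        }
        return cached;
    }
}
```

With something along these lines, the serialize calls coming from the Elasticsearch and Redis outputs would hit the same provider instance every time, so the per-call `SimpleSerializers` churn should disappear.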
I've still got a broken instance running with a profiling agent available if you need more information. The symptoms I see are base memory usage (after old generation garbage collection) gradually increasing until Logstash spends all its time in garbage collection.