
Out of memory error with elasticsearch-2.0.0-rc1 on Windows Server 2012 #14312

Closed

raphaele81 opened this issue Oct 27, 2015 · 1 comment

@raphaele81
I'm currently evaluating ELK for my company. I played successfully with logstash-2.0.0-betaX and elasticsearch-2.0.0-betaX and upgraded to elasticsearch-2.0.0-rc1 a few days ago.

Since then, I get a "java.lang.OutOfMemoryError: GC overhead limit exceeded" after Elasticsearch has been running for only a few minutes.

Here is my setup:

  • ELK running on Windows Server 2012 with 8 GB of RAM
  • logstash-2.0.0-beta3 indexing log files (accessible through a network share) and logs stored in Oracle databases (logstash-input-jdbc)
  • elasticsearch-2.0.0-rc1 with the Shield plugin

Major changes in my elasticsearch.yml configuration:

threadpool.index.queue_size: 1000
threadpool.search.queue_size: 10000
threadpool.bulk.queue_size: 100
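
As a side note, the actual fill level of these queues can be watched at runtime with the _cat thread_pool API; a minimal sketch, assuming curl is available and using a hypothetical Shield user es_admin:

rem Hypothetical Shield credentials; the default output includes active/queue/rejected columns per pool
curl -u es_admin:changeme -XGET "http://localhost:9200/_cat/thread_pool?v"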

ES_HEAP_SIZE

set ES_MIN_MEM=2g
set ES_MAX_MEM=2g
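
As a side note, the same 2g heap can also be set through the single ES_HEAP_SIZE variable before launching the Windows batch script; a minimal sketch, assuming the stock bin\elasticsearch.bat is used:

rem Assumption: elasticsearch.bat picks up ES_HEAP_SIZE and applies it as both min and max heap
set ES_HEAP_SIZE=2g
bin\elasticsearch.bat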

The exact same setup with elasticsearch-2.0.0-beta1 works fine.

Here is the complete stack trace:

[2015-10-27 15:07:15,107][INFO ][monitor.jvm              ] [node_test_dc_1] [gc][old][489][24] duration [5.3s], collections [1]/[5.4s], total [5.3s]/[1.5m], memory [1.5gb]->[1.5gb]/[1.9gb], all_pools {[young] [243.1mb]->[245.5mb]/[268.5mb]}{[survivor] [0b]->[0b]/[205mb]}{[old] [1.3gb]->[1.3gb]/[1.3gb]}
[2015-10-27 15:09:11,562][INFO ][monitor.jvm              ] [node_test_dc_1] [gc][old][521][62] duration [7s], collections [1]/[7s], total [7s]/[3.4m], memory [1.5gb]->[1.5gb]/[1.9gb], all_pools {[young] [258.3mb]->[259.7mb]/[268.5mb]}{[survivor] [0b]->[0b]/[205mb]}{[old] [1.3gb]->[1.3gb]/[1.3gb]}
[2015-10-27 15:11:52,501][WARN ][index.engine             ] [node_test_dc_1] [etl-2015.10.14][0] Failed to close SearcherManager
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1563)
    at java.lang.Class.checkPackageAccess(Class.java:2372)
    at java.lang.Class.checkMemberAccess(Class.java:2351)
    at java.lang.Class.getMethod(Class.java:1783)
    at org.apache.lucene.store.MMapDirectory$2$1.run(MMapDirectory.java:289)
    at org.apache.lucene.store.MMapDirectory$2$1.run(MMapDirectory.java:286)
    at java.security.AccessController.doPrivileged(Native Method)
    at org.apache.lucene.store.MMapDirectory$2.freeBuffer(MMapDirectory.java:286)
    at org.apache.lucene.store.ByteBufferIndexInput.freeBuffer(ByteBufferIndexInput.java:378)
    at org.apache.lucene.store.ByteBufferIndexInput.close(ByteBufferIndexInput.java:357)
    at org.apache.lucene.util.IOUtils.close(IOUtils.java:96)
    at org.apache.lucene.util.IOUtils.close(IOUtils.java:83)
    at org.apache.lucene.codecs.lucene50.Lucene50CompoundReader.close(Lucene50CompoundReader.java:120)
    at org.apache.lucene.util.IOUtils.close(IOUtils.java:96)
    at org.apache.lucene.util.IOUtils.close(IOUtils.java:83)
    at org.apache.lucene.index.SegmentCoreReaders.decRef(SegmentCoreReaders.java:152)
    at org.apache.lucene.index.SegmentReader.doClose(SegmentReader.java:169)
    at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:253)
    at org.apache.lucene.index.StandardDirectoryReader.doClose(StandardDirectoryReader.java:359)
    at org.apache.lucene.index.FilterDirectoryReader.doClose(FilterDirectoryReader.java:134)
    at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:253)
    at org.apache.lucene.search.SearcherManager.decRef(SearcherManager.java:130)
    at org.apache.lucene.search.SearcherManager.decRef(SearcherManager.java:58)
    at org.apache.lucene.search.ReferenceManager.release(ReferenceManager.java:274)
    at org.apache.lucene.search.ReferenceManager.swapReference(ReferenceManager.java:62)
    at org.apache.lucene.search.ReferenceManager.close(ReferenceManager.java:146)
    at org.apache.lucene.util.IOUtils.close(IOUtils.java:96)
    at org.apache.lucene.util.IOUtils.close(IOUtils.java:83)
    at org.elasticsearch.index.engine.InternalEngine.closeNoLock(InternalEngine.java:954)
    at org.elasticsearch.index.engine.Engine.failEngine(Engine.java:517)
    at org.elasticsearch.index.engine.Engine.maybeFailEngine(Engine.java:556)
    at org.elasticsearch.index.engine.InternalEngine.maybeFailEngine(InternalEngine.java:886)

Heap dump analysis:
[image: es_leak_suspects]
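
As a side note, a heap dump like this can be captured with the JDK's jmap tool; a minimal sketch, assuming jmap is on the PATH and 1234 is a hypothetical Elasticsearch PID:

rem Dumps only live objects in binary format for offline analysis
jmap -dump:live,format=b,file=es_heap.hprof 1234

The resulting .hprof file can then be opened in a heap analyzer such as Eclipse MAT, which produces the kind of "leak suspects" report referenced above.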

@rmuir
Contributor

rmuir commented Oct 27, 2015

Hi, thanks for testing. This is an issue with that RC where filter readers are not evicted properly; it is fixed by:

rmuir closed this as completed on Oct 27, 2015