
ElasticSearch: Out of Memory/Unable to create new native thread #253

Closed
hariharshankar opened this Issue May 2, 2013 · 4 comments

3 participants

@hariharshankar

In a titan/cassandra/elasticsearch instance, I am trying to start multiple processes that would write to the graph simultaneously and I get the exception below.
Elasticsearch is running with -Xmx15G -Xms15G.
Each write process runs at about 250M, and the exception below is thrown as soon as the third process is opened.

[marko@p] $ bin/batch-write.sh 
13/05/02 12:24:00 INFO diskstorage.Backend: Configuring index [search] based on: 
 backend: elasticsearch
hostname: 127.0.0.1
client-only: true

13/05/02 12:24:01 INFO elasticsearch.plugins: [Kid Nova] loaded [], sites []
Exception in thread "main" java.lang.IllegalArgumentException: Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.es.ElasticSearchIndex
        at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:265)
        at com.thinkaurelius.titan.diskstorage.Backend.getIndexes(Backend.java:236)
        at com.thinkaurelius.titan.diskstorage.Backend.<init>(Backend.java:97)
        at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:411)
        at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.<init>(StandardTitanGraph.java:62)
        at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:40)
        at com.thinkaurelius.titan.core.TitanFactory$open.call(Unknown Source)
        at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
        at gov.lanl.egosystem.GraphHolder.getGraph(GraphHolder.groovy:26)
        at gov.lanl.egosystem.GraphHolder$getGraph.call(Unknown Source)
        at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:112)
        at gov.lanl.egosystem.discovery.utils.BatchTwitter.main(BatchTwitter.groovy:22)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:254)
        ... 15 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:640)
        at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
        at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker.start(DeadLockProofWorker.java:38)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:343)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.<init>(AbstractNioSelector.java:95)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.<init>(AbstractNioWorker.java:51)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.<init>(NioWorker.java:45)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:45)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:28)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorkerPool.newWorker(AbstractNioWorkerPool.java:99)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorkerPool.init(AbstractNioWorkerPool.java:69)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.<init>(NioWorkerPool.java:39)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.<init>(NioWorkerPool.java:33)
        at org.elasticsearch.transport.netty.NettyTransport.doStart(NettyTransport.java:240)
        at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
        at org.elasticsearch.transport.TransportService.doStart(TransportService.java:90)
        at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
        at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:179)
        at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:119)
        at com.thinkaurelius.titan.diskstorage.es.ElasticSearchIndex.<init>(ElasticSearchIndex.java:136)
        ... 20 more
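For context (not stated in the thread itself): despite the `OutOfMemoryError` name, `unable to create new native thread` usually means the JVM hit an OS-level limit on threads or processes for the user, not heap exhaustion, which is why raising `-Xmx`/`-Xms` does not help. On Linux the relevant limits can be inspected roughly like this:

```shell
# Max user processes; on Linux this also caps thread creation, since
# each Java thread is a native task. The default is often 1024.
ulimit -u

# System-wide ceiling on threads (Linux-specific; ignored elsewhere):
cat /proc/sys/kernel/threads-max 2>/dev/null || true
```

With Elasticsearch, Cassandra, and several Titan client JVMs each spinning up netty worker pools for the same user, a 1024-process default is easy to exhaust.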
@mbroecheler
Aurelius member
@okram okram was assigned May 3, 2013
@okram
Aurelius member
okram commented May 6, 2013

Harish will be testing this today and will report back.

@hariharshankar

As Matthias mentioned, increasing the maximum number of processes (and hence threads) allowed per user seems to have solved the problem.
The ulimit -u value was increased from the default 1024 to 10240 (1024*10).
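For anyone applying the same fix, the limit can be raised per-session or made persistent; the file path and the username below are illustrative, not taken from the thread:

```shell
# Raise the per-user process/thread limit for the current shell and
# any JVMs launched from it (value taken from the fix above):
ulimit -u 10240

# To make it persistent on most Linux systems, add a line like this to
# /etc/security/limits.conf (requires root; re-login to apply):
#   marko  soft  nproc  10240
```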

@okram
Aurelius member
okram commented May 7, 2013