ElasticSearch: Out of Memory/Unable to create new native thread #253

Closed
hariharshankar opened this Issue May 2, 2013 · 5 comments

@hariharshankar

In a Titan/Cassandra/Elasticsearch instance, I am trying to start multiple processes that write to the graph simultaneously, and I get the exception below.
Elasticsearch is running with -Xmx15G -Xms15G.
Each write process runs at about 250M, and the exception is thrown as soon as the third process is opened.

[marko@p] $ bin/batch-write.sh 
13/05/02 12:24:00 INFO diskstorage.Backend: Configuring index [search] based on: 
 backend: elasticsearch
hostname: 127.0.0.1
client-only: true

13/05/02 12:24:01 INFO elasticsearch.plugins: [Kid Nova] loaded [], sites []
Exception in thread "main" java.lang.IllegalArgumentException: Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.es.ElasticSearchIndex
        at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:265)
        at com.thinkaurelius.titan.diskstorage.Backend.getIndexes(Backend.java:236)
        at com.thinkaurelius.titan.diskstorage.Backend.<init>(Backend.java:97)
        at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:411)
        at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.<init>(StandardTitanGraph.java:62)
        at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:40)
        at com.thinkaurelius.titan.core.TitanFactory$open.call(Unknown Source)
        at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
        at gov.lanl.egosystem.GraphHolder.getGraph(GraphHolder.groovy:26)
        at gov.lanl.egosystem.GraphHolder$getGraph.call(Unknown Source)
        at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:112)
        at gov.lanl.egosystem.discovery.utils.BatchTwitter.main(BatchTwitter.groovy:22)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:254)
        ... 15 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:640)
        at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
        at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker.start(DeadLockProofWorker.java:38)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:343)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.<init>(AbstractNioSelector.java:95)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.<init>(AbstractNioWorker.java:51)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.<init>(NioWorker.java:45)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:45)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:28)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorkerPool.newWorker(AbstractNioWorkerPool.java:99)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorkerPool.init(AbstractNioWorkerPool.java:69)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.<init>(NioWorkerPool.java:39)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.<init>(NioWorkerPool.java:33)
        at org.elasticsearch.transport.netty.NettyTransport.doStart(NettyTransport.java:240)
        at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
        at org.elasticsearch.transport.TransportService.doStart(TransportService.java:90)
        at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
        at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:179)
        at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:119)
        at com.thinkaurelius.titan.diskstorage.es.ElasticSearchIndex.<init>(ElasticSearchIndex.java:136)
        ... 20 more
@mbroecheler


mbroecheler May 2, 2013

Member

If I remember correctly, that exception does not actually mean that you
don't have enough memory, but that the JVM cannot create more threads. Check
that your OS allows more than 1024 threads per process.
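On Linux, the relevant limits can be inspected from a shell (a sketch; the paths and commands assume a typical Linux distribution):

```shell
# Per-user limit on processes ("max user processes"); on Linux,
# each thread counts against this limit. The default is often 1024.
ulimit -u

# System-wide ceiling on the total number of threads.
cat /proc/sys/kernel/threads-max
```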



@ghost ghost assigned okram May 3, 2013

@okram


okram May 6, 2013

Contributor

Harish will be testing this today and will report back.


@hariharshankar


hariharshankar May 7, 2013

As Matthias mentioned, increasing the number of threads per process seems to have solved the problem.
The ulimit -u value was increased from the default 1024 to 10240 (1024*10).
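For reference, the change can be sketched as follows (assuming bash on Linux; the limits.conf entries are illustrative and the user name "marko" is hypothetical):

```shell
# Raise the soft limit on user processes/threads for the current
# shell and its children (10240 = 1024 * 10).
ulimit -u 10240

# To make the limit persist across logins, entries such as the
# following can be added to /etc/security/limits.conf (requires root):
#   marko  soft  nproc  10240
#   marko  hard  nproc  10240
```

Note that `ulimit -u` only affects the current shell session; the Elasticsearch process must be restarted from a shell with the raised limit for the change to take effect.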


@okram


okram May 7, 2013

Contributor

I added this to the Useful Tips section of the Wiki.

https://github.com/thinkaurelius/titan/wiki/Titan-Limitations#useful-tips



@Ningshiqi


Ningshiqi Jul 11, 2017

@mbroecheler Thank you! It works. It was because the thread limit was too low!
