Elasticsearch is constantly falling #8477

Closed
cregev opened this issue Nov 13, 2014 · 5 comments
cregev commented Nov 13, 2014

Hello,

We are running Elasticsearch 1.3.4 with 3 dedicated master nodes, 3 no-data (client) nodes, and 15 data nodes. Since upgrading to the 1.3.x series we have suffered from major instability issues with our ES cluster.

Basically, what happens is that one of the master nodes hits a Java OutOfMemoryError, the cluster elects a new master, which then hits an OutOfMemoryError as well, and in parallel one of the data nodes also runs out of heap.

The strange thing is that on each node that hits an OutOfMemoryError, the Elasticsearch process is immediately killed and restarted (and after the restart the node rejoins the cluster), yet the cluster is unresponsive from that point on (the point at which a master and a data node hit the OutOfMemoryError).
Nothing makes the cluster recover from its unresponsive state except a full cluster restart.
Moreover, when looking at the cluster health using the ES API ("curl -XGET localhost:9200/_cluster/health?pretty") it shows the cluster's health as yellow, whereas the Marvel dashboard shows that all the indices have not been reporting for some time.

Having said that, I can't understand how the health API can show the cluster in yellow status while Marvel shows that none of the indices are reporting.
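
For reference, a minimal way to cross-check the cluster-level health against per-node heap usage (default host/port assumed; the jvm node-stats endpoint is the standard API, not anything specific to our setup):

# Cluster-level health (this is what reports "yellow")
curl -XGET 'localhost:9200/_cluster/health?pretty'

# Per-node JVM heap usage, to spot nodes that are close to OOM
curl -XGET 'localhost:9200/_nodes/stats/jvm?pretty'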

Attached is the log from today's crash:
[2014-11-13 09:19:44,374][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
at java.util.HashMap$KeySet.iterator(HashMap.java:912)
at java.util.HashSet.iterator(HashSet.java:172)
at java.util.Collections$UnmodifiableCollection$1.(Collections.java:1099)
at java.util.Collections$UnmodifiableCollection.iterator(Collections.java:1098)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:119)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
[2014-11-13 09:19:45,292][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
[2014-11-13 09:19:45,292][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
[2014-11-13 09:20:16,184][WARN ][index.merge.scheduler ] [elasticsearch-prod-hist06] [2014_11][3] failed to merge
java.lang.OutOfMemoryError: Java heap space
[2014-11-13 09:20:20,503][ERROR][index.engine.internal ] [elasticsearch-prod-hist06] [2014_11][3] failed to acquire searcher, source search
org.apache.lucene.store.AlreadyClosedException: this ReferenceManager is closed
at org.apache.lucene.search.ReferenceManager.acquire(ReferenceManager.java:98)
at org.elasticsearch.index.engine.internal.InternalEngine.acquireSearcher(InternalEngine.java:711)
at org.elasticsearch.index.shard.service.InternalIndexShard.acquireSearcher(InternalIndexShard.java:653)
at org.elasticsearch.index.shard.service.InternalIndexShard.acquireSearcher(InternalIndexShard.java:647)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:508)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:488)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:257)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:688)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:677)
at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:275)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
[2014-11-13 09:20:20,530][ERROR][index.engine.internal ] [elasticsearch-prod-hist06] [2014_11][3] failed to acquire searcher, source search
org.apache.lucene.store.AlreadyClosedException: this ReferenceManager is closed
at org.apache.lucene.search.ReferenceManager.acquire(ReferenceManager.java:98)
at org.elasticsearch.index.engine.internal.InternalEngine.acquireSearcher(InternalEngine.java:711)
at org.elasticsearch.index.shard.service.InternalIndexShard.acquireSearcher(InternalIndexShard.java:653)
at org.elasticsearch.index.shard.service.InternalIndexShard.acquireSearcher(InternalIndexShard.java:647)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:508)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:488)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:257)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:688)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:677)
at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:275)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
[2014-11-13 09:20:20,522][ERROR][index.engine.internal ] [elasticsearch-prod-hist06] [2014_11][3] failed to acquire searcher, source search
org.apache.lucene.store.AlreadyClosedException: this ReferenceManager is closed
at org.apache.lucene.search.ReferenceManager.acquire(ReferenceManager.java:98)
at org.elasticsearch.index.engine.internal.InternalEngine.acquireSearcher(InternalEngine.java:711)
at org.elasticsearch.index.shard.service.InternalIndexShard.acquireSearcher(InternalIndexShard.java:653)
at org.elasticsearch.index.shard.service.InternalIndexShard.acquireSearcher(InternalIndexShard.java:647)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:508)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:488)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:257)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:688)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:677)
at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:275)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
[2014-11-13 09:28:41,136][INFO ][action.admin.cluster.node.shutdown] [elasticsearch-prod-hist06] shutting down in [200ms]
[2014-11-13 09:28:41,350][INFO ][action.admin.cluster.node.shutdown] [elasticsearch-prod-hist06] initiating requested shutdown...
[2014-11-13 09:28:41,351][INFO ][node ] [elasticsearch-prod-hist06] stopping ...
[2014-11-13 09:28:41,468][INFO ][discovery.ec2 ] [elasticsearch-prod-hist06] master_left [[elasticsearch-prod-hist-master01][DWwFU-GDRsWOS4SiS8ja7Q][elasticsearch-prod-hist-master01.totango.com][inet[/10.144.199.124:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true}], reason [transport disconnected (with verified connect)]
[2014-11-13 09:28:41,488][INFO ][discovery.ec2 ] [elasticsearch-prod-hist06] master_left [[elasticsearch-prod-hist-master02][jyV18jyuRu2tk0fkRnPetA][elasticsearch-prod-hist-master02.totango.com][inet[/10.179.174.119:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true}], reason [failed to perform initial connect [[elasticsearch-prod-hist-master02][inet[/10.179.174.119:9300]] connect_timeout[30s]]]
[2014-11-13 09:28:41,488][INFO ][cluster.service ] [elasticsearch-prod-hist06] master {new [elasticsearch-prod-hist-master02][jyV18jyuRu2tk0fkRnPetA][elasticsearch-prod-hist-master02.totango.com][inet[/10.179.174.119:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true}, previous [elasticsearch-prod-hist-master01][DWwFU-GDRsWOS4SiS8ja7Q][elasticsearch-prod-hist-master01.totango.com][inet[/10.144.199.124:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true}}, removed {[elasticsearch-prod-hist-master01][DWwFU-GDRsWOS4SiS8ja7Q][elasticsearch-prod-hist-master01.totango.com][inet[/10.144.199.124:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true},}, reason: zen-disco-master_failed ([elasticsearch-prod-hist-master01][DWwFU-GDRsWOS4SiS8ja7Q][elasticsearch-prod-hist-master01.totango.com][inet[/10.144.199.124:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true})
[2014-11-13 09:28:41,497][WARN ][discovery.ec2 ] [elasticsearch-prod-hist06] not enough master nodes after master left (reason = failed to perform initial connect [[elasticsearch-prod-hist-master02][inet[/10.179.174.119:9300]] connect_timeout[30s]]), current nodes: {[elasticsearch-prod-hist11][SD_0-pzqRV6Nd5sLEWklKg][elasticsearch-prod-hist11.totango.com][inet[/10.218.139.4:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist05][l0oz1_vRQOuM2tgv9jlBVQ][elasticsearch-prod-hist05.totango.com][inet[/10.30.193.43:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist-nd03][NMHy5dyvTZeyZEaODIrr7A][elasticsearch-prod-hist-nd03.totango.com][inet[/10.69.53.78:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=false},[elasticsearch-prod-hist10][2MKPMbOtTraoVgP-tz_60Q][elasticsearch-prod-hist10.totango.com][inet[/10.144.216.229:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist06][Ekp72e2-Sl2wXZWIP7L22w][elasticsearch-prod-hist06.totango.com][inet[ip-10-63-144-12.ec2.internal/10.63.144.12:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist14][HuHAIiKWR4-r_-Y01y7uyQ][elasticsearch-prod-hist14.totango.com][inet[/10.69.17.49:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist07][vuHFjWwoR46_7Mbu-hwJ_g][elasticsearch-prod-hist07.totango.com][inet[/10.146.186.210:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist15][5KzNeqYLQuqLIVYdhVgc8Q][elasticsearch-prod-hist15.totango.com][inet[/10.203.179.73:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist09][y6eEqkI_Tqmw-CI9UH3zCw][elasticsearch-prod-hist09.totango.com][inet[/10.231.52.214:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist08][PHir_YKIQMu_TWzT9XfL_Q][elasticsearch-prod-hist08.totango.com][inet[/10.101.151.169:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist-nd02][dtiEiaMmQgGYgcFiFsJpTA][elasticsearch-prod-hist-nd02.totango.com][inet[/10.181.32.223:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=false},[elasticsearch-prod-hist-master03][uhrR_7uAT2ml1pYeWdQLKw][elasticsearch-prod-hist-master03.totango.com][inet[/10.154.233.247:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true},[elasticsearch-prod-hist01][aTACc8Y6StGq5tLEQwPTVw][elasticsearch-prod-hist01.totango.com][inet[/10.143.223.48:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist04][1FMiCjVOSXSSdowYKClEIA][elasticsearch-prod-hist04.totango.com][inet[/10.182.54.85:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist03][HhAGRdUaSpKKV4VxjfVymg][elasticsearch-prod-hist03.totango.com][inet[/10.7.144.161:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist-nd01][SARDt4MpRv2nXSBZ3IbB3Q][elasticsearch-prod-hist-nd01.totango.com][inet[/10.153.214.223:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, 
master=false},[elasticsearch-prod-hist02][NjMPR4fRSUi0mIkgfuaoJw][elasticsearch-prod-hist02.totango.com][inet[/10.41.173.149:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist13][5c4jQFWiSyKGeVLneeiqXg][elasticsearch-prod-hist13.totango.com][inet[/10.179.146.242:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist12][BcPwD4AtRd-gNdw3UHBXEA][elasticsearch-prod-hist12.totango.com][inet[/10.67.142.94:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},}
[2014-11-13 09:28:41,498][INFO ][cluster.service ] [elasticsearch-prod-hist06] removed {[elasticsearch-prod-hist11][SD_0-pzqRV6Nd5sLEWklKg][elasticsearch-prod-hist11.totango.com][inet[/10.218.139.4:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist05][l0oz1_vRQOuM2tgv9jlBVQ][elasticsearch-prod-hist05.totango.com][inet[/10.30.193.43:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist-nd03][NMHy5dyvTZeyZEaODIrr7A][elasticsearch-prod-hist-nd03.totango.com][inet[/10.69.53.78:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=false},[elasticsearch-prod-hist10][2MKPMbOtTraoVgP-tz_60Q][elasticsearch-prod-hist10.totango.com][inet[/10.144.216.229:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist14][HuHAIiKWR4-r_-Y01y7uyQ][elasticsearch-prod-hist14.totango.com][inet[/10.69.17.49:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist07][vuHFjWwoR46_7Mbu-hwJ_g][elasticsearch-prod-hist07.totango.com][inet[/10.146.186.210:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist15][5KzNeqYLQuqLIVYdhVgc8Q][elasticsearch-prod-hist15.totango.com][inet[/10.203.179.73:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist09][y6eEqkI_Tqmw-CI9UH3zCw][elasticsearch-prod-hist09.totango.com][inet[/10.231.52.214:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist08][PHir_YKIQMu_TWzT9XfL_Q][elasticsearch-prod-hist08.totango.com][inet[/10.101.151.169:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist-master02][jyV18jyuRu2tk0fkRnPetA][elasticsearch-prod-hist-master02.totango.com][inet[/10.179.174.119:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true},[elasticsearch-prod-hist-nd02][dtiEiaMmQgGYgcFiFsJpTA][elasticsearch-prod-hist-nd02.totango.com][inet[/10.181.32.223:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=false},[elasticsearch-prod-hist-master03][uhrR_7uAT2ml1pYeWdQLKw][elasticsearch-prod-hist-master03.totango.com][inet[/10.154.233.247:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true},[elasticsearch-prod-hist01][aTACc8Y6StGq5tLEQwPTVw][elasticsearch-prod-hist01.totango.com][inet[/10.143.223.48:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist04][1FMiCjVOSXSSdowYKClEIA][elasticsearch-prod-hist04.totango.com][inet[/10.182.54.85:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist03][HhAGRdUaSpKKV4VxjfVymg][elasticsearch-prod-hist03.totango.com][inet[/10.7.144.161:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist-nd01][SARDt4MpRv2nXSBZ3IbB3Q][elasticsearch-prod-hist-nd01.totango.com][inet[/10.153.214.223:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=false},[elasticsearch-prod-hist02][NjMPR4fRSUi0mIkgfuaoJw][elasticsearch-prod-hist02.totango.com][inet[/10.41.173.149:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, 
master=false},[elasticsearch-prod-hist13][5c4jQFWiSyKGeVLneeiqXg][elasticsearch-prod-hist13.totango.com][inet[/10.179.146.242:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist12][BcPwD4AtRd-gNdw3UHBXEA][elasticsearch-prod-hist12.totango.com][inet[/10.67.142.94:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},}, reason: zen-disco-master_failed ([elasticsearch-prod-hist-master02][jyV18jyuRu2tk0fkRnPetA][elasticsearch-prod-hist-master02.totango.com][inet[/10.179.174.119:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true})
[2014-11-13 09:28:41,505][WARN ][action.bulk ] [elasticsearch-prod-hist06] Failed to perform bulk/shard on remote replica [elasticsearch-prod-hist15][5KzNeqYLQuqLIVYdhVgc8Q][elasticsearch-prod-hist15.totango.com][inet[/10.203.179.73:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false}[2014_11][1]
org.elasticsearch.transport.NodeDisconnectedException: [elasticsearch-prod-hist15][inet[/10.203.179.73:9300]][bulk/shard/replica] disconnected
[2014-11-13 09:28:41,505][WARN ][action.bulk ] [elasticsearch-prod-hist06] Failed to perform bulk/shard on remote replica [elasticsearch-prod-hist15][5KzNeqYLQuqLIVYdhVgc8Q][elasticsearch-prod-hist15.totango.com][inet[/10.203.179.73:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false}[2014_11][1]
org.elasticsearch.transport.NodeDisconnectedException: [elasticsearch-prod-hist15][inet[/10.203.179.73:9300]][bulk/shard/replica] disconnected
[2014-11-13 09:28:41,506][WARN ][cluster.action.shard ] [elasticsearch-prod-hist06] can't send shard failed for [2014_11][1], node[5KzNeqYLQuqLIVYdhVgc8Q], [R], s[STARTED]. no master known.
[2014-11-13 09:28:41,507][WARN ][cluster.action.shard ] [elasticsearch-prod-hist06] can't send shard failed for [2014_11][1], node[5KzNeqYLQuqLIVYdhVgc8Q], [R], s[STARTED]. no master known.
[2014-11-13 09:28:41,984][WARN ][discovery.zen.ping.unicast] [elasticsearch-prod-hist06] failed to send ping to [[#cloud-i-c361d029-0][elasticsearch-prod-hist06.totango.com][inet[/10.7.144.161:9300]]]
org.elasticsearch.transport.RemoteTransportException: [elasticsearch-prod-hist03][inet[/10.7.144.161:9300]][discovery/zen/unicast]
Caused by: org.elasticsearch.ElasticsearchIllegalStateException: received ping request while stopped/closed
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.handlePingRequest(UnicastZenPing.java:392)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.access$2400(UnicastZenPing.java:59)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$UnicastPingRequestHandler.messageReceived(UnicastZenPing.java:430)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$UnicastPingRequestHandler.messageReceived(UnicastZenPing.java:414)
at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:217)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:111)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
[2014-11-13 09:28:41,984][WARN ][discovery.zen.ping.unicast] [elasticsearch-prod-hist06] failed to send ping to [[#cloud-i-e76adb0d-0][elasticsearch-prod-hist06.totango.com][inet[/10.30.193.43:9300]]]
org.elasticsearch.transport.RemoteTransportException: [elasticsearch-prod-hist05][inet[/10.30.193.43:9300]][discovery/zen/unicast]
Caused by: org.elasticsearch.ElasticsearchIllegalStateException: received ping request while stopped/closed
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.handlePingRequest(UnicastZenPing.java:392)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.access$2400(UnicastZenPing.java:59)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$UnicastPingRequestHandler.messageReceived(UnicastZenPing.java:430)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$UnicastPingRequestHandler.messageReceived(UnicastZenPing.java:414)
at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:217)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:111)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
[2014-11-13 09:28:41,984][WARN ][discovery.zen.ping.unicast] [elasticsearch-prod-hist06] failed to send ping to [[#cloud-i-3a6edfd0-0][elasticsearch-prod-hist06.totango.com][inet[/10.69.17.49:9300]]]
org.elasticsearch.transport.RemoteTransportException: [elasticsearch-prod-hist14][inet[/10.69.17.49:9300]][discovery/zen/unicast]
Caused by: org.elasticsearch.ElasticsearchIllegalStateException: received ping request while stopped/closed
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.handlePingRequest(UnicastZenPing.java:392)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.access$2400(UnicastZenPing.java:59)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$UnicastPingRequestHandler.messageReceived(UnicastZenPing.java:430)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$UnicastPingRequestHandler.messageReceived(UnicastZenPing.java:414)
at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:217)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:111)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
[2014-11-13 09:28:41,984][WARN ][discovery.zen.ping.unicast] [elasticsearch-prod-hist06] failed to send ping to [[#cloud-i-bd51e057-0][elasticsearch-prod-hist06.totango.com][inet[/10.179.146.242:9300]]]
org.elasticsearch.transport.RemoteTransportException: [elasticsearch-prod-hist13][inet[/10.179.146.242:9300]][discovery/zen/unicast]
Caused by: org.elasticsearch.ElasticsearchIllegalStateException: received ping request while stopped/closed
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.handlePingRequest(UnicastZenPing.java:392)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.access$2400(UnicastZenPing.java:59)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$UnicastPingRequestHandler.messageReceived(UnicastZenPing.java:430)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$UnicastPingRequestHandler.messageReceived(UnicastZenPing.java:414)
at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:217)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:111)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
[2014-11-13 09:28:42,207][INFO ][node ] [elasticsearch-prod-hist06] stopped
[2014-11-13 09:28:42,207][INFO ][node ] [elasticsearch-prod-hist06] closing ...
[2014-11-13 09:28:42,221][INFO ][node ] [elasticsearch-prod-hist06] closed
[2014-11-13 09:34:14,165][INFO ][node ] [elasticsearch-prod-hist06] version[1.3.4], pid[27430], build[a70f3cc/2014-09-30T09:07:17Z]
[2014-11-13 09:34:14,165][INFO ][node ] [elasticsearch-prod-hist06] initializing ...
[2014-11-13 09:34:14,380][INFO ][plugins ] [elasticsearch-prod-hist06] loaded [cloud-aws, marvel], sites [marvel]
[2014-11-13 09:34:18,354][INFO ][node ] [elasticsearch-prod-hist06] initialized
[2014-11-13 09:34:18,354][INFO ][node ] [elasticsearch-prod-hist06] starting ...
[2014-11-13 09:34:18,479][INFO ][transport ] [elasticsearch-prod-hist06] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[/10.63.144.12:9300]}
[2014-11-13 09:34:18,542][INFO ][discovery ] [elasticsearch-prod-hist06] totango_prod_hist/AdEjYlsIRw6W0p-NahHGKA
[2014-11-13 09:34:30,557][INFO ][cluster.service ] [elasticsearch-prod-hist06] detected_master [elasticsearch-prod-hist-master02][Ln7bOuJrS_akmcrMQ5QB8Q][elasticsearch-prod-hist-master02.totango.com][inet[/10.179.174.119:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true}, added {[elasticsearch-prod-hist13][7iNunaxMQdGB8dic_Y2lXg][elasticsearch-prod-hist13.totango.com][inet[/10.179.146.242:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist05][M2XgBd9JSmWHqRQZ5cgj5Q][elasticsearch-prod-hist05.totango.com][inet[/10.30.193.43:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist-nd03][Xp6HPZorQJKsZM4PjnXBog][elasticsearch-prod-hist-nd03.totango.com][inet[/10.69.53.78:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=false},[elasticsearch-prod-hist-master03][v_8fEEtiRSW70Uj4PKJJvg][elasticsearch-prod-hist-master03.totango.com][inet[/10.154.233.247:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true},[elasticsearch-prod-hist09][ZcjG4XaeQr6gh-a_DkNIWw][elasticsearch-prod-hist09.totango.com][inet[/10.231.52.214:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist-master01][Soj39jZ-RJ-r-k3-okNhGw][elasticsearch-prod-hist-master01.totango.com][inet[/10.144.199.124:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true},[elasticsearch-prod-hist-nd02][DauYY04MQei6CfVw6Xvkqw][elasticsearch-prod-hist-nd02.totango.com][inet[/10.181.32.223:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=false},[elasticsearch-prod-hist-nd01][Ch6aYSnDQkO3H_-aXXDckw][elasticsearch-prod-hist-nd01.totango.com][inet[/10.153.214.223:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=false},[elasticsearch-prod-hist15][qAn5cAEGSLS9YvcI7pDoOA][elasticsearch-prod-hist15.totango.com][inet[/10.203.179.73:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist01][HfAEifKqS86YIuTy0yLyxQ][elasticsearch-prod-hist01.totango.com][inet[/10.143.223.48:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist-master02][Ln7bOuJrS_akmcrMQ5QB8Q][elasticsearch-prod-hist-master02.totango.com][inet[/10.179.174.119:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true},[elasticsearch-prod-hist07][BqAauJRMS0yDcSgXkdrGIQ][elasticsearch-prod-hist07.totango.com][inet[/10.146.186.210:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist08][WXMjYXo-QbKE1eWOQebciw][elasticsearch-prod-hist08.totango.com][inet[/10.101.151.169:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist11][VbuIH5Y3THKLeJlXO2kIug][elasticsearch-prod-hist11.totango.com][inet[/10.218.139.4:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist14][e9hJgHCnQQuhdZPu5ImWjw][elasticsearch-prod-hist14.totango.com][inet[/10.69.17.49:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist02][ZRjhMm4QShu8EEuR8Q-HVw][elasticsearch-prod-hist02.totango.com][inet[/10.41.173.149:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},}, reason: 
zen-disco-receive(from master [[elasticsearch-prod-hist-master02][Ln7bOuJrS_akmcrMQ5QB8Q][elasticsearch-prod-hist-master02.totango.com][inet[/10.179.174.119:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true}])
[2014-11-13 09:34:30,809][INFO ][http ] [elasticsearch-prod-hist06] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[/10.63.144.12:9200]}
[2014-11-13 09:34:30,810][INFO ][node ] [elasticsearch-prod-hist06] started
[2014-11-13 09:34:31,550][INFO ][cluster.service ] [elasticsearch-prod-hist06] added {[elasticsearch-prod-hist10][E4y5yg4YRCiEkgiiz2t9HQ][elasticsearch-prod-hist10.totango.com][inet[/10.144.216.229:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},}, reason: zen-disco-receive(from master [[elasticsearch-prod-hist-master02][Ln7bOuJrS_akmcrMQ5QB8Q][elasticsearch-prod-hist-master02.totango.com][inet[/10.179.174.119:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true}])
[2014-11-13 09:34:32,559][INFO ][cluster.service ] [elasticsearch-prod-hist06] added {[elasticsearch-prod-hist04][oSanqbQPQA2Oi0yxLEa9hA][elasticsearch-prod-hist04.totango.com][inet[/10.182.54.85:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},}, reason: zen-disco-receive(from master [[elasticsearch-prod-hist-master02][Ln7bOuJrS_akmcrMQ5QB8Q][elasticsearch-prod-hist-master02.totango.com][inet[/10.179.174.119:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true}])
[2014-11-13 09:34:33,569][INFO ][cluster.service ] [elasticsearch-prod-hist06] added {[elasticsearch-prod-hist12][mNgZpVfrT5-IrtDGPWfURQ][elasticsearch-prod-hist12.totango.com][inet[/10.67.142.94:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},}, reason: zen-disco-receive(from master [[elasticsearch-prod-hist-master02][Ln7bOuJrS_akmcrMQ5QB8Q][elasticsearch-prod-hist-master02.totango.com][inet[/10.179.174.119:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true}])
[2014-11-13 09:34:34,580][INFO ][cluster.service ] [elasticsearch-prod-hist06] added {[elasticsearch-prod-hist03][lAj9n3NDQUOv9IbWtlLq4A][elasticsearch-prod-hist03.totango.com][inet[/10.7.144.161:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},}, reason: zen-disco-receive(from master [[elasticsearch-prod-hist-master02][Ln7bOuJrS_akmcrMQ5QB8Q][elasticsearch-prod-hist-master02.totango.com][inet[/10.179.174.119:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true}])
[2014-11-13 09:57:47,420][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
at org.elasticsearch.common.netty.buffer.ChannelBuffers.buffer(ChannelBuffers.java:134)
at org.elasticsearch.common.netty.buffer.HeapChannelBufferFactory.getBuffer(HeapChannelBufferFactory.java:68)
at org.elasticsearch.common.netty.buffer.AbstractChannelBufferFactory.getBuffer(AbstractChannelBufferFactory.java:48)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:80)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
[2014-11-13 09:59:03,811][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
[2014-11-13 09:59:04,613][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
[2014-11-13 09:59:04,643][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
[2014-11-13 10:02:30,821][WARN ][transport.netty ] [elasticsearch-prod-hist06] exception caught on transport layer [[id: 0x7eaa6ee8, /10.179.174.119:48642 => /10.63.144.12:9300]], closing connection
java.lang.OutOfMemoryError: Java heap space
[2014-11-13 10:02:40,859][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
[2014-11-13 10:02:40,860][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
[2014-11-13 10:02:40,859][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
[2014-11-13 10:02:49,551][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
[2014-11-13 10:02:51,862][WARN ][transport.netty ] [elasticsearch-prod-hist06] exception caught on transport layer [[id: 0x7eaa6ee8, /10.179.174.119:48642 => /10.63.144.12:9300]], closing connection
java.lang.OutOfMemoryError: Java heap space
[2014-11-13 10:04:16,167][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
[2014-11-13 10:04:16,167][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
[2014-11-13 10:04:16,168][WARN ][transport.netty ] [elasticsearch-prod-hist06] exception caught on transport layer [[id: 0x1b5ad6c8, /10.63.144.12:53190 => /10.179.146.242:9300]], closing connection
java.lang.OutOfMemoryError: Java heap space
[2014-11-13 10:04:17,117][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
[2014-11-13 10:04:16,167][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
[2014-11-13 10:06:42,143][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
[2014-11-13 10:07:13,079][INFO ][discovery.ec2 ] [elasticsearch-prod-hist06] master_left [[elasticsearch-prod-hist-master02][Ln7bOuJrS_akmcrMQ5QB8Q][elasticsearch-prod-hist-master02.totango.com][inet[/10.179.174.119:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true}], reason [failed to ping, tried [3] times, each with maximum [30s] timeout]
[2014-11-13 10:07:13,755][INFO ][cluster.service ] [elasticsearch-prod-hist06] master {new [elasticsearch-prod-hist-master01][Soj39jZ-RJ-r-k3-okNhGw][elasticsearch-prod-hist-master01.totango.com][inet[/10.144.199.124:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true}, previous [elasticsearch-prod-hist-master02][Ln7bOuJrS_akmcrMQ5QB8Q][elasticsearch-prod-hist-master02.totango.com][inet[/10.179.174.119:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true}}, removed {[elasticsearch-prod-hist-master02][Ln7bOuJrS_akmcrMQ5QB8Q][elasticsearch-prod-hist-master02.totango.com][inet[/10.179.174.119:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true},}, reason: zen-disco-master_failed ([elasticsearch-prod-hist-master02][Ln7bOuJrS_akmcrMQ5QB8Q][elasticsearch-prod-hist-master02.totango.com][inet[/10.179.174.119:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true})
[2014-11-13 10:07:18,907][INFO ][discovery.ec2 ] [elasticsearch-prod-hist06] master_left [[elasticsearch-prod-hist-master01][Soj39jZ-RJ-r-k3-okNhGw][elasticsearch-prod-hist-master01.totango.com][inet[/10.144.199.124:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true}], reason [no longer master]
[2014-11-13 10:07:18,908][WARN ][discovery.ec2 ] [elasticsearch-prod-hist06] not enough master nodes after master left (reason = no longer master), current nodes: {[elasticsearch-prod-hist13][7iNunaxMQdGB8dic_Y2lXg][elasticsearch-prod-hist13.totango.com][inet[/10.179.146.242:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist05][M2XgBd9JSmWHqRQZ5cgj5Q][elasticsearch-prod-hist05.totango.com][inet[/10.30.193.43:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist12][mNgZpVfrT5-IrtDGPWfURQ][elasticsearch-prod-hist12.totango.com][inet[/10.67.142.94:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist03][lAj9n3NDQUOv9IbWtlLq4A][elasticsearch-prod-hist03.totango.com][inet[/10.7.144.161:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist-nd03][Xp6HPZorQJKsZM4PjnXBog][elasticsearch-prod-hist-nd03.totango.com][inet[/10.69.53.78:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=false},[elasticsearch-prod-hist-master03][v_8fEEtiRSW70Uj4PKJJvg][elasticsearch-prod-hist-master03.totango.com][inet[/10.154.233.247:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true},[elasticsearch-prod-hist09][ZcjG4XaeQr6gh-a_DkNIWw][elasticsearch-prod-hist09.totango.com][inet[/10.231.52.214:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist10][E4y5yg4YRCiEkgiiz2t9HQ][elasticsearch-prod-hist10.totango.com][inet[/10.144.216.229:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist-nd02][DauYY04MQei6CfVw6Xvkqw][elasticsearch-prod-hist-nd02.totango.com][inet[/10.181.32.223:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=false},[elasticsearch-prod-hist-nd01][Ch6aYSnDQkO3H_-aXXDckw][elasticsearch-prod-hist-nd01.totango.com][inet[/10.153.214.223:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=false},[elasticsearch-prod-hist04][oSanqbQPQA2Oi0yxLEa9hA][elasticsearch-prod-hist04.totango.com][inet[/10.182.54.85:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist15][qAn5cAEGSLS9YvcI7pDoOA][elasticsearch-prod-hist15.totango.com][inet[/10.203.179.73:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist01][HfAEifKqS86YIuTy0yLyxQ][elasticsearch-prod-hist01.totango.com][inet[/10.143.223.48:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist06][AdEjYlsIRw6W0p-NahHGKA][elasticsearch-prod-hist06.totango.com][inet[ip-10-63-144-12.ec2.internal/10.63.144.12:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist07][BqAauJRMS0yDcSgXkdrGIQ][elasticsearch-prod-hist07.totango.com][inet[/10.146.186.210:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist08][WXMjYXo-QbKE1eWOQebciw][elasticsearch-prod-hist08.totango.com][inet[/10.101.151.169:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist11][VbuIH5Y3THKLeJlXO2kIug][elasticsearch-prod-hist11.totango.com][inet[/10.218.139.4:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, 
master=false},[elasticsearch-prod-hist14][e9hJgHCnQQuhdZPu5ImWjw][elasticsearch-prod-hist14.totango.com][inet[/10.69.17.49:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist02][ZRjhMm4QShu8EEuR8Q-HVw][elasticsearch-prod-hist02.totango.com][inet[/10.41.173.149:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},}
[2014-11-13 10:07:18,919][INFO ][cluster.service ] [elasticsearch-prod-hist06] removed {[elasticsearch-prod-hist13][7iNunaxMQdGB8dic_Y2lXg][elasticsearch-prod-hist13.totango.com][inet[/10.179.146.242:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist05][M2XgBd9JSmWHqRQZ5cgj5Q][elasticsearch-prod-hist05.totango.com][inet[/10.30.193.43:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist12][mNgZpVfrT5-IrtDGPWfURQ][elasticsearch-prod-hist12.totango.com][inet[/10.67.142.94:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist03][lAj9n3NDQUOv9IbWtlLq4A][elasticsearch-prod-hist03.totango.com][inet[/10.7.144.161:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist-nd03][Xp6HPZorQJKsZM4PjnXBog][elasticsearch-prod-hist-nd03.totango.com][inet[/10.69.53.78:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=false},[elasticsearch-prod-hist-master03][v_8fEEtiRSW70Uj4PKJJvg][elasticsearch-prod-hist-master03.totango.com][inet[/10.154.233.247:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true},[elasticsearch-prod-hist09][ZcjG4XaeQr6gh-a_DkNIWw][elasticsearch-prod-hist09.totango.com][inet[/10.231.52.214:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist10][E4y5yg4YRCiEkgiiz2t9HQ][elasticsearch-prod-hist10.totango.com][inet[/10.144.216.229:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist-master01][Soj39jZ-RJ-r-k3-okNhGw][elasticsearch-prod-hist-master01.totango.com][inet[/10.144.199.124:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true},[elasticsearch-prod-hist-nd02][DauYY04MQei6CfVw6Xvkqw][elasticsearch-prod-hist-nd02.totango.com][inet[/10.181.32.223:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=false},[elasticsearch-prod-hist-nd01][Ch6aYSnDQkO3H_-aXXDckw][elasticsearch-prod-hist-nd01.totango.com][inet[/10.153.214.223:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=false},[elasticsearch-prod-hist04][oSanqbQPQA2Oi0yxLEa9hA][elasticsearch-prod-hist04.totango.com][inet[/10.182.54.85:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist15][qAn5cAEGSLS9YvcI7pDoOA][elasticsearch-prod-hist15.totango.com][inet[/10.203.179.73:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist01][HfAEifKqS86YIuTy0yLyxQ][elasticsearch-prod-hist01.totango.com][inet[/10.143.223.48:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist07][BqAauJRMS0yDcSgXkdrGIQ][elasticsearch-prod-hist07.totango.com][inet[/10.146.186.210:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist08][WXMjYXo-QbKE1eWOQebciw][elasticsearch-prod-hist08.totango.com][inet[/10.101.151.169:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist11][VbuIH5Y3THKLeJlXO2kIug][elasticsearch-prod-hist11.totango.com][inet[/10.218.139.4:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, 
master=false},[elasticsearch-prod-hist14][e9hJgHCnQQuhdZPu5ImWjw][elasticsearch-prod-hist14.totango.com][inet[/10.69.17.49:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},[elasticsearch-prod-hist02][ZRjhMm4QShu8EEuR8Q-HVw][elasticsearch-prod-hist02.totango.com][inet[/10.41.173.149:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, master=false},}, reason: zen-disco-master_failed ([elasticsearch-prod-hist-master01][Soj39jZ-RJ-r-k3-okNhGw][elasticsearch-prod-hist-master01.totango.com][inet[/10.144.199.124:9300]]{max_local_storage_nodes=1, aws_availability_zone=us-east-1b, data=false, master=true})

Please advise.

Thanks,
Costya.

@clintongormley

Hi @CostyaRegev

Nothing that you show from the logs here indicates the root of the problem. However, from previous conversations with your team, I know that you have very high filter cache eviction rates. In other words, you are caching lots of filters which you never reuse.

I think you're running into #8249. The filter cache size doesn't take the cache key into account, just the value. I think you're filling up your heap with all of these cache keys, where the key is much bigger than the value. Then you're never reusing these filters, so you keep adding more filters, until you OOM.

We made a change in 1.3.5 to make each cached entry count for a minimum amount, even if it was smaller, so that each cached entry has more weight. This should help you.

But the real fix (as we've told you before) is to figure out which filters should be cached and which should not, and to disable caching for those that should not. Building a cached filter is more expensive than just using a filter, so it only makes sense for filters which are frequently reused.
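
For example, in the 1.x query DSL an individual filter can opt out of caching with "_cache": false. A minimal sketch (the index, field, and values are made up for illustration):

curl -XGET 'localhost:9200/my_index/_search?pretty' -d '{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "terms": {
          "account_id": ["a1", "a2"],
          "_cache": false
        }
      }
    }
  }
}'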

clintongormley self-assigned this Nov 14, 2014

cregev commented Nov 14, 2014

Hi @clintongormley

We have disabled most of our caching by using doc_values for those fields (by the way, we reindexed our whole cluster with new mapping settings, so most of the fields now use doc_values), but we are still facing problems with the cluster.
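
For context, this is roughly the kind of mapping we reindexed with; a sketch only, where the index, type, and field names are illustrative:

curl -XPUT 'localhost:9200/my_index' -d '{
  "mappings": {
    "my_type": {
      "properties": {
        "account_id": {
          "type": "string",
          "index": "not_analyzed",
          "doc_values": true
        }
      }
    }
  }
}'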

For example, every time we restart our cluster, the indices' ID cache starts out at a relatively high value,
which does not make sense because we are not using parent/child queries, so I have to run Elasticsearch's cache-clear API for the ID cache after every restart (see the sketch below).
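
The call I run after every restart looks roughly like this, assuming the clear-cache API on this version accepts an id_cache flag (the exact parameter name may differ):

# Clear only the ID cache (parent/child identifiers) across all indices
curl -XPOST 'localhost:9200/_cache/clear?id_cache=true'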

How can this situation happen?

Thanks,
Costya

@clintongormley

@CostyaRegev doc_values relate to fielddata, not to filter caching. When you see your memory usage rising, try clearing the filter cache:

POST /_cache/clear?filter

And see if your heap drops (it may take up to a minute). That will confirm my diagnosis.
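
As a concrete sketch of that check (default host/port assumed):

# Clear only the filter cache
curl -XPOST 'localhost:9200/_cache/clear?filter=true'

# Then watch heap_used on each node for a minute or so
curl -XGET 'localhost:9200/_nodes/stats/jvm?pretty'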

For the ID cache - that is built eagerly, not at query time. And there is no point in clearing it because it will just be rebuilt (at least for new segments). However, if you are not using parent/child queries at all (which makes me wonder why you have the feature enabled) then you can set fielddata loading to lazy.

But really, if you need parent/child, then you need the memory to hold the IDs, in which case you either need more RAM or more nodes.
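
A sketch of what the lazy-loading change could look like, assuming the _parent field on this version honours the standard per-field fielddata.loading setting (index and type names are illustrative):

# Sketch only: when (re)creating the child type's mapping, ask for lazy
# fielddata loading on _parent instead of the eager default.
curl -XPUT 'localhost:9200/my_index/_mapping/child_type' -d '{
  "child_type": {
    "_parent": {
      "type": "parent_type",
      "fielddata": { "loading": "lazy" }
    }
  }
}'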


cregev commented Nov 16, 2014

Hi Clinton,

Our cluster crashed a couple of times today, and clearing the filter cache did indeed help: after I cleared it, the cluster was back to normal. But something strange happened as well:
One of the data nodes got a Java OutOfMemoryError. I cleared the filter cache, but the data node did not return to the cluster. I then logged into the host and confirmed that the Elasticsearch service was running, yet "curl -XGET localhost:9200/_cluster/health?pretty" could not connect to the cluster. I had to kill -9 the process in order to restart the service, and only then did the node rejoin the cluster.

My question is this: when a node gets a Java OutOfMemoryError, shouldn't it rejoin the cluster once the process is restarted?

Another anomaly I found is that when I use the shutdown API ("curl -XPOST 'http://localhost:9200/_shutdown'"), it does not shut down all the nodes in the cluster, only some of them...
How can this situation happen?

Thanks,
Costya.

@clintongormley

Hi @CostyaRegev

After an OOM, the node is in an undefined state and has to be restarted. It sounds like you are getting into a split-brain condition. You need to deal with the underlying problem, which sounds like it is the cache key issue. Upgrading will help, but really you need to stop caching filters which are not reused.
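
For reference, the zen-discovery guard against split brain is discovery.zen.minimum_master_nodes; with 3 dedicated master nodes it should be 2. A minimal sketch of checking or updating it dynamically (the setting can also live in elasticsearch.yml):

# Require 2 of the 3 master-eligible nodes before a master can be elected
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "persistent": {
    "discovery.zen.minimum_master_nodes": 2
  }
}'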
