Cannot change index.queue_size #38907

Closed
tylfin opened this issue Feb 14, 2019 · 5 comments
Comments


tylfin commented Feb 14, 2019

Description

I want to increase the queue capacity for my ingest nodes, but I'm seeing conflicting information in the docs:

  • An example in the 6.5 docs that sets queue_size via elasticsearch.yml
  • A deprecation warning in the log output
  • A breaking-changes entry that says:

The bulk thread pool has been renamed to the write thread pool. This change was made to reflect the fact that this thread pool is used to execute all write operations: single-document index/delete/update requests, as well as bulk requests

Even with both thread_pool.index.queue_size: 500 and thread_pool.write.queue_size: 500 set, I still get logs reporting queue capacity = 200.

These nodes are running on beefy boxes; Kibana monitoring shows roughly 3% CPU utilization and 50% JVM memory usage. I don't want to scale horizontally until they are better utilized.

What's going on here?

Elasticsearch version (bin/elasticsearch --version):

# elasticsearch --version
Version: 6.5.4, Build: default/tar/d2ef93d/2018-12-17T21:17:40.758843Z, JVM: 11.0.1

OS version (uname -a if on a Unix-like system):

Linux es-logging-client-5b8b9b966d-gqb4h 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

Description of the problem including expected versus actual behavior:
I set thread_pool.index.queue_size according to the example in the docs: https://www.elastic.co/guide/en/elasticsearch/reference/6.5/modules-threadpool.html#types

Steps to reproduce:

  1. Set thread_pool.index.queue_size: 500
  2. Set thread_pool.write.queue_size: 500
  3. Observe the deprecation warning for thread_pool.index.queue_size
  4. Observe rejections reporting queue capacity = 200 (queued tasks = 200)

Provide logs (if relevant):

Deprecated?

[es-logging-client-5b8b9b966d-gqb4h] [thread_pool.index.queue_size] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
[2019-02-14T15:46:45,612][WARN ][o.e.x.m.e.l.LocalExporter] [es-logging-client-5b8b9b966d-gqb4h] unexpected error while indexing monitoring document
org.elasticsearch.xpack.monitoring.exporter.ExportException: RemoteTransportException[[es-logging-data-1][10.203.8.102:9300][indices:data/write/bulk[s]]]; nested: RemoteTransportException[[es-logging-data-1][10.203.8.102:9300][indices:data/write/bulk[s][p]]]; nested: EsRejectedExecutionException[rejected execution of processing of [17279822][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[.monitoring-kibana-6-2019.02.14][0]] containing [index {[.monitoring-kibana-6-2019.02.14][doc][fMew7GgB1_IsaVrtVI3q], source[{"cluster_uuid":"M5HlxPzwTt2aYdtk2Qy0bQ","timestamp":"2019-02-14T15:46:45.606Z","interval_ms":10000,"type":"kibana_stats","source_node":{"uuid":"cQKLh37WRRui5NHeLYBUpA","host":"10.203.11.177","transport_address":"10.203.11.177:9300","ip":"10.203.11.177","name":"es-logging-client-5b8b9b966d-gqb4h","timestamp":"2019-02-14T15:46:45.606Z"},"kibana_stats":{"kibana":{"uuid":"342991d9-41ca-4c34-ab20-092bd06bb475","name":"kibana","index":".kibana","host":"0","transport_address":"0:5601","version":"6.5.4","snapshot":false,"status":"green"},"usage":{"kql":{"optInCount":0,"optOutCount":0,"defaultQueryLanguage":"default-lucene"},"rollups":{"index_patterns":{"total":0},"saved_searches":{"total":0},"visualizations":{"total":0,"saved_searches":{"total":0}}},"index":".kibana","dashboard":{"total":3},"visualization":{"total":41},"search":{"total":23},"index_pattern":{"total":2},"graph_workspace":{"total":0},"timelion_sheet":{"total":0},"xpack":{"spaces":{"available":true,"enabled":true,"count":2},"reporting":{"available":true,"enabled":true,"browser_type":"chromium","_all":0,"csv":{"available":true,"total":0},"printable_pdf":{"available":false,"total":0},"status":{},"lastDay":{"_all":0,"csv":{"available":true,"total":0},"printable_pdf":{"available":false,"total":0},"status":{}},"last7Days":{"_all":0,"csv":{"available":true,"total":0},"printable_pdf":{"available":false,"total":0},"status":{}}}},"infraops":{"last_24_hours":{"hits":{"infraops_hosts":0,"infraops_docker":0,"infraops_kubernetes":0,"logs":0}}}}}}]}], target allocation id: BFk1CzPxRUSrEzVXYt2Kjg, primary term: 1 on EsThreadPoolExecutor[name = es-logging-data-1/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@5f821da9[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 6422152]]];
	at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:128) ~[x-pack-monitoring-6.5.4.jar:6.5.4]
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) ~[?:?]
	at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) ~[?:?]
	at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:?]
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?]
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[?:?]
	at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) ~[?:?]
	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) ~[?:?]
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497) ~[?:?]
	at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:129) [x-pack-monitoring-6.5.4.jar:6.5.4]
	at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:111) [x-pack-monitoring-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:60) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:43) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:85) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:81) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.bulk.TransportBulkAction$BulkRequestModifier.lambda$wrapActionListenerIfNeeded$0(TransportBulkAction.java:607) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:60) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:414) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onFailure(TransportBulkAction.java:409) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:91) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:901) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$1.handleException(TransportReplicationAction.java:859) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1130) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TcpTransport.lambda$handleException$32(TcpTransport.java:1268) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.common.util.concurrent.EsExecutors$1.execute(EsExecutors.java:135) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TcpTransport.handleException(TcpTransport.java:1266) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TcpTransport.handlerResponseError(TcpTransport.java:1258) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1188) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:65) [transport-netty4-client-6.5.4.jar:6.5.4]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323) [netty-codec-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:426) [netty-codec-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278) [netty-codec-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) [netty-handler-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) [netty-common-4.1.30.Final.jar:4.1.30.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.RemoteTransportException: [es-logging-data-1][10.203.8.102:9300][indices:data/write/bulk[s]]
Caused by: org.elasticsearch.transport.RemoteTransportException: [es-logging-data-1][10.203.8.102:9300][indices:data/write/bulk[s][p]]
Caused by: org.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution of processing of [17279822][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[.monitoring-kibana-6-2019.02.14][0]] containing [index {[.monitoring-kibana-6-2019.02.14][doc][fMew7GgB1_IsaVrtVI3q], source[{"cluster_uuid":"M5HlxPzwTt2aYdtk2Qy0bQ","timestamp":"2019-02-14T15:46:45.606Z","interval_ms":10000,"type":"kibana_stats","source_node":{"uuid":"cQKLh37WRRui5NHeLYBUpA","host":"10.203.11.177","transport_address":"10.203.11.177:9300","ip":"10.203.11.177","name":"es-logging-client-5b8b9b966d-gqb4h","timestamp":"2019-02-14T15:46:45.606Z"},"kibana_stats":{"kibana":{"uuid":"342991d9-41ca-4c34-ab20-092bd06bb475","name":"kibana","index":".kibana","host":"0","transport_address":"0:5601","version":"6.5.4","snapshot":false,"status":"green"},"usage":{"kql":{"optInCount":0,"optOutCount":0,"defaultQueryLanguage":"default-lucene"},"rollups":{"index_patterns":{"total":0},"saved_searches":{"total":0},"visualizations":{"total":0,"saved_searches":{"total":0}}},"index":".kibana","dashboard":{"total":3},"visualization":{"total":41},"search":{"total":23},"index_pattern":{"total":2},"graph_workspace":{"total":0},"timelion_sheet":{"total":0},"xpack":{"spaces":{"available":true,"enabled":true,"count":2},"reporting":{"available":true,"enabled":true,"browser_type":"chromium","_all":0,"csv":{"available":true,"total":0},"printable_pdf":{"available":false,"total":0},"status":{},"lastDay":{"_all":0,"csv":{"available":true,"total":0},"printable_pdf":{"available":false,"total":0},"status":{}},"last7Days":{"_all":0,"csv":{"available":true,"total":0},"printable_pdf":{"available":false,"total":0},"status":{}}}},"infraops":{"last_24_hours":{"hits":{"infraops_hosts":0,"infraops_docker":0,"infraops_kubernetes":0,"logs":0}}}}}}]}], target allocation id: BFk1CzPxRUSrEzVXYt2Kjg, primary term: 1 on EsThreadPoolExecutor[name = es-logging-data-1/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@5f821da9[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 6422152]]
	at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:48) ~[elasticsearch-6.5.4.jar:6.5.4]
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:825) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1355) ~[?:?]
	at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.doExecute(EsThreadPoolExecutor.java:98) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.execute(EsThreadPoolExecutor.java:93) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TransportService.sendLocalRequest(TransportService.java:713) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TransportService.access$000(TransportService.java:82) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TransportService$3.sendRequest(TransportService.java:151) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:657) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$1.sendRequest(SecurityServerTransportInterceptor.java:137) ~[?:?]
	at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:572) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:560) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.performAction(TransportReplicationAction.java:830) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.performLocalAction(TransportReplicationAction.java:748) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:736) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction.doExecute(TransportReplicationAction.java:169) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction.doExecute(TransportReplicationAction.java:97) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:167) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.apply(SecurityActionFilter.java:126) ~[?:?]
	at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:165) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$OperationTransportHandler.messageReceived(TransportReplicationAction.java:251) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$OperationTransportHandler.messageReceived(TransportReplicationAction.java:243) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:251) ~[?:?]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:309) ~[?:?]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1350) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.common.util.concurrent.EsExecutors$1.execute(EsExecutors.java:135) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TcpTransport.handleRequest(TcpTransport.java:1308) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1172) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:65) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:426) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[?:?]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[?:?]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) ~[?:?]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) ~[?:?]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) ~[?:?]
	at java.lang.Thread.run(Thread.java:834) ~[?:?]
[2019-02-14T15:46:45,816][WARN ][o.e.x.m.e.l.LocalExporter] [es-logging-client-5b8b9b966d-gqb4h] unexpected error while indexing monitoring document
org.elasticsearch.xpack.monitoring.exporter.ExportException: RemoteTransportException[[es-logging-data-1][10.203.8.102:9300][indices:data/write/bulk[s]]]; nested: RemoteTransportException[[es-logging-data-1][10.203.8.102:9300][indices:data/write/bulk[s][p]]]; nested: EsRejectedExecutionException[rejected execution of processing of [17279828][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[.monitoring-kibana-6-2019.02.14][0]] containing [index {[.monitoring-kibana-6-2019.02.14][doc][uMew7GgB1_IsaVrtVY22], source[{"cluster_uuid":"M5HlxPzwTt2aYdtk2Qy0bQ","timestamp":"2019-02-14T15:46:45.810Z","interval_ms":10000,"type":"kibana_stats","source_node":{"uuid":"cQKLh37WRRui5NHeLYBUpA","host":"10.203.11.177","transport_address":"10.203.11.177:9300","ip":"10.203.11.177","name":"es-logging-client-5b8b9b966d-gqb4h","timestamp":"2019-02-14T15:46:45.810Z"},"kibana_stats":{"kibana":{"uuid":"ed642391-1410-409c-b8c7-b40f1e49ac8f","name":"kibana","index":".kibana","host":"0","transport_address":"0:5601","version":"6.5.4","snapshot":false,"status":"green"},"usage":{"kql":{"optInCount":0,"optOutCount":0,"defaultQueryLanguage":"default-lucene"},"rollups":{"index_patterns":{"total":0},"saved_searches":{"total":0},"visualizations":{"total":0,"saved_searches":{"total":0}}},"index":".kibana","dashboard":{"total":3},"visualization":{"total":41},"search":{"total":23},"index_pattern":{"total":2},"graph_workspace":{"total":0},"timelion_sheet":{"total":0},"xpack":{"spaces":{"available":true,"enabled":true,"count":2},"reporting":{"available":true,"enabled":true,"browser_type":"chromium","_all":0,"csv":{"available":true,"total":0},"printable_pdf":{"available":false,"total":0},"status":{},"lastDay":{"_all":0,"csv":{"available":true,"total":0},"printable_pdf":{"available":false,"total":0},"status":{}},"last7Days":{"_all":0,"csv":{"available":true,"total":0},"printable_pdf":{"available":false,"total":0},"status":{}}}},"infraops":{"last_24_hours":{"hits":{"infraops_hosts":0,"infraops_docker":0,"infraops_kubernetes":0,"logs":0}}}}}}]}], target allocation id: BFk1CzPxRUSrEzVXYt2Kjg, primary term: 1 on EsThreadPoolExecutor[name = es-logging-data-1/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@5f821da9[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 6422154]]];
	at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:128) ~[x-pack-monitoring-6.5.4.jar:6.5.4]
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) ~[?:?]
	at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) ~[?:?]
	at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:?]
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?]
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[?:?]
	at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) ~[?:?]
	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) ~[?:?]
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497) ~[?:?]
	at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:129) [x-pack-monitoring-6.5.4.jar:6.5.4]
	at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:111) [x-pack-monitoring-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:60) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:43) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:85) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:81) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.bulk.TransportBulkAction$BulkRequestModifier.lambda$wrapActionListenerIfNeeded$0(TransportBulkAction.java:607) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:60) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:414) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onFailure(TransportBulkAction.java:409) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:91) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:901) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$1.handleException(TransportReplicationAction.java:859) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1130) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TcpTransport.lambda$handleException$32(TcpTransport.java:1268) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.common.util.concurrent.EsExecutors$1.execute(EsExecutors.java:135) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TcpTransport.handleException(TcpTransport.java:1266) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TcpTransport.handlerResponseError(TcpTransport.java:1258) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1188) [elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:65) [transport-netty4-client-6.5.4.jar:6.5.4]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323) [netty-codec-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:426) [netty-codec-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278) [netty-codec-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) [netty-handler-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) [netty-common-4.1.30.Final.jar:4.1.30.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.transport.RemoteTransportException: [es-logging-data-1][10.203.8.102:9300][indices:data/write/bulk[s]]
Caused by: org.elasticsearch.transport.RemoteTransportException: [es-logging-data-1][10.203.8.102:9300][indices:data/write/bulk[s][p]]
Caused by: org.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution of processing of [17279828][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[.monitoring-kibana-6-2019.02.14][0]] containing [index {[.monitoring-kibana-6-2019.02.14][doc][uMew7GgB1_IsaVrtVY22], source[{"cluster_uuid":"M5HlxPzwTt2aYdtk2Qy0bQ","timestamp":"2019-02-14T15:46:45.810Z","interval_ms":10000,"type":"kibana_stats","source_node":{"uuid":"cQKLh37WRRui5NHeLYBUpA","host":"10.203.11.177","transport_address":"10.203.11.177:9300","ip":"10.203.11.177","name":"es-logging-client-5b8b9b966d-gqb4h","timestamp":"2019-02-14T15:46:45.810Z"},"kibana_stats":{"kibana":{"uuid":"ed642391-1410-409c-b8c7-b40f1e49ac8f","name":"kibana","index":".kibana","host":"0","transport_address":"0:5601","version":"6.5.4","snapshot":false,"status":"green"},"usage":{"kql":{"optInCount":0,"optOutCount":0,"defaultQueryLanguage":"default-lucene"},"rollups":{"index_patterns":{"total":0},"saved_searches":{"total":0},"visualizations":{"total":0,"saved_searches":{"total":0}}},"index":".kibana","dashboard":{"total":3},"visualization":{"total":41},"search":{"total":23},"index_pattern":{"total":2},"graph_workspace":{"total":0},"timelion_sheet":{"total":0},"xpack":{"spaces":{"available":true,"enabled":true,"count":2},"reporting":{"available":true,"enabled":true,"browser_type":"chromium","_all":0,"csv":{"available":true,"total":0},"printable_pdf":{"available":false,"total":0},"status":{},"lastDay":{"_all":0,"csv":{"available":true,"total":0},"printable_pdf":{"available":false,"total":0},"status":{}},"last7Days":{"_all":0,"csv":{"available":true,"total":0},"printable_pdf":{"available":false,"total":0},"status":{}}}},"infraops":{"last_24_hours":{"hits":{"infraops_hosts":0,"infraops_docker":0,"infraops_kubernetes":0,"logs":0}}}}}}]}], target allocation id: BFk1CzPxRUSrEzVXYt2Kjg, primary term: 1 on EsThreadPoolExecutor[name = es-logging-data-1/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@5f821da9[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 6422154]]
	at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:48) ~[elasticsearch-6.5.4.jar:6.5.4]
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:825) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1355) ~[?:?]
	at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.doExecute(EsThreadPoolExecutor.java:98) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.execute(EsThreadPoolExecutor.java:93) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TransportService.sendLocalRequest(TransportService.java:713) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TransportService.access$000(TransportService.java:82) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TransportService$3.sendRequest(TransportService.java:151) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:657) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$1.sendRequest(SecurityServerTransportInterceptor.java:137) ~[?:?]
	at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:572) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:560) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.performAction(TransportReplicationAction.java:830) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.performLocalAction(TransportReplicationAction.java:748) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:736) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction.doExecute(TransportReplicationAction.java:169) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction.doExecute(TransportReplicationAction.java:97) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:167) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.apply(SecurityActionFilter.java:126) ~[?:?]
	at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:165) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$OperationTransportHandler.messageReceived(TransportReplicationAction.java:251) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$OperationTransportHandler.messageReceived(TransportReplicationAction.java:243) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:251) ~[?:?]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:309) ~[?:?]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1350) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.common.util.concurrent.EsExecutors$1.execute(EsExecutors.java:135) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TcpTransport.handleRequest(TcpTransport.java:1308) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1172) ~[elasticsearch-6.5.4.jar:6.5.4]
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:65) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:426) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[?:?]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[?:?]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) ~[?:?]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) ~[?:?]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) ~[?:?]
	at java.lang.Thread.run(Thread.java:834) ~[?:?]

Full elasticsearch.yml:

thread_pool.index.queue_size: 500
thread_pool.write.queue_size: 500
cluster.name: ${CLUSTER_NAME}
node.name: ${HOSTNAME}
network.host: 0.0.0.0
node.ingest: ${NODE_INGEST}
node.data: ${NODE_DATA}
node.master: ${NODE_MASTER}
cluster.remote.connect: false
node.ml: false

discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["es-logging-master-0.es-logging-master.logging.svc.cluster.local.","es-logging-master-1.es-logging-master.logging.svc.cluster.local.","es-logging-master-2.es-logging-master.logging.svc.cluster.local."]
processors: 8
discovery.zen.fd.ping_timeout: 50s
discovery.zen.fd.ping_retries: 5
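
For reference, a quick way to confirm which queue size each node actually applied is the nodes info API. A minimal sketch, assuming the cluster is reachable on localhost:9200 and that security allows the request (adjust host and credentials as needed):

# Show the effective write thread pool settings for every node
curl -s "localhost:9200/_nodes/thread_pool?pretty&filter_path=**.write"
# Any node still reporting "queue_size" : 200 has not picked up the new elasticsearch.yml
# (thread_pool.*.queue_size is a static node setting, so it only takes effect after a restart).
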
jasontedor (Member) commented Feb 14, 2019

I want to increase the queue capacity for my ingest nodes

Did you indeed set this only on your ingest nodes? The relevant place is the data nodes, where the bulk shard requests are actually executed on the write (formerly bulk) thread pool.

All appears to be well, otherwise:

12:32:46 [jason@totoro:~/src/elastic/elasticsearch] retention-lease-ccr+ 130 ± echo "thread_pool.write.queue_size: 500" > /tmp/elasticsearch.yml; echo "network.host: 0.0.0.0" >> /tmp/elasticsearch.yml; docker run --rm -it -e discovery.type=single-node -p 9200:9200 -p 9300:9300 -v /tmp/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml docker.elastic.co/elasticsearch/elasticsearch:6.5.0
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2019-02-14T17:32:49,841][WARN ][o.e.c.l.LogConfigurator  ] [unknown] Some logging configurations have %marker but don't have %node_name. We will automatically add %node_name to the pattern to ease the migration for users who customize log4j2.properties but will stop this behavior in 7.0. You should manually replace `%node_name` with `[%node_name]%marker ` in these locations:
  /usr/share/elasticsearch/config/log4j2.properties
[2019-02-14T17:32:50,075][INFO ][o.e.e.NodeEnvironment    ] [WRXPJM6] using [1] data paths, mounts [[/ (overlay)]], net usable_space [23.1gb], net total_space [48.9gb], types [overlay]
[2019-02-14T17:32:50,076][INFO ][o.e.e.NodeEnvironment    ] [WRXPJM6] heap size [989.8mb], compressed ordinary object pointers [true]
[2019-02-14T17:32:50,077][INFO ][o.e.n.Node               ] [WRXPJM6] node name derived from node ID [WRXPJM68SBOmfLLRXzxleA]; set [node.name] to override
[2019-02-14T17:32:50,078][INFO ][o.e.n.Node               ] [WRXPJM6] version[6.5.0], pid[1], build[default/tar/816e6f6/2018-11-09T18:58:36.352602Z], OS[Linux/4.20.6-200.fc29.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13]
[2019-02-14T17:32:50,078][INFO ][o.e.n.Node               ] [WRXPJM6] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.dNKooJub, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
[2019-02-14T17:32:51,237][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [aggs-matrix-stats]
[2019-02-14T17:32:51,237][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [analysis-common]
[2019-02-14T17:32:51,237][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [ingest-common]
[2019-02-14T17:32:51,237][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [lang-expression]
[2019-02-14T17:32:51,237][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [lang-mustache]
[2019-02-14T17:32:51,237][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [lang-painless]
[2019-02-14T17:32:51,237][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [mapper-extras]
[2019-02-14T17:32:51,238][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [parent-join]
[2019-02-14T17:32:51,238][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [percolator]
[2019-02-14T17:32:51,238][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [rank-eval]
[2019-02-14T17:32:51,238][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [reindex]
[2019-02-14T17:32:51,238][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [repository-url]
[2019-02-14T17:32:51,238][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [transport-netty4]
[2019-02-14T17:32:51,238][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [tribe]
[2019-02-14T17:32:51,238][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [x-pack-ccr]
[2019-02-14T17:32:51,238][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [x-pack-core]
[2019-02-14T17:32:51,238][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [x-pack-deprecation]
[2019-02-14T17:32:51,238][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [x-pack-graph]
[2019-02-14T17:32:51,239][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [x-pack-logstash]
[2019-02-14T17:32:51,239][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [x-pack-ml]
[2019-02-14T17:32:51,239][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [x-pack-monitoring]
[2019-02-14T17:32:51,239][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [x-pack-rollup]
[2019-02-14T17:32:51,239][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [x-pack-security]
[2019-02-14T17:32:51,239][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [x-pack-sql]
[2019-02-14T17:32:51,239][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [x-pack-upgrade]
[2019-02-14T17:32:51,239][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded module [x-pack-watcher]
[2019-02-14T17:32:51,240][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded plugin [ingest-geoip]
[2019-02-14T17:32:51,240][INFO ][o.e.p.PluginsService     ] [WRXPJM6] loaded plugin [ingest-user-agent]
[2019-02-14T17:32:53,615][INFO ][o.e.x.s.a.s.FileRolesStore] [WRXPJM6] parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]
[2019-02-14T17:32:53,965][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [WRXPJM6] [controller/82] [Main.cc@109] controller (64 bit): Version 6.5.0 (Build 71882a589e5556) Copyright (c) 2018 Elasticsearch BV
[2019-02-14T17:32:54,359][INFO ][o.e.d.DiscoveryModule    ] [WRXPJM6] using discovery type [single-node] and host providers [settings]
[2019-02-14T17:32:54,859][INFO ][o.e.n.Node               ] [WRXPJM6] initialized
[2019-02-14T17:32:54,859][INFO ][o.e.n.Node               ] [WRXPJM6] starting ...
[2019-02-14T17:32:55,008][INFO ][o.e.t.TransportService   ] [WRXPJM6] publish_address {172.17.0.2:9300}, bound_addresses {0.0.0.0:9300}
[2019-02-14T17:32:55,020][WARN ][o.e.b.BootstrapChecks    ] [WRXPJM6] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2019-02-14T17:32:55,059][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [WRXPJM6] publish_address {172.17.0.2:9200}, bound_addresses {0.0.0.0:9200}
[2019-02-14T17:32:55,059][INFO ][o.e.n.Node               ] [WRXPJM6] started
[2019-02-14T17:32:55,134][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [WRXPJM6] Failed to clear cache for realms [[]]
[2019-02-14T17:32:55,181][INFO ][o.e.g.GatewayService     ] [WRXPJM6] recovered [0] indices into cluster_state
[2019-02-14T17:32:55,322][INFO ][o.e.c.m.MetaDataIndexTemplateService] [WRXPJM6] adding template [.watch-history-9] for index patterns [.watcher-history-9*]
[2019-02-14T17:32:55,352][INFO ][o.e.c.m.MetaDataIndexTemplateService] [WRXPJM6] adding template [.watches] for index patterns [.watches*]
[2019-02-14T17:32:55,380][INFO ][o.e.c.m.MetaDataIndexTemplateService] [WRXPJM6] adding template [.triggered_watches] for index patterns [.triggered_watches*]
[2019-02-14T17:32:55,417][INFO ][o.e.c.m.MetaDataIndexTemplateService] [WRXPJM6] adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-6-*]
[2019-02-14T17:32:55,457][INFO ][o.e.c.m.MetaDataIndexTemplateService] [WRXPJM6] adding template [.monitoring-es] for index patterns [.monitoring-es-6-*]
[2019-02-14T17:32:55,489][INFO ][o.e.c.m.MetaDataIndexTemplateService] [WRXPJM6] adding template [.monitoring-alerts] for index patterns [.monitoring-alerts-6]
[2019-02-14T17:32:55,528][INFO ][o.e.c.m.MetaDataIndexTemplateService] [WRXPJM6] adding template [.monitoring-beats] for index patterns [.monitoring-beats-6-*]
[2019-02-14T17:32:55,560][INFO ][o.e.c.m.MetaDataIndexTemplateService] [WRXPJM6] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-6-*]
[2019-02-14T17:32:55,659][INFO ][o.e.l.LicenseService     ] [WRXPJM6] license [bf57109e-991d-4127-b942-1b8e28fa2819] mode [basic] - valid

And then:

12:32:58 [jason:~] $ curl -XGET "totoro.home.tedor.me:9200/_nodes/thread_pool?pretty=true&&filter_path=**.write"
{
  "nodes" : {
    "WRXPJM68SBOmfLLRXzxleA" : {
      "thread_pool" : {
        "write" : {
          "type" : "fixed",
          "min" : 20,
          "max" : 20,
          "queue_size" : 500
        }
      }
    }
  }
}
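
The same check can be scoped to the data nodes only, which is where the write queue matters for bulk indexing. A sketch assuming a similarly reachable cluster (host and port adjusted for your environment):

# The data:true node filter restricts the nodes info call to data nodes
curl -s "localhost:9200/_nodes/data:true/thread_pool?pretty&filter_path=**.write"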

tylfin (Author) commented Feb 14, 2019

@jasontedor I assumed the error was propagating from the ingest nodes... I'll reboot the data nodes and see if the issue persists... 😶

jasontedor (Member) commented Feb 14, 2019

No, you can see that in the exception message:

nested: RemoteTransportException[[es-logging-data-1][10.203.8.102:9300][indices:data/write/bulk[s][p]]]; nested: EsRejectedExecutionException[rejected execution of processing of [17279822][indices:data/write/bulk[s][p]]

It's a remote transport exception happening on es-logging-data-1, and the cause is a full queue on that node.
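
When rejections like the one above show up, the cat thread pool API gives a per-node view of the write queue, which makes it easy to spot the node that is actually rejecting. A minimal sketch, assuming a reachable 6.x cluster:

# One row per node: current queue depth, configured queue size, and cumulative rejections
curl -s "localhost:9200/_cat/thread_pool/write?v&h=node_name,name,active,queue,queue_size,rejected,completed"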

tylfin (Author) commented Feb 14, 2019

@jasontedor ++ yeah thanks my bad

tylfin closed this as completed Feb 14, 2019
jasontedor (Member) commented:
No worries!
