
Systemd startup timeout on 7.6 #60140

Closed
rossengeorgiev opened this issue Jul 23, 2020 · 6 comments · Fixed by #60345
Labels
>bug, :Delivery/Packaging (RPM and deb packaging, tar and zip archives, shell and batch scripts), Team:Delivery

Comments

rossengeorgiev commented Jul 23, 2020

Elasticsearch version (bin/elasticsearch --version):

Version: 7.6.0, Build: default/rpm/7f634e9f44834fbc12724506cc1da681b0c3b1e3/2020-02-06T00:09:00.449973Z, JVM: 13.0.2

Plugins installed: None

JVM version (java -version): 13.0.2

OS version (uname -a if on a Unix-like system): CentOS 7

Description of the problem including expected versus actual behavior:

The elasticsearch service is killed after a timeout when attempting to start it. The expected behaviour is that it simply starts.

I noticed that TimeoutStartSec is not set and the service uses Type=notify. My guess is that the service is not sending a notification to extend the default timeout, so systemd kills it.
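
(For reference, the unit type and the startup timeout systemd will actually apply can be checked with standard systemctl commands; output of course varies per system:)

```sh
# Show the installed unit, including Type= and any TimeoutStartSec=
systemctl cat elasticsearch.service

# Show the effective startup timeout and service type
systemctl show elasticsearch.service -p Type -p TimeoutStartUSec
```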

Steps to reproduce:

  1. Cold start a VM with 16 GB RAM running a single instance of Elasticsearch.
  2. The Elasticsearch service times out while trying to start.

There is no error message, or anything else of note, in elasticsearch.log.

Provide logs (if relevant):

(screenshot of the systemd failure output attached)
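
(For anyone reproducing this: the systemd side of the failure can also be captured as plain text rather than a screenshot, e.g.:)

```sh
# systemd's view of the unit for the current boot, as plain text
journalctl -u elasticsearch.service -b --no-pager
systemctl status elasticsearch.service --no-pager -l
```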

rossengeorgiev added the >bug and needs:triage labels Jul 23, 2020
dnhatn added the :Delivery/Packaging label and removed the needs:triage label Jul 23, 2020
elasticmachine (Collaborator)

Pinging @elastic/es-core-infra (:Core/Infra/Packaging)

elasticmachine added the Team:Core/Infra label Jul 23, 2020
rjernst (Member) commented Jul 27, 2020

Thanks very much for your interest in Elasticsearch.

Your issue is most likely a configuration and/or environment problem, as we have tests for basic cases like "does the systemd service start?". While it may still be a bug on our end, we like to direct these kinds of things to the forums. There's an active community there that can help diagnose your issue and provide suggestions on changes if necessary. This allows us to use GitHub for verified bug reports, feature requests, and pull requests.

Additionally, please note that images of text are both difficult to read and not searchable, so we ask that any text, such as systemd output, be provided as quoted text. And even if there are "no error messages" in elasticsearch.log, if there is an issue with Elasticsearch starting, it is best to include the log.

As this is unlikely to be a bug in Elasticsearch, I hope you don't mind that I close this issue. It can always be reopened in the future if further evidence points to a bug.

rjernst closed this as completed Jul 27, 2020
rossengeorgiev (Author)

Hi @rjernst, the Elasticsearch systemd service file and the notify integration are shipped by Elasticsearch, so this is definitely something to look at. The reason I included a screenshot is that I was on a console, and obtaining text output would have been too cumbersome. All relevant information is included and this is easily reproducible. I am incredibly disappointed by your response to this report.

rjernst (Member) commented Jul 28, 2020

@rossengeorgiev After looking back at our notify callback, I do see one potential edge case. We currently schedule a timer to fire every 15 seconds to extend the systemd timeout, but the first invocation is not immediate. So, if more than 15 seconds is taken for plugin initialization to happen, we would never get to our first extension, and the default systemd timeout would trigger.

I'm going to reopen this, and open a PR to set the initial systemd timeout to 75 seconds. This will give us 60 seconds for plugin initialization to occur. However, note that something is still odd in your system. Not a lot happens before plugin initialization, so something else is likely going on which you should investigate. It is also concerning that your Elasticsearch log shows nothing. It would be helpful to turn on trace logging (logger.level=trace) and include the full log output here. If my theory is correct, we will see timestamps indicating the systemd module is loaded more than 15 seconds after the first log message, and the trace logging should help identify what may be slowing down startup.
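
(For readers unfamiliar with the mechanism: a Type=notify service can keep pushing its startup deadline out by sending EXTEND_TIMEOUT_USEC messages via sd_notify(3). The following is a minimal sketch of that pattern, assuming a JNA binding to libsystemd; class and method names are illustrative and this is not the actual Elasticsearch module code, and the 30-second periodic extension is only an example value.)

```java
import com.sun.jna.Library;
import com.sun.jna.Native;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch only: a Type=notify service extending its own startup timeout.
public class StartupNotifySketch {

    // Minimal JNA binding for libsystemd's sd_notify(3).
    interface Libsystemd extends Library {
        Libsystemd INSTANCE = Native.load("systemd", Libsystemd.class);

        // int sd_notify(int unset_environment, const char *state);
        int sd_notify(int unsetEnvironment, String state);
    }

    public static void main(String[] args) throws InterruptedException {
        // Ask for a generous initial budget (75s, matching the fix described
        // above) so a slow pre-plugin phase is covered before the first
        // periodic extension ever fires.
        Libsystemd.INSTANCE.sd_notify(0, "EXTEND_TIMEOUT_USEC=75000000");

        // Every 15 seconds, push the deadline out again while still starting.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(
            () -> Libsystemd.INSTANCE.sd_notify(0, "EXTEND_TIMEOUT_USEC=30000000"),
            15, 15, TimeUnit.SECONDS);

        // ... long node initialization would happen here ...
        Thread.sleep(5_000); // placeholder for real startup work

        // Startup finished: tell systemd we are ready and stop extending.
        Libsystemd.INSTANCE.sd_notify(0, "READY=1");
        scheduler.shutdownNow();
    }
}
```

The edge case described above falls out of this shape: if nothing sends an extension before the initial timeout elapses, systemd kills the service regardless of how healthy the startup actually is.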

rjernst reopened this Jul 28, 2020
rjernst added a commit to rjernst/elasticsearch that referenced this issue Jul 28, 2020
For systemd, while we are starting up, we notify the system every 15
seconds that we are still in the middle of starting up. However, if
initial startup before plugin initialization is slower than 15 seconds,
we won't ever get the chance to run the first timeout extension. This
commit sets the initial timeout to 75 seconds, up from the default 30
seconds used by systemd.

closes elastic#60140
rjernst added a commit that referenced this issue Jul 28, 2020 (same commit message as above; closes #60140)
rjernst added a commit that referenced this issue Jul 28, 2020 (same commit message as above; closes #60140)
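
(For context, expressed as a unit-file fragment, a 75-second initial startup budget for a Type=notify service looks roughly like this; illustrative only, not copied verbatim from the PR:)

```ini
[Service]
Type=notify
# Give startup 75s before systemd gives up, covering the window before
# the first periodic EXTEND_TIMEOUT_USEC notification can fire.
TimeoutStartSec=75
```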
rossengeorgiev (Author)

Hi @rjernst, I appreciate you coming back to the issue and taking a second look.

More context: I encounter the issue on a VM after it has been restored from a snapshot. Note that stopping Elasticsearch prior to the snapshot doesn't seem to make a difference. When I restore the VM, startup is slow, which I assume is due to data checking. Anyway, I went back to try again, and I've got you a trace log:

https://gist.githubusercontent.com/rossengeorgiev/5b90bf1793e6e5b709a6b409faad9d08/raw/85bc660b82174cb51ee1daead93e15a653587f3c/elasticsearch.log

jamshid commented Oct 11, 2020

Thanks for filing and fixing this. FWIW:

I think I ran into this when upgrading from 6.8.6 to 7.5.2. The debug logs wouldn't show anything wrong at "stopping ..."; I didn't even realize it was systemd itself doing it. After a few service restarts it stayed up.

From the TRACE logs below, maybe the problem is that upgrading/cleaning up the 6.8.6 indices took unexpectedly long? I'll try adding the 75s timeout (see the drop-in override sketch after the log for raising it further), but I'm not sure that's enough.

[2020-10-09T17:18:16,071][TRACE][o.e.e.NodeEnvironment    ] [green4] obtaining node lock on /var/lib/elasticsearch/nodes/0 ...
[2020-10-09T17:18:16,093][TRACE][o.e.g.MetaDataStateFormat] [green4] found state file: /var/lib/elasticsearch/nodes/0/_state/node-20.st
...
[2020-10-09T17:18:16,175][DEBUG][o.e.e.NodeEnvironment    ] [green4] node data locations details:
 -> /var/lib/elasticsearch/nodes/0, free_space [656.4gb], usable_space [656.4gb], total_space [899gb], mount [/ (rootfs)], type [rootfs]
[2020-10-09T17:18:16,175][INFO ][o.e.e.NodeEnvironment    ] [green4] heap size [30.9gb], compressed ordinary object pointers [true]
[2020-10-09T17:18:21,242][INFO ][o.e.n.Node               ] [green4] node name [green4], node ID [JB7r9p1PT1mwfR_h1AhNfg], cluster name [ES68_GreenIvoryTestCluster]
[2020-10-09T17:18:21,242][INFO ][o.e.n.Node               ] [green4] version[7.5.2], pid[4808], build[default/rpm/8bec50e1e0ad29dad5653712cf3bb580cd1afcdf/2020-01-15T12:11:52.313576Z], OS[Linux/3.10.0-1062.18.1.el7.x86_64/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/13.0.1/13.0.1+9]
[2020-10-09T17:18:21,243][INFO ][o.e.n.Node               ] [green4] JVM home [/usr/share/elasticsearch/jdk]
[2020-10-09T17:18:21,243][INFO ][o.e.n.Node               ] [green4] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=COMPAT, -Xms31g, -Xmx31g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/tmp/elasticsearch-15343235074654645061, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=16642998272, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=rpm, -Des.bundled_jdk=true]
[2020-10-09T17:18:21,243][DEBUG][o.e.n.Node               ] [green4] using config [/etc/elasticsearch], data [[/var/lib/elasticsearch]], logs [/var/log/elasticsearch], plugins [/usr/share/elasticsearch/plugins]
[2020-10-09T17:18:21,245][TRACE][o.e.p.PluginsService     ] [green4] --- adding [module] [/usr/share/elasticsearch/modules/aggs-matrix-stats]
...
[2020-10-09T17:18:23,898][INFO ][o.e.p.PluginsService     ] [green4] loaded module [x-pack-watcher]
[2020-10-09T17:18:23,899][INFO ][o.e.p.PluginsService     ] [green4] no plugins loaded
...
[2020-10-09T17:18:24,318][DEBUG][i.n.u.i.CleanerJava9     ] [green4] java.nio.ByteBuffer.cleaner(): unavailable
java.lang.UnsupportedOperationException: sun.misc.Unsafe unavailable
	at io.netty.util.internal.CleanerJava9.<clinit>(CleanerJava9.java:68) [netty-common-4.1.43.Final.jar:4.1.43.Final]
...
[2020-10-09T17:18:24,330][DEBUG][i.n.u.i.PlatformDependent] [green4] -Dio.netty.noPreferDirect: true
[2020-10-09T17:18:26,659][DEBUG][o.e.s.ScriptService      ] [green4] using script cache with max_size [100], expire [0s]
[2020-10-09T17:18:26,749][TRACE][o.e.i.IndexSettings      ] [green4] [_na_] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], deletes_pct_allowed[33.0]
[2020-10-09T17:18:27,000][TRACE][o.e.i.IndexSettings      ] [green4] [_na_] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], deletes_pct_allowed[33.0]
[2020-10-09T17:18:27,146][DEBUG][o.e.d.z.ElectMasterService] [green4] using minimum_master_nodes [-1]
[2020-10-09T17:18:27,431][DEBUG][o.e.m.j.JvmGcMonitorService] [green4] enabled [true], interval [1s], gc_threshold [{default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700, debugThreshold=400}, old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}}], overhead [50, 25, 10]
[2020-10-09T17:18:27,447][DEBUG][o.e.m.o.OsService        ] [green4] using refresh_interval [1s]
[2020-10-09T17:18:27,450][DEBUG][o.e.m.p.ProcessService   ] [green4] using refresh_interval [1s]
[2020-10-09T17:18:27,457][DEBUG][o.e.m.j.JvmService       ] [green4] using refresh_interval [1s]
[2020-10-09T17:18:27,457][DEBUG][o.e.m.f.FsService        ] [green4] using refresh_interval [1s]
[2020-10-09T17:18:27,460][DEBUG][o.e.c.r.a.d.ClusterRebalanceAllocationDecider] [green4] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2020-10-09T17:18:27,460][DEBUG][o.e.c.r.a.d.ConcurrentRebalanceAllocationDecider] [green4] using [cluster_concurrent_rebalance] with [2]
[2020-10-09T17:18:27,528][DEBUG][o.e.c.r.a.d.ThrottlingAllocationDecider] [green4] using node_concurrent_outgoing_recoveries [2], node_concurrent_incoming_recoveries [2], node_initial_primaries_recoveries [4]
[2020-10-09T17:18:28,178][TRACE][o.e.i.b.HierarchyCircuitBreakerService] [green4] parent circuit breaker with settings [!!!org.elasticsearch.indices.breaker.BreakerSettings@5e34a84b=>java.lang.NullPointerException:null!!!]
[2020-10-09T17:18:28,180][TRACE][o.e.i.b.request          ] [green4] creating ChildCircuitBreaker with settings [request,type=MEMORY,durability=TRANSIENT,limit=19929759744/18.5gb,overhead=1.0]
[2020-10-09T17:18:28,180][TRACE][o.e.i.b.fielddata        ] [green4] creating ChildCircuitBreaker with settings [fielddata,type=MEMORY,durability=PERMANENT,limit=13286506496/12.3gb,overhead=1.03]
[2020-10-09T17:18:28,181][TRACE][o.e.i.b.in_flight_requests] [green4] creating ChildCircuitBreaker with settings [in_flight_requests,type=MEMORY,durability=TRANSIENT,limit=33216266240/30.9gb,overhead=2.0]
[2020-10-09T17:18:28,181][TRACE][o.e.i.b.accounting       ] [green4] creating ChildCircuitBreaker with settings [accounting,type=MEMORY,durability=PERMANENT,limit=33216266240/30.9gb,overhead=1.0]
[2020-10-09T17:18:28,429][DEBUG][o.e.i.IndicesQueryCache  ] [green4] using [node] query cache with size [3gb] max filter count [10000]
[2020-10-09T17:18:28,433][DEBUG][o.e.i.IndexingMemoryController] [green4] using indexing buffer size [3gb] with indices.memory.shard_inactive_time [5m], indices.memory.interval [5s]
[2020-10-09T17:18:29,204][TRACE][o.e.x.s.a.f.FileUserPasswdStore] [green4] reading users file [/etc/elasticsearch/users]...
[2020-10-09T17:18:29,205][DEBUG][o.e.x.s.a.f.FileUserPasswdStore] [green4] parsed [0] users from file [/etc/elasticsearch/users]
[2020-10-09T17:18:29,211][TRACE][o.e.x.s.a.f.FileUserRolesStore] [green4] reading users_roles file [/etc/elasticsearch/users_roles]...
[2020-10-09T17:18:29,211][DEBUG][o.e.x.s.a.f.FileUserRolesStore] [green4] parsed [0] user to role mappings from file [/etc/elasticsearch/users_roles]
[2020-10-09T17:18:29,246][DEBUG][o.e.x.s.a.s.FileRolesStore] [green4] attempting to read roles file located at [/etc/elasticsearch/roles.yml]
[2020-10-09T17:18:29,247][INFO ][o.e.x.s.a.s.FileRolesStore] [green4] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2020-10-09T17:18:29,402][DEBUG][o.e.x.s.Security         ] [green4] Using default authentication failure handler
[2020-10-09T17:18:29,667][DEBUG][o.e.x.s.t.f.IPFilter     ] [green4] loaded ip filtering profiles: []
[2020-10-09T17:18:29,956][TRACE][o.e.x.w.t.s.e.TickerScheduleTriggerEngine] [green4] checking jobs [2020-10-09T22:18:29.956193Z]
[2020-10-09T17:18:30,088][DEBUG][o.e.x.m.p.l.CppLogMessageHandler] [green4] [controller/4914] [CLogger.cc@306] Logger is logging to named pipe /tmp/elasticsearch-15343235074654645061/controller_log_4808
[2020-10-09T17:18:30,090][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [green4] [controller/4914] [Main.cc@110] controller (64 bit): Version 7.5.2 (Build 68f6981dfb8e2d) Copyright (c) 2020 Elasticsearch BV
[2020-10-09T17:18:30,339][DEBUG][o.e.x.t.TransformClusterStateListener] [green4] Created TransformClusterStateListener
[2020-10-09T17:18:30,457][TRACE][o.e.x.w.t.s.e.TickerScheduleTriggerEngine] [green4] checking jobs [2020-10-09T22:18:30.457775Z]
[2020-10-09T17:18:30,748][DEBUG][o.e.a.ActionModule       ] [green4] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2020-10-09T17:18:30,791][DEBUG][i.n.u.i.l.InternalLoggerFactory] [green4] Using Log4J as the default logging framework
[2020-10-09T17:18:30,795][DEBUG][i.n.u.i.PlatformDependent0] [green4] -Dio.netty.noUnsafe: true
[2020-10-09T17:18:30,795][DEBUG][i.n.u.i.PlatformDependent0] [green4] sun.misc.Unsafe: unavailable (io.netty.noUnsafe)
[2020-10-09T17:18:30,795][DEBUG][i.n.u.i.PlatformDependent0] [green4] Java version: 13
[2020-10-09T17:18:30,796][DEBUG][i.n.u.i.PlatformDependent0] [green4] java.nio.DirectByteBuffer.<init>(long, int): unavailable
[2020-10-09T17:18:30,797][DEBUG][i.n.u.i.PlatformDependent] [green4] maxDirectMemory: 33216266240 bytes (maybe)
[2020-10-09T17:18:30,797][DEBUG][i.n.u.i.PlatformDependent] [green4] -Dio.netty.tmpdir: /tmp/elasticsearch-15343235074654645061 (java.io.tmpdir)
[2020-10-09T17:18:30,798][DEBUG][i.n.u.i.PlatformDependent] [green4] -Dio.netty.bitMode: 64 (sun.arch.data.model)
[2020-10-09T17:18:30,798][DEBUG][i.n.u.i.PlatformDependent] [green4] -Dio.netty.maxDirectMemory: -1 bytes
[2020-10-09T17:18:30,798][DEBUG][i.n.u.i.PlatformDependent] [green4] -Dio.netty.uninitializedArrayAllocationThreshold: -1
[2020-10-09T17:18:30,799][DEBUG][i.n.u.i.CleanerJava9     ] [green4] java.nio.ByteBuffer.cleaner(): unavailable
java.lang.UnsupportedOperationException: sun.misc.Unsafe unavailable
...
[2020-10-09T17:18:30,801][DEBUG][i.n.u.i.PlatformDependent] [green4] -Dio.netty.noPreferDirect: true
[2020-10-09T17:18:30,902][DEBUG][o.e.h.n.Netty4HttpServerTransport] [green4] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb], receive_predictor[64kb], max_composite_buffer_components[69905], pipelining_max_events[10000]
[2020-10-09T17:18:30,950][DEBUG][o.e.d.SettingsBasedSeedHostsProvider] [green4] using initial hosts [green4, green3, ivory4]
[2020-10-09T17:18:30,958][TRACE][o.e.x.w.t.s.e.TickerScheduleTriggerEngine] [green4] checking jobs [2020-10-09T22:18:30.958213Z]
[2020-10-09T17:18:30,995][INFO ][o.e.d.DiscoveryModule    ] [green4] using discovery type [zen] and seed hosts providers [settings]
[2020-10-09T17:18:31,224][DEBUG][o.e.i.r.RecoverySettings ] [green4] using max_bytes_per_sec[40mb]
[2020-10-09T17:18:31,458][TRACE][o.e.x.w.t.s.e.TickerScheduleTriggerEngine] [green4] checking jobs [2020-10-09T22:18:31.458587Z]
[2020-10-09T17:18:31,946][DEBUG][o.e.n.Node               ] [green4] initializing HTTP handlers ...
[2020-10-09T17:18:31,959][TRACE][o.e.x.w.t.s.e.TickerScheduleTriggerEngine] [green4] checking jobs [2020-10-09T22:18:31.958998Z]
[2020-10-09T17:18:32,258][INFO ][o.e.n.Node               ] [green4] initialized
[2020-10-09T17:18:32,259][INFO ][o.e.n.Node               ] [green4] starting ...
[2020-10-09T17:18:32,260][DEBUG][o.e.l.LicenseService     ] [green4] initializing license state
[2020-10-09T17:18:32,260][DEBUG][o.e.x.m.MonitoringService] [green4] monitoring service is starting
[2020-10-09T17:18:32,260][DEBUG][o.e.x.m.MonitoringService] [green4] monitoring service started
[2020-10-09T17:18:32,265][DEBUG][o.e.x.m.c.CleanerService ] [green4] starting cleaning service
[2020-10-09T17:18:32,269][DEBUG][o.e.x.m.c.CleanerService ] [green4] cleaning service started
[2020-10-09T17:18:32,280][DEBUG][i.n.c.MultithreadEventLoopGroup] [green4] -Dio.netty.eventLoopThreads: 16
...
[2020-10-09T17:18:32,361][DEBUG][o.e.t.n.Netty4Transport  ] [green4] using profile[default], worker_count[16], port[9300-9400], bind_host[[_site_]], publish_host[[]], receive_predictor[64kb->64kb]
[2020-10-09T17:18:32,367][DEBUG][o.e.t.TcpTransport       ] [green4] binding server bootstrap to: [172.30.14.115]
...
[2020-10-09T17:18:32,459][TRACE][o.e.x.w.t.s.e.TickerScheduleTriggerEngine] [green4] checking jobs [2020-10-09T22:18:32.459444Z]
[2020-10-09T17:18:32,515][TRACE][o.e.g.MetaStateService   ] [green4] generation id [2198] read from [global-2198.st]
[2020-10-09T17:18:32,733][TRACE][o.e.g.MetaDataStateFormat] [green4] found state file: /var/lib/elasticsearch/nodes/0/indices/WxdUXWSjRYiXzCTvlKfz4g/_state/state-17.st
[2020-10-09T17:18:32,742][TRACE][o.e.g.MetaStateService   ] [green4] generation id [17] read from [state-17.st]
...
[2020-10-09T17:18:36,905][TRACE][o.e.g.MetaDataStateFormat] [green4] found state file: /var/lib/elasticsearch/nodes/0/indices/fcam8wFgT1G86-HtjZ96Yw/_state/state-22.st
[2020-10-09T17:18:36,906][TRACE][o.e.g.MetaStateService   ] [green4] generation id [22] read from [state-22.st]
[2020-10-09T17:18:36,955][TRACE][o.e.i.IndexSettings      ] [green4] [metrics-onyxs-s-2020.07.30] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], deletes_pct_allowed[33.0]
[2020-10-09T17:18:36,982][TRACE][o.e.i.m.MapperService    ] [green4] [metrics-onyxs-s-2020.07.30] default mapping source[{"_default_":{}}]
[2020-10-09T17:18:36,991][TRACE][o.e.x.w.t.s.e.TickerScheduleTriggerEngine] [green4] checking jobs [2020-10-09T22:18:36.991672Z]
[2020-10-09T17:18:37,038][TRACE][o.e.i.IndexSettings      ] [green4] [metrics-dino-health-2020.09.10] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], deletes_pct_allowed[33.0]
[2020-10-09T17:18:37,039][TRACE][o.e.i.m.MapperService    ] [green4] [metrics-dino-health-2020.09.10] default mapping source[{"_default_":{}}]
[2020-10-09T17:18:37,043][TRACE][o.e.i.IndexSettings      ] [green4] [metrics-luckys-index-2020.07.26] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], deletes_pct_allowed[33.0]
...
[2020-10-09T17:18:39,184][TRACE][o.e.i.IndexSettings      ] [green4] [metrics-onyxs-node-2020.07.06] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], deletes_pct_allowed[33.0]
[2020-10-09T17:18:39,184][TRACE][o.e.i.m.MapperService    ] [green4] [metrics-onyxs-node-2020.07.06] default mapping source[{"_default_":{}}]
[2020-10-09T17:18:39,310][TRACE][o.e.g.MetaStateService   ] [green4] [[metrics-onyxs-s-2020.07.30/C37J5PdJSzC8Q42bSVReYg]] writing state, reason [upgrade]
[2020-10-09T17:18:39,311][TRACE][o.e.g.MetaDataStateFormat] [green4] cleaned up /var/lib/elasticsearch/nodes/0/indices/C37J5PdJSzC8Q42bSVReYg/state-22.st.tmp
[2020-10-09T17:18:39,570][TRACE][o.e.x.w.t.s.e.TickerScheduleTriggerEngine] [green4] checking jobs [2020-10-09T22:18:39.570668Z]
[2020-10-09T17:18:39,646][TRACE][o.e.g.MetaDataStateFormat] [green4] cleaned up /var/lib/elasticsearch/nodes/0/indices/C37J5PdJSzC8Q42bSVReYg/state-22.st.tmp
[2020-10-09T17:18:39,646][TRACE][o.e.g.MetaStateService   ] [green4] [[metrics-onyxs-s-2020.07.30/C37J5PdJSzC8Q42bSVReYg]] state written
[2020-10-09T17:18:39,649][TRACE][o.e.g.MetaStateService   ] [green4] [[metrics-dino-health-2020.09.10/Hv7syxfVQES4XUMsyd-YSg]] writing state, reason [upgrade]
...
[2020-10-09T17:19:19,834][TRACE][o.e.g.MetaStateService   ] [green4] [[metrics-dino-node-2020.08.13/go0QeqRaQBeyEmuShtK4Pw]] writing state, reason [upgrade]
[2020-10-09T17:19:19,835][TRACE][o.e.g.MetaDataStateFormat] [green4] cleaned up /var/lib/elasticsearch/nodes/0/indices/go0QeqRaQBeyEmuShtK4Pw/state-21.st.tmp
[2020-10-09T17:19:19,859][DEBUG][o.a.h.i.c.PoolingHttpClientConnectionManager] [green4] Closing expired connections
[2020-10-09T17:19:19,889][TRACE][o.e.g.MetaDataStateFormat] [green4] cleaned up /var/lib/elasticsearch/nodes/0/indices/go0QeqRaQBeyEmuShtK4Pw/state-21.st.tmp
[2020-10-09T17:19:19,890][TRACE][o.e.g.MetaStateService   ] [green4] [[metrics-dino-node-2020.08.13/go0QeqRaQBeyEmuShtK4Pw]] state written
[2020-10-09T17:19:19,890][TRACE][o.e.g.MetaStateService   ] [green4] [[metrics-dino-s-2020.09.09/3JoN572CTPyvgos_nOtsUA]] writing state, reason [upgrade]
[2020-10-09T17:19:19,891][TRACE][o.e.g.MetaDataStateFormat] [green4] cleaned up /var/lib/elasticsearch/nodes/0/indices/3JoN572CTPyvgos_nOtsUA/state-17.st.tmp
[2020-10-09T17:19:19,894][INFO ][o.e.n.Node               ] [green4] stopping ...
[2020-10-09T16:29:05,130][INFO ][o.e.n.Node               ] [green4] stopped
[2020-10-09T16:29:05,130][INFO ][o.e.n.Node               ] [green4] closing ...
[2020-10-09T16:29:05,152][INFO ][o.e.n.Node               ] [green4] closed
[2020-10-09T16:29:05,157][INFO ][o.e.x.m.p.NativeController] [green4] Native controller process has stopped - no new native processes can be started
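
(A local drop-in override is one way to raise the startup timeout beyond the packaged default; the 900-second value below is only an example:)

```ini
# /etc/systemd/system/elasticsearch.service.d/override.conf
# Create with `systemctl edit elasticsearch.service`, then `systemctl daemon-reload`.
[Service]
TimeoutStartSec=900
```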

mark-vieira added the Team:Delivery label and removed the Team:Core/Infra label Nov 11, 2020