Non-existent secret key: elastic-internal-pre-stop #7315
Comments
I'm not able to reproduce this. Could you check if your internal-users secret looks like this:
apiVersion: v1
kind: Secret
metadata:
  labels:
    common.k8s.elastic.co/type: elasticsearch
    eck.k8s.elastic.co/credentials: "true"
    elasticsearch.k8s.elastic.co/cluster-name: elasticsearch7
  name: elasticsearch7-es-internal-users
data:
  elastic-internal: REDACTED
  elastic-internal-monitoring: REDACTED
  elastic-internal-pre-stop: REDACTED
  elastic-internal-probe: REDACTED
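For anyone hitting the same error, the presence of the key can be checked mechanically rather than by eyeballing the whole manifest. A minimal sketch using jq on a locally saved copy of the Secret; the kubectl command in the comment and the sample file contents are illustrative, not taken from the reporter's cluster:

$ cat check-secret.sh
# Illustrative: dump the Secret first, e.g.
#   kubectl get secret elasticsearch7-es-internal-users -n <namespace> -o json > secret.json
# Sample data standing in for a cluster where the pre-stop key is missing:
cat > secret.json <<'EOF'
{"data":{"elastic-internal":"eA==","elastic-internal-probe":"eA=="}}
EOF
# jq -e exits non-zero when the looked-up key is absent or null:
if jq -e '.data["elastic-internal-pre-stop"]' secret.json > /dev/null; then
  echo "key present"
else
  echo "key missing"
fi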
apiVersion: v1
data:
  elastic-internal: redacted
  elastic-internal-monitoring: redacted
  elastic-internal-probe: redacted
kind: Secret
metadata:
  labels:
    common.k8s.elastic.co/type: elasticsearch
    eck.k8s.elastic.co/credentials: "true"
    elasticsearch.k8s.elastic.co/cluster-name: elasticsearch7
  name: elasticsearch7-es-internal-users
  namespace: redacted
type: Opaque

Update: Also, manually patching the secret with a random value for the missing key doesn't seem to have any effect. In fact, the ECK controller just overwrites the secret and removes the …
@barkbay I've tried to remove the …

$ cat logs-saved-to-file | jq -r .message
readiness probe failed
readiness probe failed
version[7.17.11], pid[7], build[default/docker/eeedb98c60326ea3d46caef960fb4c77958fb885/2023-06-23T05:33:12.261262042Z], OS[Linux/4.18.0-305.19.1.el8_4.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/20.0.1/20.0.1+9-29]
JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -Djava.security.manager=allow, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-5785045208094565215, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Des.cgroups.hierarchy.override=/, -Xms2g, -Xmx2g, -XX:MaxDirectMemorySize=1073741824,-XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker, -Des.bundled_jdk=true]
readiness probe failed
readiness probe failed
readiness probe failed
loaded module [aggs-matrix-stats]
loaded module [analysis-common]
loaded module [constant-keyword]
loaded module [frozen-indices]
loaded module [ingest-common]
loaded module [ingest-geoip]
loaded module [ingest-user-agent]
loaded module [kibana]
loaded module [lang-expression]
loaded module [lang-mustache]
loaded module [lang-painless]
loaded module [legacy-geo]
loaded module [mapper-extras]
loaded module [mapper-version]
loaded module [parent-join]
loaded module [percolator]
loaded module [rank-eval]
loaded module [reindex]
loaded module [repositories-metering-api]
loaded module [repository-encrypted]
loaded module [repository-url]
loaded module [runtime-fields-common]
loaded module [search-business-rules]
loaded module [searchable-snapshots]
loaded module [snapshot-repo-test-kit]
loaded module [spatial]
loaded module [transform]
loaded module [transport-netty4]
loaded module [unsigned-long]
loaded module [vector-tile]
loaded module [vectors]
loaded module [wildcard]
loaded module [x-pack-aggregate-metric]
loaded module [x-pack-analytics]
loaded module [x-pack-async]
loaded module [x-pack-async-search]
loaded module [x-pack-autoscaling]
loaded module [x-pack-ccr]
loaded module [x-pack-core]
loaded module [x-pack-data-streams]
loaded module [x-pack-deprecation]
loaded module [x-pack-enrich]
loaded module [x-pack-eql]
loaded module [x-pack-fleet]
loaded module [x-pack-graph]
loaded module [x-pack-identity-provider]
loaded module [x-pack-ilm]
loaded module [x-pack-logstash]
loaded module [x-pack-ml]
loaded module [x-pack-monitoring]
loaded module [x-pack-ql]
loaded module [x-pack-rollup]
loaded module [x-pack-security]
loaded module [x-pack-shutdown]
loaded module [x-pack-sql]
loaded module [x-pack-stack]
loaded module [x-pack-text-structure]
loaded module [x-pack-voting-only-node]
loaded module [x-pack-watcher]
no plugins loaded
[node.ml] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
[node.data] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/rbd1)]], net usable_space [29.3gb], net total_space [29.4gb], types [ext4]
heap size [2gb], compressed ordinary object pointers [true]
[node.master] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
[node.ingest] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
node name [elasticsearch7-es-default-0], node ID [56P9pGKBQEqZT-B5vXliTw], cluster name [elasticsearch7], roles [transform, data_frozen, master, remote_cluster_client, data, data_content, data_hot, data_warm, data_cold, ingest]
legacy role settings [node.data, node.ingest, node.master, node.ml] are deprecated, use [node.roles=[transform, data_frozen, master, remote_cluster_client, data, data_content, data_hot, data_warm, data_cold, ingest]]
readiness probe failed
readiness probe failed
readiness probe failed
readiness probe failed
searches will not be routed based on awareness attributes starting in version 8.0.0; to opt into this behaviour now please set the system property [es.search.ignore_awareness_attributes] to [true]
readiness probe failed
readiness probe failed
readiness probe failed
readiness probe failed
[controller/195] [Main.cc@122] controller (64 bit): Version 7.17.11 (Build 4686411755665b) Copyright (c) 2023 Elasticsearch BV
readiness probe failed
license mode is [trial], currently licensed security realms are [reserved/reserved,file/file1,native/native1]
parsed [50] roles from file [/usr/share/elasticsearch/config/roles.yml]
readiness probe failed
initialized default databases [[GeoLite2-Country.mmdb, GeoLite2-City.mmdb, GeoLite2-ASN.mmdb]], config databases [[]] and watching [/usr/share/elasticsearch/config/ingest-geoip] for changes
initialized database registry, using geoip-databases directory [/tmp/elasticsearch-5785045208094565215/geoip-databases/56P9pGKBQEqZT-B5vXliTw]
readiness probe failed
creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
using discovery type [zen] and seed hosts providers [settings, file]
readiness probe failed
gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
readiness probe failed
initialized
starting ...
persistent cache index loaded
deprecation component started
publish_address {10.128.2.251:9300}, bound_addresses {[::]:9300}
creating template [.monitoring-alerts-7] with version [7]
creating template [.monitoring-es] with version [7]
creating template [.monitoring-kibana] with version [7]
creating template [.monitoring-logstash] with version [7]
creating template [.monitoring-beats] with version [7]
readiness probe failed
bound or publishing to a non-loopback address, enforcing bootstrap checks
setting initial configuration to VotingConfiguration{56P9pGKBQEqZT-B5vXliTw}
elected-as-master ([1] nodes joined)[{elasticsearch7-es-default-0}{56P9pGKBQEqZT-B5vXliTw}{TQcYBCQ2TaWnWoYPsca2nQ}{10.128.2.251}{10.128.2.251:9300}{cdfhimrstw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1, version: 1, delta: master node changed {previous [], current [{elasticsearch7-es-default-0}{56P9pGKBQEqZT-B5vXliTw}{TQcYBCQ2TaWnWoYPsca2nQ}{10.128.2.251}{10.128.2.251:9300}{cdfhimrstw}]}
cluster UUID set to [IolPXe9oScW8KM3POnPkiQ]
master node changed {previous [], current [{elasticsearch7-es-default-0}{56P9pGKBQEqZT-B5vXliTw}{TQcYBCQ2TaWnWoYPsca2nQ}{10.128.2.251}{10.128.2.251:9300}{cdfhimrstw}]}, term: 1, version: 1, reason: Publication{term=1, version=1}
publish_address {elasticsearch7-es-default-0.elasticsearch7-es-default.redacted.svc/10.128.2.251:9200}, bound_addresses {[::]:9200}
started
recovered [0] indices into cluster_state
adding index template [.ml-stats] for index patterns [.ml-stats-*]
adding index template [.ml-notifications-000002] for index patterns [.ml-notifications-000002]
adding index template [.ml-anomalies-] for index patterns [.ml-anomalies-*]
adding index template [.ml-state] for index patterns [.ml-state*]
adding component template [synthetics-settings]
adding component template [logs-mappings]
adding component template [metrics-mappings]
adding component template [data-streams-mappings]
adding component template [logs-settings]
adding component template [synthetics-mappings]
adding component template [metrics-settings]
adding index template [.watch-history-13] for index patterns [.watcher-history-13*]
adding index template [ilm-history] for index patterns [ilm-history-5*]
adding index template [.slm-history] for index patterns [.slm-history-5*]
adding component template [.deprecation-indexing-settings]
adding component template [.deprecation-indexing-mappings]
adding index template [logs] for index patterns [logs-*-*]
adding index template [synthetics] for index patterns [synthetics-*-*]
adding index template [metrics] for index patterns [metrics-*-*]
adding index template [.deprecation-indexing-template] for index patterns [.logs-deprecation.*]
adding index lifecycle policy [ml-size-based-ilm-policy]
adding index lifecycle policy [synthetics]
adding index lifecycle policy [metrics]
adding index lifecycle policy [logs]
adding index lifecycle policy [90-days-default]
adding index lifecycle policy [365-days-default]
adding index lifecycle policy [180-days-default]
adding index lifecycle policy [7-days-default]
adding index lifecycle policy [30-days-default]
adding index lifecycle policy [watch-history-ilm-policy]
adding index lifecycle policy [ilm-history-ilm-policy]
adding index lifecycle policy [slm-history-ilm-policy]
adding index lifecycle policy [.deprecation-indexing-ilm-policy]
adding index lifecycle policy [.fleet-actions-results-ilm-policy]
updating geoip databases
fetching geoip databases overview from [https://geoip.elastic.co/v1/database?elastic_geoip_service_tos=agree]
license [a8eb17dd-2505-4a49-99b0-c97e0dd009ea] mode [basic] - valid
license mode is [basic], currently licensed security realms are [reserved/reserved,file/file1,native/native1]
Active license is now [BASIC]; Security is enabled
[.ds-.logs-deprecation.elasticsearch-default-2023.11.14-000001] creating index, cause [initialize_data_stream], templates [.deprecation-indexing-template], shards [1]/[1]
updating number_of_replicas to [0] for indices [.ds-.logs-deprecation.elasticsearch-default-2023.11.14-000001]
adding data stream [.logs-deprecation.elasticsearch-default] with write index [.ds-.logs-deprecation.elasticsearch-default-2023.11.14-000001], backing indices [], and aliases []
creating shutdown record {nodeId=[56P9pGKBQEqZT-B5vXliTw], type=[RESTART], reason=[1682857877]}
Starting node shutdown sequence for ML
moving index [.ds-.logs-deprecation.elasticsearch-default-2023.11.14-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [.deprecation-indexing-ilm-policy]
Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.ds-.logs-deprecation.elasticsearch-default-2023.11.14-000001][0]]]).
updating existing shutdown record {nodeId=[56P9pGKBQEqZT-B5vXliTw], type=[RESTART], reason=[1682857877]} with new record {nodeId=[56P9pGKBQEqZT-B5vXliTw], type=[RESTART], reason=[1682857892]}
moving index [.ds-.logs-deprecation.elasticsearch-default-2023.11.14-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [.deprecation-indexing-ilm-policy]
[.ds-.logs-deprecation.elasticsearch-default-2023.11.14-000001/ZnPJUAZlQIWfYXE0LaQs0g] update_mapping [_doc]
moving index [.ds-.logs-deprecation.elasticsearch-default-2023.11.14-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [.deprecation-indexing-ilm-policy]
[.ds-ilm-history-5-2023.11.14-000001] creating index, cause [initialize_data_stream], templates [ilm-history], shards [1]/[0]
adding data stream [ilm-history-5] with write index [.ds-ilm-history-5-2023.11.14-000001], backing indices [], and aliases []
moving index [.ds-ilm-history-5-2023.11.14-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ilm-history-ilm-policy]
exception during geoip databases update
moving index [.ds-ilm-history-5-2023.11.14-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [ilm-history-ilm-policy]
Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.ds-ilm-history-5-2023.11.14-000001][0]]]).
moving index [.ds-ilm-history-5-2023.11.14-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [ilm-history-ilm-policy]
stopping ...
stopping watch service, reason [shutdown initiated]
[controller/195] [Main.cc@174] ML controller exiting
Native controller process has stopped - no new native processes can be started
watcher has stopped and shutdown
readiness probe failed
stopped
closing ...
closed
#7315 (comment) suggests looking at the operator pod logs, not the Elasticsearch pod logs.
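The operator's JSON logs can be filtered with jq the same way as the Elasticsearch logs above. A sketch assuming the default ECK install layout (StatefulSet elastic-operator in namespace elastic-system); the sample log lines below are fabricated for illustration:

$ cat filter-operator-errors.sh
# Illustrative: grab the operator logs first, e.g.
#   kubectl logs -n elastic-system sts/elastic-operator --tail=500 > operator.log
# Sample JSON lines standing in for real operator output:
cat > operator.log <<'EOF'
{"log.level":"info","message":"Starting reconciliation run"}
{"log.level":"error","message":"example reconciliation error"}
EOF
# ECK logs one JSON object per line; keep only the error messages:
jq -r 'select(."log.level"=="error") | .message' operator.log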
The issue turned out to be caused by running 2 different versions of the ECK operator simultaneously. It seems they both contended for managing the …
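A quick way to spot that situation is to count the distinct operator images running in the cluster. A sketch assuming the default control-plane=elastic-operator pod label from the ECK manifests; the sample image list is fabricated:

$ cat count-operators.sh
# Illustrative: list the operator images first, e.g.
#   kubectl get pods -A -l control-plane=elastic-operator \
#     -o jsonpath='{range .items[*]}{.spec.containers[0].image}{"\n"}{end}' > operators.txt
# Sample output standing in for a cluster accidentally running two versions:
cat > operators.txt <<'EOF'
docker.elastic.co/eck/eck-operator:2.9.0
docker.elastic.co/eck/eck-operator:2.4.0
EOF
# More than one distinct image means two operators may be fighting over the same resources:
sort -u operators.txt | wc -l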
Bug Report
What did you do?
I've spun up a single-node ECK cluster using the following configuration
What did you expect to see?
The ECK instance/cluster should start successfully.
What did you see instead? Under which circumstances?
The Elasticsearch pod is reporting this event and can't start.
Environment
ECK version: 2.9.0
Kubernetes information:
The last event in the pod description:
The logs are very spammy, basically recurrences of the following: