
Non-existent secret key: elastic-internal-pre-stop #7315

Closed
ElSamhaa opened this issue Nov 14, 2023 · 5 comments

ElSamhaa commented Nov 14, 2023

Bug Report

What did you do?
I've spun up a single-node ECK cluster using the following configuration.

What did you expect to see?
The ECK-managed instance/cluster should start successfully.

What did you see instead? Under which circumstances?
The Elasticsearch pod reports the following event and cannot start:

MountVolume.SetUp failed for volume "elastic-internal-probe-user" : references non-existent secret key: elastic-internal-pre-stop
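The mounted volume projects keys out of the operator-managed internal-users Secret, so this error means the Secret exists but is missing the `elastic-internal-pre-stop` key. A minimal sketch of the check (the commented `oc` command and namespace are illustrative; the key list below stands in for output from the failing cluster):

```shell
# On a live cluster you would list the Secret's key names, e.g. (hypothetical namespace):
#   oc get secret elasticsearch7-es-internal-users -n <namespace> \
#     -o go-template='{{range $k, $v := .data}}{{$k}}{{"\n"}}{{end}}'
# Sample output standing in for the failing cluster:
secret_keys='elastic-internal
elastic-internal-monitoring
elastic-internal-probe'

# The kubelet fails MountVolume.SetUp when a projected key is absent:
if printf '%s\n' "$secret_keys" | grep -qx 'elastic-internal-pre-stop'; then
  status='key present: mount should succeed'
else
  status='key missing: MountVolume.SetUp fails'
fi
echo "$status"
```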

Environment

  • ECK version: 2.9.0

  • Kubernetes information:

    • On premise
    • Kubernetes distribution: Openshift 4.8 (Kubernetes v1.21.1)
$ oc version

Client Version: 4.14.1
Kustomize Version: v5.0.1
Kubernetes Version: v1.21.1+a620f50
  • Resource definition:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch7
spec:
  auth: {}
  http:
    service:
      metadata:
        creationTimestamp: null
      spec: {}
    tls:
      certificate: {}
      selfSignedCertificate:
        disabled: true
  nodeSets:
  - config:
      node.data: true
      node.ingest: true
      node.master: true
      node.ml: false
      node.store.allow_mmap: false
      xpack.security.authc:
        anonymous:
          authz_exception: true
          roles: superuser
          username: anonymous
    count: 1
    name: default
    podTemplate:
      spec:
        containers:
        - env:
          - name: ES_JAVA_OPTS
            value: -Xms2g -Xmx2g
          name: elasticsearch
          resources:
            limits:
              cpu: 400m
              memory: 3Gi
            requests:
              cpu: 200m
              memory: 2304Mi
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 30Gi
        storageClassName: ocs-storagecluster-ceph-rbd
  podDisruptionBudget:
    metadata:
      creationTimestamp: null
    spec: {}
  transport:
    service:
      metadata:
        creationTimestamp: null
      spec: {}
    tls:
      certificate: {}
  updateStrategy:
    changeBudget: {}
  version: 7.17.11
  • Logs:
    The last event in the pod description:
Events:
  Type     Reason       Age   From               Message
  ----     ------       ----  ----               -------
  Normal   Scheduled    5s    default-scheduler  Successfully assigned redacted/elasticsearch7-es-default-0 to worker03.redacted
  Warning  FailedMount  5s    kubelet            MountVolume.SetUp failed for volume "elastic-internal-probe-user" : references non-existent secret key: elastic-internal-pre-stop
  • Controller logs:
Very spammy; essentially recurrences of the following:
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.837Z","log.logger":"es-monitoring","message":"Ending reconciliation run","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"384134","namespace":"redacted","es-mon_name":"elasticsearch7","took":0.000192213}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.857Z","log.logger":"remoteca-controller","message":"Starting reconciliation run","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"383006","namespace":"redacted","es_name":"elasticsearch7"}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.857Z","log.logger":"remoteca-controller","message":"Ending reconciliation run","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"383006","namespace":"redacted","es_name":"elasticsearch7","took":0.000284948}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.923Z","log.logger":"elasticsearch-controller","message":"Updating resource","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"85378","namespace":"redacted","es_name":"elasticsearch7","kind":"Secret","namespace":"redacted","name":"elasticsearch7-es-internal-users"}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.930Z","log.logger":"remoteca-controller","message":"Starting reconciliation run","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"383007","namespace":"redacted","es_name":"elasticsearch7"}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.931Z","log.logger":"remoteca-controller","message":"Ending reconciliation run","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"383007","namespace":"redacted","es_name":"elasticsearch7","took":0.000311782}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.931Z","log.logger":"es-monitoring","message":"Starting reconciliation run","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"384135","namespace":"redacted","es-mon_name":"elasticsearch7"}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.931Z","log.logger":"es-monitoring","message":"Ending reconciliation run","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"384135","namespace":"redacted","es-mon_name":"elasticsearch7","took":0.000112892}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.933Z","log.logger":"elasticsearch-controller","message":"Updating resource","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"85378","namespace":"redacted","es_name":"elasticsearch7","kind":"Secret","namespace":"redacted","name":"elasticsearch7-es-xpack-file-realm"}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.942Z","log.logger":"elasticsearch-controller","message":"Skipping pod because it has no IP yet","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"85378","namespace":"redacted","es_name":"elasticsearch7","namespace":"redacted","pod_name":"elasticsearch7-es-default-0"}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.945Z","log.logger":"elasticsearch-controller","message":"Ending reconciliation run","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"85378","namespace":"redacted","es_name":"elasticsearch7","took":0.150849241}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.945Z","log.logger":"elasticsearch-controller","message":"Starting reconciliation run","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"85379","namespace":"redacted","es_name":"elasticsearch7"}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.949Z","log.logger":"elasticsearch-controller","message":"Updating resource","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"85379","namespace":"redacted","es_name":"elasticsearch7","kind":"ConfigMap","namespace":"redacted","name":"elasticsearch7-es-scripts"}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.955Z","log.logger":"es-monitoring","message":"Starting reconciliation run","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"384136","namespace":"redacted","es-mon_name":"elasticsearch7"}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.955Z","log.logger":"es-monitoring","message":"Ending reconciliation run","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"384136","namespace":"redacted","es-mon_name":"elasticsearch7","took":0.000537597}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.957Z","log.logger":"remoteca-controller","message":"Starting reconciliation run","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"383008","namespace":"redacted","es_name":"elasticsearch7"}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.958Z","log.logger":"remoteca-controller","message":"Ending reconciliation run","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"383008","namespace":"redacted","es_name":"elasticsearch7","took":0.000545798}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.961Z","log.logger":"remoteca-controller","message":"Starting reconciliation run","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"383009","namespace":"redacted","es_name":"elasticsearch7"}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.961Z","log.logger":"es-monitoring","message":"Starting reconciliation run","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"384137","namespace":"redacted","es-mon_name":"elasticsearch7"}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.962Z","log.logger":"remoteca-controller","message":"Ending reconciliation run","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"383009","namespace":"redacted","es_name":"elasticsearch7","took":0.000784393}
{"log.level":"info","@timestamp":"2023-11-14T12:36:21.962Z","log.logger":"remoteca-controller","message":"Starting reconciliation run","service.version":"2.9.0+f24ccc37","service.type":"eck","ecs.version":"1.4.0","iteration":"383010","namespace":"redacted","es_name":"elasticsearch7"}
$ oc logs -f -n redacted elastic-operator-0 | jq -r '.message'

Updating resource
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Updating resource
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Skipping pod because it has no IP yet
Ending reconciliation run
Starting reconciliation run
Updating resource
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Updating resource
Updating resource
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Skipping pod because it has no IP yet
Ending reconciliation run
Starting reconciliation run
Updating resource
Starting reconciliation run
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Updating resource
Updating resource
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Skipping pod because it has no IP yet
Ending reconciliation run
Starting reconciliation run
Updating resource
Starting reconciliation run
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Ending reconciliation run
Starting reconciliation run
Ending reconciliation run
Starting reconciliation run
botelastic bot added the triage label Nov 14, 2023
barkbay commented Nov 15, 2023

I'm not able to reproduce this. Could you check:

  • whether there is any log message in the operator logs referencing a Secret named elasticsearch7-es-internal-users, something like:

TIMESTAMP INFO elasticsearch-controller Creating resource {"service.version": "0.0.0-SNAPSHOT+00000000", "iteration": "XX", "namespace": "default", "es_name": "elasticsearch7", "kind": "Secret", "namespace": "default", "name": "elasticsearch7-es-internal-users"}

  • whether the aforementioned Secret exists with the following keys:
apiVersion: v1
kind: Secret
metadata:
  labels:
    common.k8s.elastic.co/type: elasticsearch
    eck.k8s.elastic.co/credentials: "true"
    elasticsearch.k8s.elastic.co/cluster-name: elasticsearch7
  name: elasticsearch7-es-internal-users
data:
  elastic-internal: REDACTED
  elastic-internal-monitoring: REDACTED
  elastic-internal-pre-stop: REDACTED
  elastic-internal-probe: REDACTED
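A quick way to run the second check is to list only the Secret's key names. A sketch, assuming the JSON sample below stands in for the output of the commented live-cluster command (values base64-encoded and redacted):

```shell
# On a live cluster (hypothetical namespace):
#   oc get secret elasticsearch7-es-internal-users -n <namespace> -o jsonpath='{.data}'
# Sample of that JSON, standing in for live output ("UkVEQUNURUQ=" is base64 for REDACTED):
data='{"elastic-internal":"UkVEQUNURUQ=","elastic-internal-monitoring":"UkVEQUNURUQ=","elastic-internal-pre-stop":"UkVEQUNURUQ=","elastic-internal-probe":"UkVEQUNURUQ="}'

# Extract just the key names; a healthy install on operator 2.8+ has all four.
keys=$(printf '%s\n' "$data" | tr ',' '\n' | sed -E 's/.*"([^"]+)":.*/\1/')
echo "$keys"
```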

ElSamhaa commented Nov 15, 2023

  1. The logs have rotated. I'll recreate the Elasticsearch CR and update you; I've already tried this many times and reproduced the bug consistently.

  2. There is no elastic-internal-pre-stop key in the Secret; i.e., this is what I'm getting:

apiVersion: v1
data:
  elastic-internal: redacted
  elastic-internal-monitoring: redacted
  elastic-internal-probe: redacted
kind: Secret
metadata:
  labels:
    common.k8s.elastic.co/type: elasticsearch
    eck.k8s.elastic.co/credentials: "true"
    elasticsearch.k8s.elastic.co/cluster-name: elasticsearch7
  name: elasticsearch7-es-internal-users
  namespace: redacted
type: Opaque

Update: manually patching the Secret with a random value for the missing key doesn't have any effect either. The ECK controller just overwrites the Secret and removes the elastic-internal-pre-stop key.

ElSamhaa commented

@barkbay I've tried removing the Elasticsearch CR and deleting the only controller pod in the replica set before reapplying the CR, and this is the message field of the Elasticsearch pod logs:

$ cat logs-saved-to-file | jq -r .message

readiness probe failed
readiness probe failed
version[7.17.11], pid[7], build[default/docker/eeedb98c60326ea3d46caef960fb4c77958fb885/2023-06-23T05:33:12.261262042Z], OS[Linux/4.18.0-305.19.1.el8_4.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/20.0.1/20.0.1+9-29]
JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -Djava.security.manager=allow, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-5785045208094565215, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Des.cgroups.hierarchy.override=/, -Xms2g, -Xmx2g, -XX:MaxDirectMemorySize=1073741824,-XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker, -Des.bundled_jdk=true]
readiness probe failed
readiness probe failed
readiness probe failed
loaded module [aggs-matrix-stats]
loaded module [analysis-common]
loaded module [constant-keyword]
loaded module [frozen-indices]
loaded module [ingest-common]
loaded module [ingest-geoip]
loaded module [ingest-user-agent]
loaded module [kibana]
loaded module [lang-expression]
loaded module [lang-mustache]
loaded module [lang-painless]
loaded module [legacy-geo]
loaded module [mapper-extras]
loaded module [mapper-version]
loaded module [parent-join]
loaded module [percolator]
loaded module [rank-eval]
loaded module [reindex]
loaded module [repositories-metering-api]
loaded module [repository-encrypted]
loaded module [repository-url]
loaded module [runtime-fields-common]
loaded module [search-business-rules]
loaded module [searchable-snapshots]
loaded module [snapshot-repo-test-kit]
loaded module [spatial]
loaded module [transform]
loaded module [transport-netty4]
loaded module [unsigned-long]
loaded module [vector-tile]
loaded module [vectors]
loaded module [wildcard]
loaded module [x-pack-aggregate-metric]
loaded module [x-pack-analytics]
loaded module [x-pack-async]
loaded module [x-pack-async-search]
loaded module [x-pack-autoscaling]
loaded module [x-pack-ccr]
loaded module [x-pack-core]
loaded module [x-pack-data-streams]
loaded module [x-pack-deprecation]
loaded module [x-pack-enrich]
loaded module [x-pack-eql]
loaded module [x-pack-fleet]
loaded module [x-pack-graph]
loaded module [x-pack-identity-provider]
loaded module [x-pack-ilm]
loaded module [x-pack-logstash]
loaded module [x-pack-ml]
loaded module [x-pack-monitoring]
loaded module [x-pack-ql]
loaded module [x-pack-rollup]
loaded module [x-pack-security]
loaded module [x-pack-shutdown]
loaded module [x-pack-sql]
loaded module [x-pack-stack]
loaded module [x-pack-text-structure]
loaded module [x-pack-voting-only-node]
loaded module [x-pack-watcher]
no plugins loaded
[node.ml] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
[node.data] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/rbd1)]], net usable_space [29.3gb], net total_space [29.4gb], types [ext4]
heap size [2gb], compressed ordinary object pointers [true]
[node.master] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
[node.ingest] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
node name [elasticsearch7-es-default-0], node ID [56P9pGKBQEqZT-B5vXliTw], cluster name [elasticsearch7], roles [transform, data_frozen, master, remote_cluster_client, data, data_content, data_hot, data_warm, data_cold, ingest]
legacy role settings [node.data, node.ingest, node.master, node.ml] are deprecated, use [node.roles=[transform, data_frozen, master, remote_cluster_client, data, data_content, data_hot, data_warm, data_cold, ingest]]
readiness probe failed
readiness probe failed
readiness probe failed
readiness probe failed
searches will not be routed based on awareness attributes starting in version 8.0.0; to opt into this behaviour now please set the system property [es.search.ignore_awareness_attributes] to [true]
readiness probe failed
readiness probe failed
readiness probe failed
readiness probe failed
[controller/195] [Main.cc@122] controller (64 bit): Version 7.17.11 (Build 4686411755665b) Copyright (c) 2023 Elasticsearch BV
readiness probe failed
license mode is [trial], currently licensed security realms are [reserved/reserved,file/file1,native/native1]
parsed [50] roles from file [/usr/share/elasticsearch/config/roles.yml]
readiness probe failed
initialized default databases [[GeoLite2-Country.mmdb, GeoLite2-City.mmdb, GeoLite2-ASN.mmdb]], config databases [[]] and watching [/usr/share/elasticsearch/config/ingest-geoip] for changes
initialized database registry, using geoip-databases directory [/tmp/elasticsearch-5785045208094565215/geoip-databases/56P9pGKBQEqZT-B5vXliTw]
readiness probe failed
creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
using discovery type [zen] and seed hosts providers [settings, file]
readiness probe failed
gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
readiness probe failed
initialized
starting ...
persistent cache index loaded
deprecation component started
publish_address {10.128.2.251:9300}, bound_addresses {[::]:9300}
creating template [.monitoring-alerts-7] with version [7]
creating template [.monitoring-es] with version [7]
creating template [.monitoring-kibana] with version [7]
creating template [.monitoring-logstash] with version [7]
creating template [.monitoring-beats] with version [7]
readiness probe failed
bound or publishing to a non-loopback address, enforcing bootstrap checks
setting initial configuration to VotingConfiguration{56P9pGKBQEqZT-B5vXliTw}
elected-as-master ([1] nodes joined)[{elasticsearch7-es-default-0}{56P9pGKBQEqZT-B5vXliTw}{TQcYBCQ2TaWnWoYPsca2nQ}{10.128.2.251}{10.128.2.251:9300}{cdfhimrstw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1, version: 1, delta: master node changed {previous [], current [{elasticsearch7-es-default-0}{56P9pGKBQEqZT-B5vXliTw}{TQcYBCQ2TaWnWoYPsca2nQ}{10.128.2.251}{10.128.2.251:9300}{cdfhimrstw}]}
cluster UUID set to [IolPXe9oScW8KM3POnPkiQ]
master node changed {previous [], current [{elasticsearch7-es-default-0}{56P9pGKBQEqZT-B5vXliTw}{TQcYBCQ2TaWnWoYPsca2nQ}{10.128.2.251}{10.128.2.251:9300}{cdfhimrstw}]}, term: 1, version: 1, reason: Publication{term=1, version=1}
publish_address {elasticsearch7-es-default-0.elasticsearch7-es-default.redacted.svc/10.128.2.251:9200}, bound_addresses {[::]:9200}
started
recovered [0] indices into cluster_state
adding index template [.ml-stats] for index patterns [.ml-stats-*]
adding index template [.ml-notifications-000002] for index patterns [.ml-notifications-000002]
adding index template [.ml-anomalies-] for index patterns [.ml-anomalies-*]
adding index template [.ml-state] for index patterns [.ml-state*]
adding component template [synthetics-settings]
adding component template [logs-mappings]
adding component template [metrics-mappings]
adding component template [data-streams-mappings]
adding component template [logs-settings]
adding component template [synthetics-mappings]
adding component template [metrics-settings]
adding index template [.watch-history-13] for index patterns [.watcher-history-13*]
adding index template [ilm-history] for index patterns [ilm-history-5*]
adding index template [.slm-history] for index patterns [.slm-history-5*]
adding component template [.deprecation-indexing-settings]
adding component template [.deprecation-indexing-mappings]
adding index template [logs] for index patterns [logs-*-*]
adding index template [synthetics] for index patterns [synthetics-*-*]
adding index template [metrics] for index patterns [metrics-*-*]
adding index template [.deprecation-indexing-template] for index patterns [.logs-deprecation.*]
adding index lifecycle policy [ml-size-based-ilm-policy]
adding index lifecycle policy [synthetics]
adding index lifecycle policy [metrics]
adding index lifecycle policy [logs]
adding index lifecycle policy [90-days-default]
adding index lifecycle policy [365-days-default]
adding index lifecycle policy [180-days-default]
adding index lifecycle policy [7-days-default]
adding index lifecycle policy [30-days-default]
adding index lifecycle policy [watch-history-ilm-policy]
adding index lifecycle policy [ilm-history-ilm-policy]
adding index lifecycle policy [slm-history-ilm-policy]
adding index lifecycle policy [.deprecation-indexing-ilm-policy]
adding index lifecycle policy [.fleet-actions-results-ilm-policy]
updating geoip databases
fetching geoip databases overview from [https://geoip.elastic.co/v1/database?elastic_geoip_service_tos=agree]
license [a8eb17dd-2505-4a49-99b0-c97e0dd009ea] mode [basic] - valid
license mode is [basic], currently licensed security realms are [reserved/reserved,file/file1,native/native1]
Active license is now [BASIC]; Security is enabled
[.ds-.logs-deprecation.elasticsearch-default-2023.11.14-000001] creating index, cause [initialize_data_stream], templates [.deprecation-indexing-template], shards [1]/[1]
updating number_of_replicas to [0] for indices [.ds-.logs-deprecation.elasticsearch-default-2023.11.14-000001]
adding data stream [.logs-deprecation.elasticsearch-default] with write index [.ds-.logs-deprecation.elasticsearch-default-2023.11.14-000001], backing indices [], and aliases []
creating shutdown record {nodeId=[56P9pGKBQEqZT-B5vXliTw], type=[RESTART], reason=[1682857877]}
Starting node shutdown sequence for ML
moving index [.ds-.logs-deprecation.elasticsearch-default-2023.11.14-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [.deprecation-indexing-ilm-policy]
Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.ds-.logs-deprecation.elasticsearch-default-2023.11.14-000001][0]]]).
updating existing shutdown record {nodeId=[56P9pGKBQEqZT-B5vXliTw], type=[RESTART], reason=[1682857877]} with new record {nodeId=[56P9pGKBQEqZT-B5vXliTw], type=[RESTART], reason=[1682857892]}
moving index [.ds-.logs-deprecation.elasticsearch-default-2023.11.14-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [.deprecation-indexing-ilm-policy]
[.ds-.logs-deprecation.elasticsearch-default-2023.11.14-000001/ZnPJUAZlQIWfYXE0LaQs0g] update_mapping [_doc]
moving index [.ds-.logs-deprecation.elasticsearch-default-2023.11.14-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [.deprecation-indexing-ilm-policy]
[.ds-ilm-history-5-2023.11.14-000001] creating index, cause [initialize_data_stream], templates [ilm-history], shards [1]/[0]
adding data stream [ilm-history-5] with write index [.ds-ilm-history-5-2023.11.14-000001], backing indices [], and aliases []
moving index [.ds-ilm-history-5-2023.11.14-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ilm-history-ilm-policy]
exception during geoip databases update
moving index [.ds-ilm-history-5-2023.11.14-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [ilm-history-ilm-policy]
Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.ds-ilm-history-5-2023.11.14-000001][0]]]).
moving index [.ds-ilm-history-5-2023.11.14-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [ilm-history-ilm-policy]
stopping ...
stopping watch service, reason [shutdown initiated]
[controller/195] [Main.cc@174] ML controller exiting
Native controller process has stopped - no new native processes can be started
watcher has stopped and shutdown
readiness probe failed
stopped
closing ...
closed

thbkrkr commented Nov 16, 2023

#7315 (comment) suggests looking at the operator pod logs, not the Elasticsearch pod logs.

The elastic-internal-pre-stop user was introduced in ECK 2.8 (#6544). Could it be that an older version of the operator is still running?
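One way to rule this out is to list every running operator cluster-wide and compare their images. A sketch, assuming the default install's `control-plane=elastic-operator` label; the sample output below stands in for a cluster where a stale second operator is still running (namespaces and tags are hypothetical):

```shell
# On a live cluster (label assumes the default ECK install manifests):
#   oc get pods -A -l control-plane=elastic-operator \
#     -o custom-columns=NS:.metadata.namespace,POD:.metadata.name,IMAGE:.spec.containers[0].image
# Sample output standing in for a cluster with a stale second install:
pods='elastic-system elastic-operator-0 docker.elastic.co/eck/eck-operator:2.9.0
old-ns elastic-operator-0 docker.elastic.co/eck/eck-operator:2.7.0'

# Two distinct images means two operator versions are contending for the same CRs.
distinct=$(printf '%s\n' "$pods" | awk '{print $3}' | sort -u | grep -c .)
if [ "$distinct" -gt 1 ]; then
  echo "conflicting operator versions detected"
fi
```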

ElSamhaa commented

The issue turned out to be caused by running two different versions of the ECK operator simultaneously. It seems they both contended to manage the Elasticsearch CR. Deleting one of them resolved the issue.
