[elasticsearch] Allow for configurable health status value in readinessProbe #60
I've run into the same issue. It would be nice to configure the required level within values; "green" could be the default to stay backwards compatible.
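As a sketch of that suggestion: the colour could come from a chart value instead of being hardcoded. Note this is illustrative only — the option name `clusterHealthStatus` is hypothetical, and the exact probe command in the chart may differ.

```yaml
# values.yaml -- "clusterHealthStatus" is a hypothetical option name;
# "green" stays the default for backwards compatibility
clusterHealthStatus: "green"

# statefulset template sketch -- the colour is templated, not hardcoded
readinessProbe:
  exec:
    command:
      - sh
      - -c
      - 'curl -fsS "http://localhost:9200/_cluster/health?wait_for_status={{ .Values.clusterHealthStatus }}&timeout=1s"'
```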
Doing that would add
I'll work on a PR using option 3, for now.
Actually, option 3 above would be a breaking change for anyone who has already implemented values for the readinessProbe. The safest option is 1.
Thank you for your feedback. After thinking about it, my first suggestion was not the best one.
@tgruenert - see #62 :)
My reading of the readinessProbe is that it checks for green once, and once green, it only checks the node's root path. I know it doesn't require green all the time, because my cluster goes yellow (normal) during maintenance and none of my Elasticsearch nodes fail their readinessProbe.
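The two-phase behaviour described above can be sketched as decision logic. This is an illustrative Python model of the idea (the chart's actual probe is a shell script); the marker-file mechanism and the probe callables here are assumptions about how "check green once, then only check locally" could be implemented, not the chart's exact code.

```python
import os

def check_ready(start_file, local_probe, cluster_green_probe):
    """Return True when the node should be considered ready.

    start_file: marker path recording that the node has joined once.
    local_probe: callable, cheap check that the node answers locally.
    cluster_green_probe: callable, cluster-wide colour check.
    (All three are illustrative assumptions, not chart internals.)
    """
    if os.path.exists(start_file):
        # Node has already joined once: only verify it answers locally,
        # so a yellow cluster during maintenance doesn't fail the probe.
        return local_probe()
    # First startup: require the cluster-wide check to pass once, then
    # drop a marker so later probes skip the cluster-wide check.
    if cluster_green_probe():
        open(start_file, "w").close()
        return True
    return False
```

This shape explains the observation above: once the marker exists, the cluster going yellow no longer fails readiness.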
This issue was raised related to the coordinator node issue and this comment - #58 (comment). I found this issue because our data nodes restarted and some shards were left unassigned. In the interim, our coordinator nodes restarted and, because the cluster was not green, the readiness probe failed, meaning Kibana, which was pointing at the coordinator node, could not connect to Elasticsearch, and we could not do index management through the UI to get the cluster green. In conjunction with defining how coordinator nodes will work, this is to allow those types of nodes (i.e. non-mutating) to still be accessible and become ready in this scenario.
Ahh. Makes sense. I think the broader problem is that the readinessProbe currently evaluates a cluster-wide value (cluster state color) when readinessProbes should probably focus on individual pod health, not the health of the whole Elasticsearch cluster. A red cluster state doesn't imply the cluster can't serve traffic, but that's what the readinessProbe is inferring today. Could we think about a broader per-node health check that doesn't look at cluster health?
agree
There is no real health API for individual nodes, and I think it's still valid to check that the instance is part of the cluster before it's deemed ready. The closest I can think of is the Node Stats API, but that (given the query itself is valid) always returns success, and there are no "wait_for" parameters, so using it would involve a further step of interpreting the JSON response.
And with that, to know you are part of a cluster, it's probably better to query a service that is not localhost. FYI, the PR for the original issue I raised has been merged. Once this discussion ends I am happy to look at this again.
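The "further step of interpreting the JSON response" could look roughly like this. A heavily hedged sketch: the fields inspected (`cluster_name`, `_nodes.successful`) are assumptions about what a membership signal in a Node Stats-style body could look like, not a documented contract, and `node_joined` is a hypothetical helper, not an existing API.

```python
import json

def node_joined(stats_body: str) -> bool:
    """Interpret a Node Stats-style JSON body as a membership check.

    Needed because the API itself returns success for any valid query
    and offers no wait_for_* parameters to block on a desired state.
    """
    doc = json.loads(stats_body)
    nodes = doc.get("_nodes", {})
    # Assumption: a reported cluster name plus at least one successful
    # node response suggests this instance has joined the cluster.
    return bool(doc.get("cluster_name")) and nodes.get("successful", 0) >= 1
```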
This would be great, but unfortunately statefulsets also use the readinessProbe to determine whether or not the next node in the statefulset can be upgraded. Using a purely node-level health check here would mean that rolling upgrades of Elasticsearch or Kubernetes could take down the Elasticsearch cluster. If anyone has a better idea of how to accomplish this in a statefulset without using a "cluster aware on startup" readinessProbe, I would love to hear it!
Just to clarify what happened here. Did all of these nodes get restarted at the same time? Or was this a controlled rolling upgrade scenario where Kubernetes was doing a planned rolling restart? If all nodes do get restarted at exactly the same time, it is indeed a shame that the readinessProbe will mean that no traffic is served until the cluster is green. So having coordinating nodes that always serve traffic does make sense in that situation.
Closing this as it was fixed by @rwittrick in #62. Thank you so much for opening and fixing the issue :)
In the elasticsearch statefulset readinessProbe command, the cluster health status has a hardcoded value of "green". It's not always the case that, as with coordinator nodes, green is the minimum acceptable state for cluster health.