Rewrite elasticsearch multi example to use child charts. #22
Conversation
I suppose in this example, we could set
It'd be identical to the upgrade process for the current code (in master), which has two files (data + master) and separate helm releases, probably? Edit: I just tried this, and it works as expected (set
LGTM apart from needing a rebase to fix the integration tests and pull in the latest changes and version from master. The tests are currently failing because Google dropped support for GKE 1.9 clusters so the build is failing when trying to create that cluster.
Apart from the needed rebase, I love everything about this. This is a much nicer way to handle clusters with multiple node groups in a "helm native" way. Thanks a lot for trying this out, it has been on the todo list for too long and I'm glad to see it is really simple and easy to set up!
This approach has made me realise something else that could be changed to improve managing a full stack with helm charts using this method. In this example you are setting `masterService: "multi-master"`. If you also wanted to add Kibana to this setup you would now also need to add `elasticsearchURL: "http://multi-master:9200"`, which would be a bit silly.
So I think this means we should standardise on the connection variables for Elasticsearch so that they can be reused for Elasticsearch, Kibana, Logstash, Beats + whatever else in the future. I'll create a separate issue for it, to keep this PR clean.
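As a rough illustration of that idea (key names here are hypothetical, not a final convention), a standardised set of connection values might look like:

```yaml
# Hypothetical shared connection values: the same keys would be read by
# the elasticsearch, kibana, logstash and beats charts, so they only
# need to be defined once per stack. Names are illustrative only.
clusterName: "multi"
elasticsearchHosts: "http://multi-master:9200"
```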
Indeed! I'll share my stack chart, which includes Kibana, with you after New Year's.
Force-pushed from e92b7e2 to 03c5d6f
Rebased!
CI tests are failing because helm isn't initialised.
I think adding the
Doesn't this go against the original design of the Chart?
This still functions identically to the previous example. Instead of adding a new release and a new upgrade command, you can add a new node block.
To then demote the old nodes you would bump the replicas down one at a time, or manually move the shards off them and then remove the node block.

While this is the same in function, the terminology in the readme indeed no longer matches. I think we can replace the terminology "release" with "install". The important piece being that the chart itself does not attempt to handle multiple groups of nodes itself. Upgrading the master group first would involve bumping the

The motivation for doing it this way was to remove code duplication when specifying configuration that is shared between multiple node groups. This could also be solved by including multiple values files.

One advantage of using this method is that it gives you a single version for your cluster. If you want to roll back the previous change there is only one version to roll back, rather than manually needing to figure out the order in which you made the changes to the separate releases.

@rendhalver If you do find this example confusing though, maybe it is a sign that we shouldn't merge this. Or maybe instead have it as a separate example of how to manage multiple node groups in a single release.
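A minimal sketch of the single-values-file layout being described, assuming illustrative group names and replica counts:

```yaml
# One release, one values file. Shared settings live in a YAML anchor;
# each node group merges them in and overrides what it needs.
es-defaults: &defaults
  clusterName: "multi"

master:
  <<: *defaults
  nodeGroup: "master"
  replicas: 3
  roles:
    master: "true"
    ingest: "false"
    data: "false"

data:
  <<: *defaults
  nodeGroup: "data"
  replicas: 2

# Demoting old nodes means lowering `replicas` one step at a time (or
# moving shards off first) and then deleting the whole block. Because
# everything is one release, `helm rollback` reverts the entire cluster
# in a single step.
```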
It's not confusing; I understand what you are trying to do. I do admit I didn't read what the code does before I put in my comment. I think having it as a separate example is a better idea. If we start enforcing opinions then we are at risk of breaking the un-opinionated nature of the Chart.
On a side note, I copied this Chart into our local repo and we have used it to deploy 4 new clusters and it works really well.
Force-pushed from 03c5d6f to 5244c4f
@rendhalver I have some other ideas to make this chart style even more useful (so that settings can be inherited across parent->child chart values). Hopefully will have a PR to demo it this week. After that, we can discuss any future support for this style. And thank you for testing! :)
That sounds pretty cool!
@Crazybus I think this is ready for another round of reviews. The CI test failure seems unrelated (timeout waiting for the GKE cluster to come online).
The parent/child chart relationship is roughly: inheritance -> secure-elasticsearch -> elasticsearch

The objective is to demonstrate a parent chart (secure-elasticsearch) which sets broad business-specific defaults for Elasticsearch, and this chart is then used to deploy master and data nodes, inheriting those settings.

This example covers a few areas:

1) Having a child chart (secure-elasticsearch) add some baseline settings for the cluster.
2) A parent chart to inherit and amend the settings of the child secure-elasticsearch chart.
3) Using chart aliases to deploy master and data nodes independently. Same result as multi in PR #22, but instead of using yaml anchors, this uses helm child charts.
4) Add a job which deploys the license payload.
5) Add a job for deploying cluster and index settings. This can be useful for setting cluster-wide things like concurrent_recoveries, or for index-level settings such as node_left recovery delay. These settings live alongside the node settings, intended for easier reading.
6) (From previous commit) Elasticsearch node settings (elasticsearch.yml) are now specified by a structured value instead of a text block.

The `Makefile` does a few things to help make this testable outside of Elastic's infrastructure. Mainly, all secrets and key material are generated by `make secrets`. This is instead of using the `vault read` method that other examples are using.

Note: The `secure-elasticsearch` chart is in a parent directory because for some reason helm doesn't work if it were in ./charts/secure-elasticsearch due to some behavior in helm itself: helm/helm#2887
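A sketch of the chart-alias setup the commit message describes (helm 2 style requirements.yaml; the versions and repository paths are assumptions, not taken from the PR):

```yaml
# requirements.yaml in the parent chart: the same child chart is pulled
# in twice under different aliases, so the master and data node groups
# can be configured independently under `master:` and `data:` keys in
# the parent's values.yaml.
dependencies:
  - name: secure-elasticsearch
    version: "0.1.0"
    repository: "file://../secure-elasticsearch"
    alias: master
  - name: secure-elasticsearch
    version: "0.1.0"
    repository: "file://../secure-elasticsearch"
    alias: data
```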
Force-pushed from 4db3db4 to 5fffc89
Any update? Do you need help on this one? I can help bring in a multi example with several node groups (hot/warm/cold).
```yaml
# base settings for both node types (data + master)
es-defaults: &defaults
  clusterName: "multi"
  masterService: "multi-master"
```
I think we don't need this, as the masterService is computed from cluster name and not group name.
```yaml
ingest: "false"
data: "false"
```
```yaml
# To define common settings, it may be useful to use YAML anchors to use the same
```
In helm 2.14.3, the anchor has to be defined BEFORE using it, so moving this part on top of the file works.
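To illustrate the ordering constraint (a minimal sketch, not the example's actual values file):

```yaml
# The anchor must be defined before any alias that references it, so
# the shared defaults block goes at the top of the values file.
es-defaults: &defaults
  clusterName: "multi"

data:
  <<: *defaults        # works: &defaults was defined above
  nodeGroup: "data"
```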
Please do! (In a separate PR).
I agree with @rendhalver and think that we should add this as an example alongside the existing multi one that is being used for testing.
This PR has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. To track this PR (even if closed), please open a corresponding issue if one does not already exist.
5 months later, PR at #452
This PR has been automatically closed because it has not had recent activity since being marked as stale. Please reopen when work resumes.
As a newbie to helm, I found child charts interesting. This change makes the `multi` example use a single release with child charts instead of two separate helm releases. In practice, I don't know if this is the best route, because we'll want to upgrade masters before upgrading data nodes, for example, so this PR is mostly food-for-thought.

Thoughts? :)