kubeadm v1.15 blog post #15026
Conversation
Deploy preview for kubernetes-io-master-staging ready! Built with commit 0a2ad28 https://deploy-preview-15026--kubernetes-io-master-staging.netlify.com
to quickly and easily bootstrap minimum viable clusters that are fully compliant with
[Certified Kubernetes](https://github.com/cncf/k8s-conformance/blob/master/terms-conditions/Certified_Kubernetes_Terms.md) guidelines.
It’s been under active development by [SIG Cluster Lifecycle](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle)
since 2016 and we graduated it from beta to
maybe we can rephrase it to "and graduated from beta to"
[Certified Kubernetes](https://github.com/cncf/k8s-conformance/blob/master/terms-conditions/Certified_Kubernetes_Terms.md) guidelines.
It’s been under active development by [SIG Cluster Lifecycle](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle)
since 2016 and we graduated it from beta to
[stable and generally available (GA) in end of 2018](https://kubernetes.io/blog/2018/12/04/production-ready-kubernetes-cluster-creation-with-kubeadm/).
maybe "generally available (GA) at the end of 2018"
The core of the kubeadm interface is quite simple: new control plane nodes are created by running
<strong><code>kubeadm init</code></strong>, worker nodes are joined to the control plane by running
<strong><code>kubeadm join</code></strong>. Also included are common utilities for managing already bootstrapped
clusters, such as control plane upgrades and token and certificate renewal.
maybe "control plane upgrades, token and certificate renewal"?
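For readers outside the review thread, the interface the quoted paragraph describes boils down to two commands. A rough sketch (the endpoint, token, and hash below are placeholders, not values from this PR):

```shell
# On the machine that will become the first control-plane node:
kubeadm init

# kubeadm init prints a join command for workers, shaped roughly like:
kubeadm join 192.168.0.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Day-2 utilities mentioned in the paragraph (upgrades, token management):
kubeadm upgrade plan
kubeadm token create
```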
* Infrastructure provisioning
* Third-party networking
* Non-critical add-ons for e.g. monitoring, logging, and visualization
maybe remove "for"
This is a small nit across the doc that isn't 100% necessary -- but I'd replace links that reference other documentation on https://kubernetes.io with relative links from the root so that the post will link to the correct doc when it becomes versioned. (e.g. https://v1-14.docs.kubernetes.io/)
Example:
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
becomes
/docs/setup/independent/create-cluster-kubeadm/
### 2019 plans

We are focusing our efforts this year around graduating the configuration file format to GA (`kubeadm.k8s.io/v1`), graduating this super-easy High Availability flow to stable, and providing better tools around rotating certificates needed for running the cluster automatically.
nit: "this year" can be taken out since the header is "2019 plans"
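As a concrete example of the certificate tooling the quoted sentence refers to, the v1.15 CLI grew renewal helpers along these lines (a sketch only; these commands must run on a real control-plane node):

```shell
# Show when each certificate used by the cluster expires:
kubeadm alpha certs check-expiration

# Renew all control-plane certificates in one go:
kubeadm alpha certs renew all
```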
Informal feedback.
I've made a number of suggestions. Feel free to accept / ignore these at your discretion @luxas
user-friendly way. kubeadm's scope is limited to the local machine’s filesystem and the Kubernetes API, and it is
intended to be a _composable building block for higher-level tools_.

The core of the kubeadm interface is quite simple: new control plane nodes are created by running
new control plane nodes are created by running
You might want to address the reader as “you” (see https://kubernetes.io/docs/contribute/style/style-guide/#address-the-reader-as-you)
* **Automated certificate transfer**. kubeadm implements an automatic certificate copy feature to automate the distribution of all the certificate authorities/keys that must be shared across all the control-planes nodes in order to get your cluster to work. This feature can be activated by passing `--upload-certs` to `kubeadm init`; see [configure and deploy an HA control plane](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/) for more details. This is an explicit opt-in feature, you can also distribute the certificates manually in your preferred way. \
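The flow this bullet describes looks roughly like the following, per the v1.15 HA documentation (the load-balancer address, token, hash, and certificate key are placeholders):

```shell
# First control-plane node: init and upload the shared certs as an
# encrypted Secret (explicit opt-in via --upload-certs).
kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:6443" --upload-certs

# Additional control-plane nodes: join using the certificate key that
# kubeadm init printed; the uploaded certs are fetched automatically.
kubeadm join LOAD_BALANCER_DNS:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>
```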
* **Dynamically-growing etcd cluster**. In case you are not providing an external etcd cluster, kubeadm automatically generates a static pod hosted etcd member on each control-plane node; all the etcd members are joined in a “stacked” etcd cluster that grows together with your high availability control-plane \ |
Suggested change:
- * **Dynamically-growing etcd cluster**. In case you are not providing an external etcd cluster, kubeadm automatically generates a static pod hosted etcd member on each control-plane node; all the etcd members are joined in a “stacked” etcd cluster that grows together with your high availability control-plane \
+ * **Dynamically-growing etcd cluster**. When you're not using an external etcd cluster, kubeadm automatically adds a new etcd member on each control-plane node, running as a static pod. All the etcd members are joined in a “stacked” etcd cluster that grows together with your high availability control-plane. \
* **Concurrent joining**. Similarly to what already implemented for worker nodes, it is possible to join control-plane nodes whenever, in any order; joining control-plane nodes in parallel is supported as well. \ |
Suggested change:
- * **Concurrent joining**. Similarly to what already implemented for worker nodes, it is possible to join control-plane nodes whenever, in any order; joining control-plane nodes in parallel is supported as well. \
+ * **Concurrent joining**. Similarly to what already implemented for worker nodes, you join control-plane nodes whenever, in any order, or even in parallel. \
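A sketch of what concurrent joining permits (hypothetical host names; the join command is the one printed by `kubeadm init`, elided here):

```shell
# The same join command can run on several nodes at once, in any order.
for node in cp-2 cp-3; do
  ssh "$node" 'kubeadm join ... --control-plane --certificate-key <key>' &
done
wait  # both control-plane joins proceed in parallel
```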
You can argue that there are hardly two Kubernetes clusters that are configured equally, and hence there is a need to customize how the cluster is set up depending on the environment. One way of configuring a component is via flags. However, this has some scalability limitations:

* **Hard to maintain.** When $component’s flag set grows over 30+ flags, configuring it becomes really painful
Suggested change:
- * **Hard to maintain.** When $component’s flag set grows over 30+ flags, configuring it becomes really painful
+ * **Hard to maintain.** When a component’s flag set grows over 30+ flags, configuring it becomes really painful.
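kubeadm's alternative to flag sprawl is its versioned config file. A minimal sketch using the `v1beta2` format that shipped with kubeadm v1.15 (field values here are illustrative only):

```shell
# Write a minimal kubeadm config instead of passing many flags.
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
controlPlaneEndpoint: "LOAD_BALANCER_DNS:6443"
networking:
  podSubnet: "10.244.0.0/16"
EOF

# A single flag then points kubeadm at it (run on a real machine):
# kubeadm init --config kubeadm-config.yaml
```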
Some handy links if you want to start contributing:

* We have a recorded [New Contributor Onboarding](https://www.youtube.com/watch?v=Bof9aveB3rA) session on YouTube.
Suggested change:
- * We have a recorded [New Contributor Onboarding](https://www.youtube.com/watch?v=Bof9aveB3rA) session on YouTube.
+ * You can watch the SIG Cluster Lifecycle [New Contributor Onboarding](https://www.youtube.com/watch?v=Bof9aveB3rA) session on YouTube.
* We have a recorded [New Contributor Onboarding](https://www.youtube.com/watch?v=Bof9aveB3rA) session on YouTube.
* Look out for “good first issue”, “help wanted” and “sig/cluster-lifecycle” labeled issues in our repositories
(e.g. [kubernetes/kubeadm](https://github.com/kubernetes/kubeadm))
* Join **#sig-cluster-lifecycle**, **#kubeadm**, **#cluster-api**, **#minikube**, **#kind**, etc. in Slack
Which Slack organisation? (I assume it's the main Kubernetes Slack).
Is it worth linking there?
@@ -0,0 +1,202 @@
# Automated High Availability in kubeadm v1.15: Batteries Included But Swappable |
This blog post needs usual front matter. Have a look at other blog posts for an example.
This release wouldn’t have been possible without the help of the great people that have been contributing to SIG Cluster Lifecycle and kubeadm. We would like to thank all the kubeadm contributors and companies making it possible for their developers to work on Kubernetes!

## Written by:
Usually, Kubernetes blog posts list the author(s) just below the front matter.
Signed-off-by: Jorge O. Castro <jorgec@vmware.com>
Ok I've addressed all the review comments, updated the logo, and added the front matter, PTAL.
I also updated some minor things on it now (e.g. addressed the author and subproject owners attributions) @timothysc want to sign off?
/assign @kbarnard10 @timothysc
/lgtm
Nice 👌 /lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: kbarnard10 The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Just merge whenever you're ready. Thanks!
/lgtm
* Initial blog post draft for kubeadm v1.15
* More edits, and add pictures
* Incorporate changes from review and add hugo metadata

Signed-off-by: Jorge O. Castro <jorgec@vmware.com>

* Update logo and some final edits

Signed-off-by: Jorge O. Castro <jorgec@vmware.com>

* Some minor fixes, fix the logo and author attribution
* Fix list formatting
* Fix an other list formatting issue
* Update 2019-06-24-kubeadm-ha-v115.md
cc @kubernetes/sig-cluster-lifecycle @castrojo
FYI, I have checked "Allow edits from maintainers". Would appreciate grammar corrections, formatting fixes etc.
Thanks!