From 0d8929da3d41ece55aa0a14860a36e5095386b08 Mon Sep 17 00:00:00 2001
From: Jason DeTiberus
Date: Mon, 24 Sep 2018 17:17:39 -0400
Subject: [PATCH] kubeadm - Ha upgrade updates (#10340)

* Update HA upgrade docs

* Adds external etcd HA upgrade guide

Signed-off-by: Chuck Ha

* copyedit

* more edits
---
 .../kubeadm/kubeadm-upgrade-ha.md | 220 +++++++++++++-----
 1 file changed, 162 insertions(+), 58 deletions(-)

diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-ha.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-ha.md
index dfa8c127d9a4a..064dae86d3bc8 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-ha.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-ha.md

---
reviewers:
- jamiehannaford
- luxas
- timothysc
- jbeda
title: Upgrading kubeadm HA clusters from v1.11 to v1.12
content_template: templates/task
---

{{% capture overview %}}

This page explains how to upgrade a highly available (HA) Kubernetes cluster created with `kubeadm` from version 1.11.x to version 1.12.x. In addition to upgrading, you must also follow the instructions in [Creating HA clusters with kubeadm](/docs/setup/independent/high-availability/).

{{% /capture %}}

{{% capture prerequisites %}}

Before proceeding:

- You need a `kubeadm` HA cluster running version 1.11 or higher.
- Make sure you read the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md) carefully.
- Make sure to back up any important components, such as app-level state stored in a database. `kubeadm upgrade` does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice. As one example, see the etcd snapshot sketch after this list.
- Check the prerequisites for [Upgrading/downgrading kubeadm clusters from v1.11 to v1.12](/docs/tasks/administer-cluster/kubeadm-upgrade-1-12/).
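For example, if you want a snapshot of the etcd keyspace before you begin, something like the following might work. This is only a sketch, not part of the official upgrade procedure: it assumes a v3 `etcdctl` is available on a control plane node and that the cluster uses kubeadm's default certificate locations under `/etc/kubernetes/pki/etcd`. Adjust the endpoint, certificate paths, and output path for your topology; external etcd clusters keep their certificates elsewhere.

```shell
# Sketch: save an etcd snapshot before upgrading. Assumes etcdctl v3
# and kubeadm's default certificate paths; substitute your own.
ETCDCTL_API=3 etcdctl \
  --endpoints https://127.0.0.1:2379 \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key /etc/kubernetes/pki/etcd/healthcheck-client.key \
  snapshot save /root/etcd-backup-pre-upgrade.db
```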
{{< note >}}
**Note**: All commands on any control plane or etcd node should be run as root.
{{< /note >}}

{{% /capture %}}

{{% capture steps %}}

## Prepare for both methods

Upgrade `kubeadm` to the version that matches the Kubernetes version you are upgrading to:

```shell
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm && \
apt-mark hold kubeadm
```

Check prerequisites and determine the upgrade versions:

```shell
kubeadm upgrade plan
```

You should see something like the following:

    Upgrade to the latest stable version:

    COMPONENT            CURRENT   AVAILABLE
    API Server           v1.11.3   v1.12.0
    Controller Manager   v1.11.3   v1.12.0
    Scheduler            v1.11.3   v1.12.0
    Kube Proxy           v1.11.3   v1.12.0
    CoreDNS              1.1.3     1.2.2
    Etcd                 3.2.18    3.2.24

## Stacked control plane nodes

### Upgrade the first control plane node

Modify `configmap/kubeadm-config` for this control plane node:

```shell
kubectl get configmap -n kube-system kubeadm-config -o yaml > kubeadm-config-cm.yaml
```

Open the file in an editor and update the following values:

- `api.advertiseAddress`

  Set this to the local node's IP address.

- `etcd.local.extraArgs.advertise-client-urls`

  Set this to the local node's IP address.

- `etcd.local.extraArgs.initial-advertise-peer-urls`

  Set this to the local node's IP address.

- `etcd.local.extraArgs.listen-client-urls`

  Set this to the local node's IP address.

- `etcd.local.extraArgs.listen-peer-urls`

  Set this to the local node's IP address.

- `etcd.local.extraArgs.initial-cluster`

  Update this to include the hostname and IP address pair for each control plane node in the cluster. For example:

      "ip-172-31-92-42=https://172.31.92.42:2380,ip-172-31-89-186=https://172.31.89.186:2380,ip-172-31-90-42=https://172.31.90.42:2380"

You must also pass an additional argument, `initial-cluster-state: existing`, in `etcd.local.extraArgs`.
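Taken together, the edited sections of the file might look like the following sketch. The layout follows the dotted paths listed above; the node name `ip-172-31-92-42` and all addresses are the hypothetical values from the example, so substitute your own.

```yaml
# Sketch of the edited values for a node named ip-172-31-92-42
# (hypothetical name and addresses; substitute your own):
api:
  advertiseAddress: 172.31.92.42
etcd:
  local:
    extraArgs:
      advertise-client-urls: https://172.31.92.42:2379
      initial-advertise-peer-urls: https://172.31.92.42:2380
      listen-client-urls: https://172.31.92.42:2379
      listen-peer-urls: https://172.31.92.42:2380
      initial-cluster: "ip-172-31-92-42=https://172.31.92.42:2380,ip-172-31-89-186=https://172.31.89.186:2380,ip-172-31-90-42=https://172.31.90.42:2380"
      initial-cluster-state: existing
```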
When you are done editing, apply the updated ConfigMap:

```shell
kubectl apply -f kubeadm-config-cm.yaml --force
```

Start the upgrade:

```shell
kubeadm upgrade apply v1.12.0
```

You should see something like the following:

    [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.0". Enjoy!

The `kubeadm-config` ConfigMap is now updated from the `v1alpha2` version to `v1alpha3`.

### Upgrade additional control plane nodes

Each additional control plane node requires modifications that differ from those for the first control plane node. Start by fetching the ConfigMap again:

```shell
kubectl get configmap -n kube-system kubeadm-config -o yaml > kubeadm-config-cm.yaml
```

Open the file in an editor and update the following values in the `ClusterConfiguration`:

- `etcd.local.extraArgs.advertise-client-urls`

  Set this to the local node's IP address.

- `etcd.local.extraArgs.initial-advertise-peer-urls`

  Set this to the local node's IP address.

- `etcd.local.extraArgs.listen-client-urls`

  Set this to the local node's IP address.

- `etcd.local.extraArgs.listen-peer-urls`

  Set this to the local node's IP address.

You must also modify the `ClusterStatus` section to add a mapping for the current host under `apiEndpoints`.
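For example, after adding an entry for the current host, the `apiEndpoints` mapping in `ClusterStatus` might look like the following sketch. The node names and addresses reuse the hypothetical values from above, and `6443` assumes the default API server bind port; use your cluster's actual values.

```yaml
# Sketch of a ClusterStatus apiEndpoints mapping after adding the
# current host (hypothetical names, addresses, and port):
apiEndpoints:
  ip-172-31-92-42:
    advertiseAddress: 172.31.92.42
    bindPort: 6443
  ip-172-31-89-186:
    advertiseAddress: 172.31.89.186
    bindPort: 6443
```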
Add an annotation for the CRI socket to the current node. For example, to use Docker:

```shell
kubectl annotate node <nodename> kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
```

Start the upgrade:

```shell
kubeadm upgrade apply v1.12.0
```

You should see something like the following:

    [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.0". Enjoy!

## External etcd

### Upgrade each control plane node

Get a copy of the kubeadm config used to create this cluster. The config should be the same for every node, and it must exist on every control plane node before the upgrade begins:

```shell
# on each control plane node
kubectl get configmap -n kube-system kubeadm-config -o jsonpath={.data.MasterConfiguration} > kubeadm-config.yaml
```

Now run the upgrade on each control plane node, one at a time:

```shell
kubeadm upgrade apply v1.12.0 --config kubeadm-config.yaml
```

### Upgrade etcd

Upgrading Kubernetes from v1.11 to v1.12 only changes the patch version of etcd, from v3.2.18 to v3.2.24. Because both versions can run in the same cluster, this is a rolling upgrade with no downtime.

On the first host, modify the etcd manifest:

```shell
sed -i 's/3.2.18/3.2.24/' /etc/kubernetes/manifests/etcd.yaml
```

Wait for the etcd process to reconnect. The other etcd nodes log warnings while this member restarts; this is expected.

Repeat this step on the other etcd hosts.
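Before moving on to the next host, you might want to confirm that the upgraded member has rejoined and that the cluster is healthy. A sketch, assuming a v3 `etcdctl` on the etcd host; the endpoints and certificate paths below are hypothetical, so substitute the values your etcd cluster actually uses:

```shell
# Sketch: check member health after upgrading one etcd host.
# Endpoints and certificate paths are hypothetical; use your own.
ETCDCTL_API=3 etcdctl \
  --endpoints https://172.31.92.42:2379,https://172.31.89.186:2379,https://172.31.90.42:2379 \
  --cacert /etc/etcd/ca.crt \
  --cert /etc/etcd/client.crt \
  --key /etc/etcd/client.key \
  endpoint health
```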
## Next steps

### Manually upgrade your CNI provider

Your Container Network Interface (CNI) provider might have its own upgrade instructions to follow. Check the [addons](/docs/concepts/cluster-administration/addons/) page to find your CNI provider and see whether you need to take additional upgrade steps.

### Update kubelet and kubectl packages

Upgrade the kubelet and kubectl by running the following on each node:

```shell
# use your distro's package manager, e.g. 'apt-get' on Debian-based systems
# for the versions stick to kubeadm's output (see above)
apt-mark unhold kubelet kubectl && \
apt-get update && \
apt-get install kubelet=<version> kubectl=<version> && \
apt-mark hold kubelet kubectl && \
systemctl restart kubelet
```

In this example a _deb_-based system is assumed and `apt-get` is used for installing the upgraded software. On rpm-based systems the equivalent is `yum install kubelet-<version> kubectl-<version>`.

Verify that the new version of the kubelet is running:

```shell
systemctl status kubelet
```

Verify that the upgraded node is available again by running the following command from wherever you run `kubectl`:

```shell
kubectl get nodes
```

If the `STATUS` column shows `Ready` for the upgraded host, you can continue. You might need to repeat the command until the node shows `Ready`.

## If something goes wrong

If the upgrade fails, see whether one of the following scenarios applies:

- If `kubeadm upgrade apply` failed to upgrade the cluster, it will try to perform a rollback. If this happens on the first master, the cluster is probably still intact.

  You can run `kubeadm upgrade apply` again, because it is idempotent and should eventually make sure the actual state is the desired state you are declaring. To recover from a bad state, you can also run `kubeadm upgrade apply` with `--force`, which lets you re-apply the version the cluster is already on (`x.x.x --> x.x.x`).

- If `kubeadm upgrade apply` failed on one of the secondary masters, the cluster is upgraded and working, but the secondary masters are in an undefined state. You need to investigate further and join the secondaries manually.

{{% /capture %}}