---
reviewers:
- jamiehannaford
- luxas
- timothysc
- jbeda
title: Upgrading kubeadm HA clusters from v1.11 to v1.12
content_template: templates/task
---

{{% capture overview %}}

This guide is for upgrading `kubeadm` HA clusters from version 1.11 to 1.12. The term "`kubeadm` HA clusters" refers to clusters of more than one control plane node created with `kubeadm`. To set up an HA cluster for Kubernetes version 1.11, `kubeadm` requires additional manual steps. See [Creating HA clusters with kubeadm](/docs/setup/independent/high-availability/) for instructions on how to do this. The upgrade procedure described here targets clusters created by following those instructions.

{{% /capture %}}

{{% capture prerequisites %}}

Before proceeding:

- You need to have a functional `kubeadm` HA cluster running version 1.11 or higher in order to use the process described here.
- Make sure you read the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md) carefully.
- Note that `kubeadm upgrade` will not touch any of your workloads, only Kubernetes-internal components. As a best practice you should back up anything important to you. For example, any application-level state, such as a database an application might depend on (like MySQL or MongoDB), should be backed up beforehand.
- Read [Upgrading kubeadm clusters from v1.11 to v1.12](/docs/tasks/administer-cluster/kubeadm-upgrade-1-12/) to learn about the relevant prerequisites.

{{% /capture %}}

{{% capture steps %}}

## Preparation for both methods

{{< note >}}
**Note**: All commands in this guide on any control plane or etcd node should be
run as root.
{{< /note >}}

Some preparation is needed prior to starting the upgrade. First upgrade `kubeadm` to the version that matches the version of Kubernetes that you are upgrading to:

```shell
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm && \
apt-mark hold kubeadm
```
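
To confirm which version was installed, you can, for example, run:

```shell
kubeadm version
```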

Run this command to check prerequisites and determine the versions you will receive:

```shell
kubeadm upgrade plan
```

If the prerequisites are met you'll get a summary of the software versions kubeadm will upgrade to, like this:

    Upgrade to the latest stable version:

    COMPONENT            CURRENT   AVAILABLE
    API Server           v1.11.3   v1.12.0
    Controller Manager   v1.11.3   v1.12.0
    Scheduler            v1.11.3   v1.12.0
    Kube Proxy           v1.11.3   v1.12.0
    CoreDNS              1.1.3     1.2.2
    Etcd                 3.2.18    3.2.24

## Stacked control plane nodes

### Upgrading the first control plane node

The following procedure must be applied on a single control plane node.

Before initiating the upgrade with `kubeadm`, `configmap/kubeadm-config` needs to be modified for the current control plane node.

```shell
kubectl get configmap -n kube-system kubeadm-config -o yaml >/tmp/kubeadm-config-cm.yaml
sed -i 's/^\([ \t]*nodeName:\).*/\1 <CURRENT-MASTER-NAME>/' /tmp/kubeadm-config-cm.yaml
```

Open the file in an editor and replace the following values:

- api.advertiseAddress
  - This should be set to the local node's IP address
- etcd.local.extraArgs.advertise-client-urls
  - This should be updated for the local node's IP address
- etcd.local.extraArgs.initial-advertise-peer-urls
  - This should be updated for the local node's IP address
- etcd.local.extraArgs.listen-client-urls
  - This should be updated for the local node's IP address
- etcd.local.extraArgs.listen-peer-urls
  - This should be updated for the local node's IP address
- etcd.local.extraArgs.initial-cluster
  - This should be updated to include the hostname and IP address pairs for each control plane node in the cluster, for example:

    "ip-172-31-92-42=https://172.31.92.42:2380,ip-172-31-89-186=https://172.31.89.186:2380,ip-172-31-90-42=https://172.31.90.42:2380"

An additional argument (`initial-cluster-state: existing`) also needs to be added to etcd.local.extraArgs.
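
For illustration only, after these edits the etcd-related part of the configuration could look like the following sketch, reusing the example hostnames and IP addresses from above (replace them with your own; 2379 is the standard etcd client port and 2380 the peer port):

```yaml
api:
  advertiseAddress: 172.31.92.42
etcd:
  local:
    extraArgs:
      advertise-client-urls: https://172.31.92.42:2379
      initial-advertise-peer-urls: https://172.31.92.42:2380
      listen-client-urls: https://172.31.92.42:2379
      listen-peer-urls: https://172.31.92.42:2380
      initial-cluster: "ip-172-31-92-42=https://172.31.92.42:2380,ip-172-31-89-186=https://172.31.89.186:2380,ip-172-31-90-42=https://172.31.90.42:2380"
      initial-cluster-state: existing
```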

```shell
kubectl apply -f /tmp/kubeadm-config-cm.yaml --force
```

Now the upgrade process can start. Use the target version determined in the preparation step and run the following command (press “y” when prompted):

```shell
kubeadm upgrade apply v<YOUR-CHOSEN-VERSION-HERE>
```

If the operation was successful you’ll get a message like this:

    [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.0". Enjoy!

### Upgrading subsequent control plane nodes

After upgrading the first control plane node, the `kubeadm-config` config map will have been updated from version `v1alpha2` to `v1alpha3`, so the remaining control plane nodes require different modifications than the first one.

```shell
kubectl get configmap -n kube-system kubeadm-config -o yaml >/tmp/kubeadm-config-cm.yaml
```

Open the file in an editor and replace the following values under ClusterConfiguration:

- etcd.local.extraArgs.advertise-client-urls
  - This should be updated for the local node's IP address
- etcd.local.extraArgs.initial-advertise-peer-urls
  - This should be updated for the local node's IP address
- etcd.local.extraArgs.listen-client-urls
  - This should be updated for the local node's IP address
- etcd.local.extraArgs.listen-peer-urls
  - This should be updated for the local node's IP address

Modify the ClusterStatus to add an additional mapping for the current host under apiEndpoints.
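
As a sketch, with the illustrative hostnames used earlier, the resulting `apiEndpoints` section could look similar to this (6443 is the default API server bind port):

```yaml
apiEndpoints:
  ip-172-31-89-186:
    advertiseAddress: 172.31.89.186
    bindPort: 6443
  ip-172-31-92-42:
    advertiseAddress: 172.31.92.42
    bindPort: 6443
```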

Add an annotation for the CRI socket to the current node; for example, to use Docker:

```shell
kubectl annotate node <hostname> kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
```

Now the upgrade process can start. Use the target version determined in the preparation step and run the following command (press “y” when prompted):

```shell
kubeadm upgrade apply v<YOUR-CHOSEN-VERSION-HERE>
```

If the operation was successful you’ll get a message like this:

    [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.0". Enjoy!

## External etcd

### Upgrade each control plane

Get a copy of the kubeadm config used to create this cluster. The config should be the same for every node. The config must exist on every control plane node before the upgrade begins!

```shell
# on each control plane node
kubectl get configmap -n kube-system kubeadm-config -o jsonpath={.data.MasterConfiguration} > /tmp/kubeadm-config.yaml
```

Now run the upgrade on each control plane node one at a time.

```shell
kubeadm upgrade apply v1.12.0 --config /tmp/kubeadm-config.yaml
```

### Upgrade etcd

Kubernetes v1.11 to v1.12 only changed the patch version of etcd, from v3.2.18 to v3.2.24. This is a rolling upgrade with no downtime, because both versions can run in the same cluster.

On the first host, as root, modify the etcd manifest with this command:

```shell
sed -i 's/3.2.18/3.2.24/' /etc/kubernetes/manifests/etcd.yaml
```

Wait for the etcd process to reconnect. There will be warnings in the other etcd members' logs; this is expected.

Repeat the process on the other etcd hosts, replacing the version and waiting for the process to come back.
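
If you want to confirm which version each member is now running, one option is `etcdctl`; the endpoint and certificate paths below are assumptions based on the kubeadm HA setup guide and may need adjusting for your environment:

```shell
# Illustrative only: ask the local member for its status, which includes the version.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/peer.crt \
  --key=/etc/kubernetes/pki/etcd/peer.key \
  endpoint status
```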

## Post control plane upgrade steps

### Manually upgrade your CNI provider

Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow. Check the [addons](/docs/concepts/cluster-administration/addons/) page to find your CNI provider and see if there are additional upgrade steps necessary.

### Update kubelet and kubectl packages

At this point all the static pod manifests in your cluster, for example the API Server, Controller Manager, Scheduler, and Kube Proxy, have been upgraded. However, the base software, such as the `kubelet` and `kubectl` installed on your nodes' OS, is still at the old version. To upgrade the base packages, install the new versions and restart the services on all nodes, one node at a time:

```shell
# use your distro's package manager, e.g. 'apt-get' on Debian-based systems
# for the versions stick to kubeadm's output (see above)
apt-mark unhold kubelet kubectl && \
apt-get update && \
apt-get install kubelet=<NEW-K8S-VERSION> kubectl=<NEW-K8S-VERSION> && \
apt-mark hold kubelet kubectl && \
systemctl restart kubelet
```

In this example a _deb_-based system is assumed and `apt-get` is used for installing the upgraded software. On _rpm_-based systems it will be `yum install <PACKAGE>-<NEW-K8S-VERSION>` for all packages (note the `-` instead of `=` in the version specifier).
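
For example, a rough rpm-based equivalent of the apt-get block above (version strings are placeholders) would be:

```shell
yum install -y kubelet-<NEW-K8S-VERSION> kubectl-<NEW-K8S-VERSION> && \
systemctl restart kubelet
```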

Now the new version of the `kubelet` should be running on the host. Verify this using the following command on the respective host:
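
```shell
# On a systemd-based distro, check that the service is active:
systemctl status kubelet
# The kubelet binary reports its own version:
kubelet --version
```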

If the upgrade fails, the situation afterwards depends on the phase in which things went wrong:

1. You can run `kubeadm upgrade apply` again, as it is idempotent and should eventually make sure the actual state is the desired state you are declaring. You can also run `kubeadm upgrade apply` with `--force` to re-apply the same version (`x.x.x --> x.x.x`), which can be used to recover from a bad state.

2. If `kubeadm upgrade apply` on one of the other control plane nodes failed, you still have a working, upgraded cluster, but with those nodes in a somewhat undefined condition. You will have to find out what went wrong and join them manually.

{{% /capture %}}