

## Cross-version cluster upgrade with kubeadm

Before upgrading, you need to understand how the versions of the various components relate to each other.
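Before planning the jump it also helps to confirm what is installed right now and which package versions are available. The commands below are a minimal sketch (assuming an Ubuntu node with the Kubernetes apt repository already configured), not part of the original procedure:
```shell
# Versions of the three components currently installed on this node
kubeadm version -o short
kubelet --version
kubectl version --short

# Package versions available from the configured Kubernetes apt repository
apt-cache madison kubeadm
```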

7. Upgrade the control plane components, including etcd (this step pulls the component images and renews the related certificates, so the node needs unrestricted access to the image registry)
```shell
$ sudo kubeadm upgrade apply v1.17.2
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.17.2"
[upgrade/versions] Cluster version: v1.16.3
[upgrade/versions] kubeadm version: v1.17.2
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.17.2"...
Static pod: kube-apiserver-yuxiaobo-master hash: 0dd2684decdad0c71291df8a0eab9d9f
Static pod: kube-controller-manager-yuxiaobo-master hash: ddf4d7dd458032b91c6abc5f65ef3eb3
Static pod: kube-scheduler-yuxiaobo-master hash: 4e1bd6e5b41d60d131353157588ab020
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-yuxiaobo-master hash: c1a31e9fc74a43fc862192d921baf512
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-02-10-10-48-49/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-yuxiaobo-master hash: c1a31e9fc74a43fc862192d921baf512
Static pod: etcd-yuxiaobo-master hash: d5a03d2c703bca8f245eae11740b6419
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests049444808"
W0210 10:49:13.185153 23047 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-02-10-10-48-49/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-yuxiaobo-master hash: 0dd2684decdad0c71291df8a0eab9d9f
Static pod: kube-apiserver-yuxiaobo-master hash: 2e3508621802e44482ef38c60d2f1101
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-02-10-10-48-49/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-yuxiaobo-master hash: ddf4d7dd458032b91c6abc5f65ef3eb3
Static pod: kube-controller-manager-yuxiaobo-master hash: 104ca85f84ee2ddc289732d63c86740a
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-02-10-10-48-49/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-yuxiaobo-master hash: 4e1bd6e5b41d60d131353157588ab020
Static pod: kube-scheduler-yuxiaobo-master hash: 9c994ea62a2d8d6f1bb7498f10aa6fcf
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons]: Migrating CoreDNS Corefile
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.17.2". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
```
The last two lines of the output above show that the cluster was successfully upgraded to v1.17.2.
`kubeadm upgrade apply` performs the following actions:
- Checks whether the cluster can be upgraded:
8. Check the client and server versions reported by kubectl to verify the upgrade:
```shell
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
```
Here you can see that the server-side version has already been upgraded to v1.17.2.
Check the status of the components on the control plane node:
```shell
$ kubectl get componentstatuses
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
```
9. Check whether the plugin provided by your CNI provider needs to be upgraded
The network plugin used in this example is [calico](https://docs.projectcalico.org/introduction/).
```shell
$ kubectl get pods -n kube-system
...
$ kubectl describe pod calico-node-jcbfl -n kube-system
...
Image: calico/cni:v3.10.1
...
```
The calico image currently in use is v3.10.1, while the latest release is v3.12.0; if needed, upgrade it by following the [official release notes](https://docs.projectcalico.org/release-notes/).
Upgrade instructions differ between Container Network Interface (CNI) providers. Check the [addons](https://kubernetes.io/docs/concepts/cluster-administration/addons/) page to find your CNI provider and see whether additional upgrade steps are required. If the CNI provider runs as a DaemonSet, this step is not needed on the other control plane nodes.
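As a rough, hedged illustration (not part of the original steps, and the manifest URL is an assumption based on the calico v3.12 documentation), upgrading calico usually amounts to applying the manifest for the target release:
```shell
# Hypothetical example: apply the calico manifest for the target release
# Check the calico docs for the URL that matches your datastore and installation type
kubectl apply -f https://docs.projectcalico.org/v3.12/manifests/calico.yaml
```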
10. Uncordon the control plane node so that it becomes schedulable again
```shell
$ kubectl uncordon <cp-node-name>
```
11. Upgrade kubelet and kubectl on this control plane node (requires root privileges)
```shell
# replace x in 1.17.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.17.x-00 kubectl=1.17.x-00 && \
apt-mark hold kubelet kubectl
```
12. Restart the kubelet
```shell
$ sudo systemctl restart kubelet
```
13. Check whether the control plane (master) node was upgraded successfully
```shell
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready <none> 67d v1.16.3
yuxiaobo-master Ready master 68d v1.17.2
```
At this point, the upgrade of the primary control plane node is complete.
### Upgrading the other control plane nodes
1. The steps are the same as on the primary control plane node, but use the following command to upgrade
```shell
$ sudo kubeadm upgrade node experimental-control-plane
```
instead of `sudo kubeadm upgrade apply`; running `sudo kubeadm upgrade plan` is also not required.
`kubeadm upgrade node` does the following on the other control plane nodes:
- Fetches the kubeadm `ClusterConfiguration` from the cluster.
- Optionally backs up the kube-apiserver certificates.
- Upgrades the static Pod manifests of the core control plane components.
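As an optional check (a sketch based on paths and names that appear in the kubeadm output earlier in this document), you can inspect the ClusterConfiguration it reads and look at the refreshed static Pod manifests and the backups kubeadm leaves behind:
```shell
# The ClusterConfiguration kubeadm reads from the cluster
kubectl -n kube-system get cm kubeadm-config -o yaml

# Upgraded static Pod manifests and the backups kubeadm keeps
ls /etc/kubernetes/manifests/
ls /etc/kubernetes/tmp/
```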
2. kubelet and kubectl also need to be upgraded on the other control plane nodes (requires root privileges)
```shell
# replace x in 1.17.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.17.x-00 kubectl=1.17.x-00 && \
apt-mark hold kubelet kubectl
```
3. Restart the kubelet
```shell
$ sudo systemctl restart kubelet
```
### Upgrading the worker nodes
Without sacrificing the minimum capacity required to run your workloads, the upgrade of the worker nodes should be carried out one node at a time, or a few nodes at a time.
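Purely as an illustrative skeleton (the node name and the loop are assumptions, not part of the original procedure), that per-node cadence can be expressed as: drain the node, run the upgrade steps below against it, then uncordon it:
```shell
# Skeleton only: process one worker at a time
for node in k8s-node1; do
  kubectl drain "$node" --ignore-daemonsets
  # ...run steps 1-5 below on/for this node...
  kubectl uncordon "$node"
done
```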
1. Upgrade kubeadm on all worker nodes (requires root privileges)
```shell
# replace x in 1.17.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.17.x-00 && \
apt-mark hold kubeadm
```
2. Mark the worker node as unschedulable to put it into maintenance mode
```shell
# <node-to-drain> is the name of the worker node being drained; run this command on the master node
$ kubectl drain <node-to-drain> --ignore-daemonsets
```
3. Upgrade the kubelet configuration
```shell
# run this command on the master node
$ sudo kubeadm upgrade node config --kubelet-version v1.17.2
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
...
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
```
`kubeadm upgrade node` performs the following actions on a worker node:
- Fetches the kubeadm `ClusterConfiguration` from the cluster.
- Upgrades the worker node's kubelet configuration.
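As an optional sanity check (a sketch; the file path and ConfigMap name are taken from the kubeadm output shown earlier in this document), you can confirm the refreshed kubelet configuration:
```shell
# On the worker node: the configuration file kubeadm wrote for the kubelet
sudo ls -l /var/lib/kubelet/config.yaml

# On the master: the kubelet ConfigMap for the target minor version
kubectl -n kube-system get cm kubelet-config-1.17 -o yaml
```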
4. Upgrade kubelet and kubectl on all worker nodes (requires root privileges)
```shell
# replace x in 1.17.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.17.x-00 kubectl=1.17.x-00 && \
apt-mark hold kubelet kubectl
```
5. Restart the kubelet
```shell
$ sudo systemctl restart kubelet
```
6. After the upgrade, mark the worker node as schedulable again so that it rejoins the cluster
```shell
# <node-to-drain> is the name of the worker node; run this command on the master node
$ kubectl uncordon <node-to-drain>
```
After all worker nodes have been upgraded, check the status of every node in the cluster:
```shell
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready <none> 67d v1.17.2
yuxiaobo-master Ready master 68d v1.17.2
```
If every node shows Ready and its version has been upgraded to v1.17.2, the upgrade of the whole cluster is complete.
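In addition to `kubectl get nodes`, a quick hedged check (standard kubectl queries, not taken from the original text) can confirm that every kubelet and the kube-proxy addon were rolled forward:
```shell
# Kubelet version reported by every node
kubectl get nodes -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion

# Image used by the kube-proxy DaemonSet that kubeadm upgraded
kubectl -n kube-system get ds kube-proxy -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
```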
2. When performing a version upgrade, specify the target version for each component explicitly so that all component versions stay consistent.
When upgrading between older versions (for example from v1.13.x to v1.14.x), Ubuntu may automatically upgrade the components straight to the current latest version (this example upgrades from v1.16.3 to the latest v1.17.2, so it is not affected). If that happens and the kubeadm and kubelet versions become inconsistent, remove the current kubeadm and kubelet, reinstall the versions that match the target release, and then continue with the remaining upgrade steps.
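A hedged example of what that reinstall might look like on an Ubuntu node (the exact package versions are an assumption; adjust them to the release you are targeting):
```shell
# Remove the mismatched packages, then reinstall and pin the intended versions
sudo apt-mark unhold kubeadm kubelet
sudo apt-get remove -y kubeadm kubelet
sudo apt-get update
sudo apt-get install -y kubeadm=1.17.2-00 kubelet=1.17.2-00
sudo apt-mark hold kubeadm kubelet
```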
