Allow Kubernetes downgrade when restoring etcd snapshot #22232
Ran into this today. Wanted to test an upgrade, but the upgrade failed due to a bug with kube-dns, and this did not work properly for the first few seconds.
Yes, this would be a great enhancement!
Distilling this down a bit:
Since the etcd snapshot will have Kubernetes version data tied to it going forward, users should be able to see which version of k8s the backup was taken at.
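To illustrate the idea of version data tied to a snapshot, here is a minimal sketch that reads the Kubernetes version back out of snapshot metadata. The JSON field names (`name`, `kubernetesVersion`) are assumptions for illustration, not Rancher's actual on-disk format:

```python
import json

def read_snapshot_k8s_version(metadata_json: str) -> str:
    """Return the Kubernetes version recorded when the snapshot was taken.

    Assumes a hypothetical metadata document stored alongside the snapshot;
    the schema here is illustrative only.
    """
    meta = json.loads(metadata_json)
    return meta["kubernetesVersion"]

# Hypothetical metadata for a snapshot taken before an upgrade.
example = json.dumps({
    "name": "etcd-snapshot-2019-09-01",
    "kubernetesVersion": "v1.14.0",
})
print(read_snapshot_k8s_version(example))  # v1.14.0
```

With something like this recorded at snapshot time, the UI could display the version next to each backup so users know what a restore would roll back to.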
Tested with the master-head branch with local backup. Regression tests were also performed.
I tested the feature in 2.4 master-head with S3 backup enabled, covering the same test combinations as commented by @soumyalj in #22232 (comment), including the same regression tests.
Rancher version 2.4, commit id: 78ee11a
Found issue #25744 while creating a cluster as a standard user; that will be tracked separately. cc @soumyalj
Tested with 2.4 master-head (2a7415a2a190).
What kind of request is this (question/bug/enhancement/feature request):
Feature request
Description
In the Kubernetes upgrade documentation at https://rancher.com/docs/rancher/v2.x/en/cluster-admin/editing-clusters/#upgrading-kubernetes, it is recommended to back up (take an etcd snapshot) before performing an upgrade. This implies that if you want to revert an upgrade, you should be able to restore the etcd snapshot and return your Kubernetes cluster to its pre-upgrade version. However, restoring the etcd snapshot does not actually revert the upgrade; it only restores the etcd data, and Rancher will still attempt to upgrade the Kubernetes cluster.
Users should be able to revert an upgrade and return their cluster to the Kubernetes version that was running when the etcd snapshot was taken. For example, a user on Kubernetes v1.14.0 takes an etcd snapshot, then upgrades to v1.14.5. The user should be able to restore the etcd snapshot and downgrade Kubernetes to v1.14.0. This would involve reverting all Kubernetes components (kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, and kubelet) to the previous version.
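The downgrade decision in the example above boils down to comparing the snapshot's recorded Kubernetes version against the cluster's current version. A minimal sketch of that check (the function names are hypothetical, not Rancher's code; version strings follow the `vMAJOR.MINOR.PATCH` form used in the issue):

```python
def parse_version(v: str) -> tuple:
    """Parse 'v1.14.5' into (1, 14, 5), ignoring any build suffix
    such as '-rancher1'."""
    core = v.lstrip("v").split("-")[0]
    return tuple(int(part) for part in core.split("."))

def restore_is_downgrade(snapshot_version: str, current_version: str) -> bool:
    """True if restoring this snapshot would move the cluster to an
    older Kubernetes version, i.e. all components (kube-apiserver,
    kube-controller-manager, kube-proxy, kube-scheduler, kubelet)
    would need to be reverted."""
    return parse_version(snapshot_version) < parse_version(current_version)

# The scenario from the issue: snapshot at v1.14.0, cluster now on v1.14.5.
print(restore_is_downgrade("v1.14.0", "v1.14.5"))  # True
```

Tuple comparison gives the correct lexicographic ordering across major, minor, and patch components, which is why the versions are parsed into integer tuples rather than compared as strings.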