Upgrading Kubernetes #75
Yes. Will release update later today.
Will we be able to update an existing cluster?
@owenmorgan upgrading the k8s version requires deleting the etcd cluster, where all the Kubernetes state is stored on ephemeral disk. I have a forked version of tack where etcd state is persisted on an EBS volume, and that works beautifully. Would anybody be interested in a PR to contribute that back to tack (@wellsie)? Let me know and I will clean up and submit. In the meantime, you can use a simple workaround to recover from losing the etcd cluster: before upgrading, you would run this code snippet, which lets you recover all cluster state (including PVs). ELBs will be regenerated, so update any DNS records accordingly.
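The snippet referenced above is not reproduced in this thread. Purely as an illustration, here is a hedged sketch of the kind of pre-upgrade export it describes; the resource lists and file names are assumptions, not the original code:

```sh
# Hypothetical reconstruction, not the original snippet from the comment above:
# export cluster objects to YAML before the upgrade so they can be re-created
# once the etcd cluster (and with it all API state) has been rebuilt.
kubectl get namespaces -o yaml > namespaces.yaml
kubectl get persistentvolumes -o yaml > pvs.yaml
kubectl get persistentvolumeclaims --all-namespaces -o yaml > pvcs.yaml
kubectl get deployments,daemonsets,services,configmaps,secrets \
  --all-namespaces -o yaml > workloads.yaml

# After the new cluster is up, re-create the objects in the same order, e.g.:
#   kubectl create -f namespaces.yaml -f pvs.yaml -f pvcs.yaml -f workloads.yaml
# Note: exported manifests include fields such as metadata.resourceVersion,
# uid and status that may need to be stripped before re-creating them.
```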
@owenmorgan looks like it was patched in 8f2a62e
Oh, one other thing you'll need to do when you upgrade: taint or manually update the S3 bucket so that the files in manifests/etc.tar point to the version of k8s you want to use. Otherwise the update won't actually take.
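As a rough illustration of the "taint" route, assuming the S3-hosted assets are managed by Terraform as in tack; the resource address below is an assumption, not tack's actual state layout:

```sh
# Sketch only: force Terraform to regenerate the S3-hosted assets so they
# reference the new Kubernetes version. Look up the real resource address
# with `terraform state list` first.
terraform state list | grep -i s3
terraform taint aws_s3_bucket_object.etc-tar   # illustrative address
terraform apply
```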
Is the backup/restore still necessary, @adambom?
I recommend upgrading the cluster manually. I will write up the procedure later this week; in the meantime, here is the basic process:

- update kubelet.service on worker nodes
- update kubelet.service on etcd/apiserver nodes (repeat the above procedure for the master (etcd, apiserver) nodes)
- update the Kubernetes version in the manifests on the etcd/apiserver nodes

I'm looking into ways to automate this. It hasn't been a priority since the procedure is fairly straightforward. Note that running pods should continue to run during this procedure. (A rough sketch of the per-node steps follows below.)
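A minimal sketch of what the per-node step might look like, assuming a CoreOS node where the kubelet version is pinned by a variable in the systemd unit; the variable name, file paths, and version string are illustrative, not tack's exact layout:

```sh
# Sketch of updating kubelet.service on one node (names are assumptions).
NEW_VERSION=v1.4.0_coreos.0

# 1. Point the unit at the new kubelet/hyperkube release.
sudo sed -i "s/KUBELET_VERSION=.*/KUBELET_VERSION=${NEW_VERSION}/" \
  /etc/systemd/system/kubelet.service

# 2. Reload systemd and restart the kubelet; running pods keep running.
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# 3. On the etcd/apiserver nodes, also bump the image tag in the static
#    manifests, e.g.:
#    sudo sed -i "s/hyperkube:.*/hyperkube:${NEW_VERSION}/" /etc/kubernetes/manifests/*.yaml
```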
Would this procedure work: https://github.com/coreos/coreos-baremetal/blob/master/Documentation/bootkube-upgrades.md ?
If you ever lose your etcd cluster for whatever reason, or if you should ever need to restart it, you should be able to recover your state. Mentioned in this issue: kz8s#75
@wellsie any update on the automated Kubernetes upgrade? It is fine to run those ^^^ commands manually if you have a small cluster, but with a big one it would be a headache :)
OK, have checked it out, will update.
@rimusz, it is because tack uses user-data, which runs every time the machine powers up. I replaced user-data with cloud-init in my environment; if everything works fine, I will submit a PR.
You can use this procedure:

- update worker nodes (see the drain/uncordon sketch below)
- update master nodes

@wellsie, please validate this.
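For context, a generic sketch of rolling a single worker node with standard kubectl commands; these are not necessarily the exact steps the procedure above intends, and the node name is illustrative:

```sh
# Roll one worker node: evict its pods, update it, then re-enable scheduling.
NODE=ip-10-0-1-23.ec2.internal   # illustrative node name

kubectl drain "${NODE}" --ignore-daemonsets --force   # evict pods from the node
# ...update kubelet.service on the node and restart it (see the earlier sketch)...
kubectl uncordon "${NODE}"                            # allow scheduling again
```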
@yagonobre thanks for your solution. It looks good, but involves way too much manual fiddling, especially with the …
Why is it not possible to replace an etcd node and let it re-sync with the cluster?
@rokka-n I do it |
Are you open to incorporating automated Kubernetes upgrades? If not, is the purpose of this project a one-time setup, after which you don't need this project anymore?
Yes open to automated upgrades 👍
Excellent! However, without automated upgrades, is this intended to be a single-use project?
Do you have plans to use k8s 1.4.0? If not, how can I upgrade my version?