
[RFC] Kubernetes should be able to upgrade safely. #53773

Closed
netroby opened this Issue Oct 12, 2017 · 5 comments

@netroby
Contributor

netroby commented Oct 12, 2017

I installed Kubernetes 1.7.3 last month and it worked well. Then Kubernetes 1.8 was released and I ran yum update (my system is CentOS x64). The cluster failed to upgrade: after the kube-* packages were upgraded, the cluster network stopped working (no traffic in or out), and we could not find a solution.
We set up our cluster with kubeadm.
We tried deleting all the nodes and recreating the Kubernetes cluster, but it still failed; the network was not working.

It is hard to find out how to upgrade Kubernetes from an old version to the latest one. There is not enough testing and not enough documentation or guides, so when we hit problems, no one knows how to resolve them. The system also does not clearly show us why an upgrade succeeded or failed.

Kubernetes releases quickly, but upgrades are not well tested. Why not test upgrades more thoroughly?
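
For context, the surprise described here can be avoided by pinning the kube packages so a routine yum update cannot pull in a new minor version. A minimal sketch, assuming the packages came from the official Kubernetes yum repository (repo id "kubernetes"; adjust names and ids to your setup):

    # Option 1: exclude the kube packages from routine updates.
    # In /etc/yum.repos.d/kubernetes.repo, add the line:
    #   exclude=kubelet kubeadm kubectl
    yum update                                        # kube* packages are now skipped
    yum update kubeadm --disableexcludes=kubernetes   # explicit, intentional upgrade

    # Option 2: pin the exact installed versions with the versionlock plugin.
    yum install -y yum-plugin-versionlock
    yum versionlock add kubelet kubeadm kubectl
    yum versionlock list                              # verify the locks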

Member

dixudx commented Oct 12, 2017

@netroby Here is a doc on how to upgrade kubeadm clusters from 1.7 to 1.8.

If you're using another method to provision your cluster, as you described above, a simpler and safer approach is to update the related kube unit files and binaries directly, rather than rebuilding the cluster from scratch.
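
For reference, the kubeadm upgrade flow introduced with 1.8 is roughly the following. A minimal sketch, assuming a kubeadm-provisioned cluster with yum-based packages; the target version (v1.8.0 here) and <node-name> are placeholders to adjust:

    # On the master: upgrade the kubeadm binary first, then drive the upgrade.
    yum update -y kubeadm --disableexcludes=kubernetes
    kubeadm upgrade plan           # lists upgrade targets and runs preflight checks
    kubeadm upgrade apply v1.8.0   # upgrades the control plane components

    # On each worker, one at a time: drain, upgrade the kubelet, bring it back.
    kubectl drain <node-name> --ignore-daemonsets
    yum update -y kubelet --disableexcludes=kubernetes
    systemctl restart kubelet
    kubectl uncordon <node-name>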

Member

dixudx commented Oct 12, 2017

/sig cluster-ops


fejta-bot commented Jan 11, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale


fejta-bot commented Feb 11, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale


fejta-bot commented Mar 13, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/close
