kubeadm: fix the bug that 'kubeadm upgrade' hangs in single node cluster #88434

Merged
merged 1 commit into kubernetes:master on Feb 24, 2020

Conversation

@SataQiu (Member) commented Feb 23, 2020

What type of PR is this?
/kind bug

What this PR does / why we need it:
kubeadm: fix the bug that 'kubeadm upgrade' hangs in single node cluster

Which issue(s) this PR fixes:

Fixes kubernetes/kubeadm#2035

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

kubeadm: fix the bug that 'kubeadm upgrade' hangs in single node cluster

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

NONE

@k8s-ci-robot added the release-note, kind/bug, size/S, cncf-cla: yes, needs-sig, and needs-priority labels Feb 23, 2020
@k8s-ci-robot added the area/kubeadm and sig/cluster-lifecycle labels and removed the needs-sig label Feb 23, 2020
@SataQiu (Member, Author) commented Feb 23, 2020

/test pull-kubernetes-e2e-gce-100-performance

@neolit123 (Member) left a comment

/assign @rajansandeep

// If we're dry-running, we don't need to wait for the new DNS addon to become ready
if !dryRun {
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		// Only nodes that are still schedulable are counted.
		FieldSelector: fields.Set{"spec.unschedulable": "false"}.AsSelector().String(),
	})

@neolit123 (Member) commented on this hunk:

i do not recall observing nodes being unschedulable while the CoreDNS addon is being upgraded during "kubeadm upgrade". is this something you have seen @SataQiu ?

@SataQiu (Member, Author) replied:

According to the guide at https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/#upgrade-the-first-control-plane-node, the kubectl drain <cp-node-name> --ignore-daemonsets command in step 2 marks the control plane node unschedulable.
In a single node cluster, the only node gets marked unschedulable, so the new DNS deployment can never become ready. That's why the upgrade hangs.
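
A minimal sketch of the check described above: list only schedulable nodes and skip waiting for the new CoreDNS Deployment when there are none. The helper name shouldWaitForDNS and the bare kubernetes.Interface parameter are illustrative, not the exact kubeadm code:

package upgrade

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
)

// shouldWaitForDNS (illustrative name) reports whether the upgrade should
// block until the new CoreDNS Deployment becomes ready. On a freshly drained
// single-node cluster there are no schedulable nodes, so the new DNS pods can
// never start and waiting would hang forever.
func shouldWaitForDNS(client kubernetes.Interface) (bool, error) {
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		// Select only nodes that have not been cordoned/drained.
		FieldSelector: fields.Set{"spec.unschedulable": "false"}.AsSelector().String(),
	})
	if err != nil {
		return false, err
	}
	if len(nodes.Items) == 0 {
		fmt.Println("[upgrade] no schedulable nodes found, skipping the wait for the DNS addon")
		return false, nil
	}
	return true, nil
}

In the PR itself this node listing sits behind the !dryRun guard shown in the hunk above, so dry runs never wait either.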

@neolit123 (Member) replied:

that is true.

@chrisohaver (Contributor) left a comment

lgtm

@rajansandeep (Contributor) left a comment

/lgtm

@k8s-ci-robot added the lgtm label Feb 24, 2020
@rajansandeep (Contributor) commented:

Is it worth having a single node cluster upgrade test in https://k8s-testgrid.appspot.com/sig-cluster-lifecycle-kubeadm to catch cases like these? Currently, all the tests are multi-node ones.

@neolit123 (Member) replied:

> Is it worth having a single node cluster upgrade test in https://k8s-testgrid.appspot.com/sig-cluster-lifecycle-kubeadm to catch cases like these? Currently, all the tests are multi-node ones.

given the minimal bandwidth that we have to monitor and update our e2e tests, i'd argue that the maintenance burden will not be justified for having single-CP upgrade tests for all branches.
possibly having only one for the master branch is manageable as a start.

but something to note here is that our current e2e tests do not drain/cordon at all:
https://github.com/kubernetes/kubeadm/blob/adeeff900fc6024f36817468bcf43a455e18e2e2/kinder/pkg/cluster/manager/actions/kubeadm-upgrade.go#L33

so possibly this is something that can be done first.
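
For reference, draining starts by marking the node unschedulable, the same spec.unschedulable field the selector in this PR keys on. Below is a hedged sketch of that cordon step via client-go; the function name cordonNode is illustrative, and this is not how the kinder action itself is implemented:

package upgrade

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// cordonNode marks a node unschedulable, which is what `kubectl cordon` does;
// a full drain additionally evicts the pods that are running on the node.
func cordonNode(client kubernetes.Interface, nodeName string) error {
	patch := []byte(`{"spec":{"unschedulable":true}}`)
	_, err := client.CoreV1().Nodes().Patch(context.TODO(), nodeName,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}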

/approve

@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: neolit123, SataQiu

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label Feb 24, 2020
@k8s-ci-robot k8s-ci-robot merged commit b68f869 into kubernetes:master Feb 24, 2020
@k8s-ci-robot k8s-ci-robot added this to the v1.18 milestone Feb 24, 2020
Labels
approved - Indicates a PR has been approved by an approver from all required OWNERS files.
area/kubeadm
cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
kind/bug - Categorizes issue or PR as related to a bug.
lgtm - "Looks good to me", indicates that a PR is ready to be merged.
needs-priority - Indicates a PR lacks a `priority/foo` label and requires one.
release-note - Denotes a PR that will be considered when it comes time to generate release notes.
sig/cluster-lifecycle - Categorizes an issue or PR as relevant to SIG Cluster Lifecycle.
size/S - Denotes a PR that changes 10-29 lines, ignoring generated files.
Projects
None yet
5 participants