Enhance ErrReasonPVNotExist in volumebinding scheduler plugin #105196

Merged

Conversation

@yibozhuang yibozhuang commented Sep 22, 2021

What type of PR is this?

/kind cleanup

What this PR does / why we need it:

This change makes the message clearer for the case where
a pod's PVC(s) are bound to PV(s) that no longer exist and
the scheduler rejects nodes for that reason.

Without this change, the FailedScheduling error message would look like:

0/2 nodes are available: 2 pvc(s) bound to non-existent pv(s)

With this change, the message would look like:

0/2 nodes are available: 2 node(s) unavailable due to one or more pvc(s) bound to non-existent pv(s)

On larger clusters where nodes are unavailable for many
different reasons, the current message can mislead users into
thinking that many PVCs have lost their PVs, when in fact a
single PVC bound to a missing PV is enough for the scheduler
to reject many nodes, as the sketch below illustrates.
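
For illustration, here is a minimal Go sketch of that roll-up (not the actual kubernetes/kubernetes code; summarize and its inputs are invented for this example). Each rejected node contributes its failure reason once, so the count in the final message is a node count, not a PVC count:

package main

import (
	"fmt"
	"sort"
	"strings"
)

// summarize is a hypothetical stand-in for how the scheduler's fit error
// rolls per-node failure reasons up into one FailedScheduling message.
func summarize(totalNodes int, reasonPerNode []string) string {
	counts := map[string]int{}
	for _, r := range reasonPerNode {
		counts[r]++ // one increment per rejected node, not per PVC
	}
	parts := make([]string, 0, len(counts))
	for reason, n := range counts {
		parts = append(parts, fmt.Sprintf("%d %s", n, reason))
	}
	sort.Strings(parts) // deterministic ordering for the event message
	return fmt.Sprintf("0/%d nodes are available: %s", totalNodes, strings.Join(parts, ", "))
}

func main() {
	// A single PVC bound to a deleted PV rejects every candidate node.
	reason := "node(s) unavailable due to one or more pvc(s) bound to non-existent pv(s)"
	fmt.Println(summarize(2, []string{reason, reason}))
	// Output: 0/2 nodes are available: 2 node(s) unavailable due to one or more pvc(s) bound to non-existent pv(s)
}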

Signed-off-by: Yibo Zhuang <yibzhuang@gmail.com>

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?

Enhanced error message for nodes not selected by scheduler due to pod's PersistentVolumeClaim(s) bound to PersistentVolume(s) that do not exist.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot commented Sep 22, 2021

@yibozhuang: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot commented Sep 22, 2021

Welcome @yibozhuang!

It looks like this is your first PR to kubernetes/kubernetes 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/kubernetes has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot commented Sep 22, 2021

Hi @yibozhuang. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ddebroy ddebroy commented Sep 22, 2021

/ok-to-test

@ddebroy ddebroy commented Sep 23, 2021

/assign @saad-ali

@ddebroy ddebroy commented Sep 23, 2021

The fix to correctly attribute the count to nodes rather than PVCs looks good.

/lgtm

- // ErrReasonPVNotExist is used when a PVC can't find the bound persistent volumes
- ErrReasonPVNotExist = "pvc(s) bound to non-existent pv(s)"
+ // ErrReasonPVNotExist is used when a pod has PVC(s) but can't find the bound persistent volume(s)
+ ErrReasonPVNotExist = "node(s) due to pvc(s) bound to non-existent pv(s)"

@ahg-g ahg-g Sep 23, 2021

Isn't this a pod-level error rather than a node-level one? All nodes would be ineligible if the pod's PVC were bound to a non-existent PV.

@yibozhuang yibozhuang Sep 23, 2021

Right, but FindPodVolumes is invoked for every node, and the count of unavailable nodes accumulates because each node is ineligible when the PV is not found.

This change just cleans up the message to indicate that the nodes were not selected for this reason. The previous message, something like:

0/2 nodes are available: 2 pvc(s) bound to non-existent pv(s)

can be confusing, since it is not 2 PVCs that are bound to non-existent PVs, but rather 2 nodes that are unavailable for scheduling due to the pod's PVC(s) being bound to non-existent PV(s).
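
A minimal sketch of that per-node accumulation (findPodVolumes and its signature are hypothetical stand-ins; the real FindPodVolumes in the volumebinding binder takes different arguments):

package main

import "fmt"

// Mirrors the updated constant under review.
const ErrReasonPVNotExist = "node(s) unavailable due to one or more pvc(s) bound to non-existent pv(s)"

// findPodVolumes stands in for the binder check that runs once per candidate
// node: one PVC pointing at a missing PV fails the check on every node.
func findPodVolumes(node string, pvcHasPV map[string]bool) error {
	for pvc, hasPV := range pvcHasPV {
		if !hasPV {
			return fmt.Errorf("node %s: pvc %q is bound to a non-existent pv", node, pvc)
		}
	}
	return nil
}

func main() {
	pvcs := map[string]bool{"data-pvc": false} // a single PVC whose PV was deleted
	nodes := []string{"node-a", "node-b"}
	failures := 0
	for _, n := range nodes {
		if err := findPodVolumes(n, pvcs); err != nil {
			failures++ // accumulates once per node: this is the "2" in the event
		}
	}
	fmt.Printf("0/%d nodes are available: %d %s\n", len(nodes), failures, ErrReasonPVNotExist)
}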

@saad-ali saad-ali Sep 27, 2021

How about?

Suggested change
ErrReasonPVNotExist = "node(s) due to pvc(s) bound to non-existent pv(s)"
ErrReasonPVNotExist = "node(s) unavailable due to one or more pvc(s) bound to non-existent pv(s)"

@yibozhuang yibozhuang Sep 27, 2021

Sure, updated

@yibozhuang yibozhuang force-pushed the improve-scheduler-pv-not-exist-err branch from ecdf96b to eb5bbdd Sep 27, 2021
@k8s-ci-robot k8s-ci-robot removed the lgtm label Sep 27, 2021
This change makes the message clearer for the case where
a pod's PVC(s) are bound to PV(s) that no longer exist and
the scheduler rejects nodes for that reason.

Previous error message would look like:
0/2 nodes are available: 2 pvc(s) bound to non-existent pv(s)

Updated message looks like:
0/2 nodes are available: 2 node(s) unavailable due to one or more
pvc(s) bound to non-existent pv(s)

On larger clusters where nodes are unavailable for many
different reasons, the old message could mislead users into
thinking that many PVCs had lost their PVs, when in fact a
single PVC bound to a missing PV is enough for the scheduler
to reject many nodes.

Signed-off-by: Yibo Zhuang <yibzhuang@gmail.com>
@yibozhuang yibozhuang force-pushed the improve-scheduler-pv-not-exist-err branch from eb5bbdd to 603a4e1 Sep 27, 2021

@saad-ali saad-ali left a comment

Thanks

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot commented Sep 27, 2021

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: saad-ali, yibozhuang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@yibozhuang yibozhuang commented Sep 28, 2021

/retest

@k8s-ci-robot k8s-ci-robot merged commit 16fdb2f into kubernetes:master Sep 28, 2021
14 checks passed
@k8s-ci-robot k8s-ci-robot added this to the v1.23 milestone Sep 28, 2021