
feature: support reschedule disk with resource exhausted error in del… #890

Conversation

@mowangdk (Contributor) commented Oct 17, 2023

What type of PR is this?

/kind feature

What this PR does / why we need it:

Support rescheduling the disk by returning a ResourceExhausted error during delayed provisioning.
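For context, the mechanism relies on the CSI error-code contract with the external-provisioner: when CreateVolume fails with codes.ResourceExhausted under delayed binding (WaitForFirstConsumer), the provisioner can drop the selected-node annotation from the PVC so the scheduler tries another node. The sketch below only illustrates that idea; the function and variable names are hypothetical and are not the driver's actual code.

```go
// Illustrative sketch only (hypothetical names, not the driver's code):
// returning codes.ResourceExhausted from CreateVolume asks the
// external-provisioner to reschedule instead of failing permanently.
package sketch

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// provisionOnSelectedNode checks whether the node chosen by the scheduler
// can host any of the requested disk types.
func provisionOnSelectedNode(nodeName string, supportedTypes, requestedTypes []string) error {
	for _, req := range requestedTypes {
		for _, sup := range supportedTypes {
			if req == sup {
				return nil // the selected node can provision this disk type
			}
		}
	}
	// No overlap: ResourceExhausted (rather than InvalidArgument or Internal)
	// signals that the request might succeed on a different node, so the
	// provisioner can clear the selected node and let the scheduler retry.
	return status.Errorf(codes.ResourceExhausted,
		"node %s supports %v, requested %v; start to reschedule",
		nodeName, supportedTypes, requestedTypes)
}
```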

Which issue(s) this PR fixes:

None

Special notes for your reviewer:

None

Does this PR introduce a user-facing change?

None

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

None

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Oct 17, 2023
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mowangdk

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Oct 17, 2023
@mowangdk mowangdk force-pushed the feature/support_reschedule_pvc_prebind branch from 6e3a796 to ee8b39d on October 17, 2023 13:14
@huww98 (Contributor) left a comment


FYI, I have a private branch that uses topology to constrain the disk type; it is waiting for its dependencies to be merged. That way, we can avoid querying the APIServer for "node-selected".

Once I open that as a PR, it will overwrite the changes here.
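For readers unfamiliar with the topology approach: the CSI spec already delivers the selected node's topology segments to CreateVolume via AccessibilityRequirements, so a disk-type constraint published as a topology label would not need a Node lookup from the APIServer. The sketch below is a rough illustration under that assumption, not the private branch mentioned above; the segment key is invented.

```go
// Hypothetical illustration: read a disk-type constraint from the topology
// segments in the CreateVolume request instead of querying the APIServer.
// The segment key below is made up for this example.
package sketch

import (
	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

const exampleDiskTypeKey = "example.csi.alibabacloud.com/disktype"

// diskTypesFromTopology collects the disk types advertised by the preferred
// (i.e. selected-node) topology segments of the request.
func diskTypesFromTopology(req *csi.CreateVolumeRequest) []string {
	var types []string
	if ar := req.GetAccessibilityRequirements(); ar != nil {
		for _, topo := range ar.GetPreferred() {
			if v, ok := topo.GetSegments()[exampleDiskTypeKey]; ok {
				types = append(types, v)
			}
		}
	}
	return types
}
```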

@@ -969,7 +969,7 @@ func getDiskType(diskVol *diskVolumeArgs) ([]string, []string, error) {
     nodeInfo, err := client.CoreV1().Nodes().Get(context.Background(), diskVol.NodeSelected, metav1.GetOptions{})
     if err != nil {
         log.Log.Infof("getDiskType: failed to get node labels: %v", err)
-        goto cusDiskType
+        return nil, nil, status.Errorf(codes.ResourceExhausted, "CreateVolume:: get node info by name: %s failed with err: %v, start to reschedule", diskVol.NodeSelected, err)
Contributor

How about returning ResourceExhausted only if we get a NotFound error, and returning an Internal error for other errors, to avoid a possible infinite rescheduling loop?

Contributor Author

The retry interval follows the backoff algorithm and has little effect on the CSI driver, but it still has a side effect on the scheduler, so I'll modify it.
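A minimal sketch of the modification being discussed, assuming the standard client-go and apimachinery helpers (the wrapper function name is hypothetical):

```go
package sketch

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeLabelsOrReschedule fetches the selected node's labels, triggering a
// reschedule only when the node is genuinely gone.
func nodeLabelsOrReschedule(client kubernetes.Interface, nodeName string) (map[string]string, error) {
	node, err := client.CoreV1().Nodes().Get(context.Background(), nodeName, metav1.GetOptions{})
	if err != nil {
		if apierrors.IsNotFound(err) {
			// The node no longer exists: ResourceExhausted lets the
			// external-provisioner clear the selected node and reschedule.
			return nil, status.Errorf(codes.ResourceExhausted,
				"CreateVolume:: node %s not found: %v, start to reschedule", nodeName, err)
		}
		// Other errors (e.g. APIServer timeouts) are reported as Internal so
		// they are retried with backoff instead of forcing a reschedule loop.
		return nil, status.Errorf(codes.Internal,
			"CreateVolume:: get node %s failed: %v", nodeName, err)
	}
	return node.Labels, nil
}
```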

 provisionDiskTypes := []string{}
 allTypes := deleteEmpty(strings.Split(diskVol.Type, ","))
 if len(nodeSupportDiskType) != 0 {
     provisionDiskTypes = intersect(nodeSupportDiskType, allTypes)
     if len(provisionDiskTypes) == 0 {
         log.Log.Errorf("CreateVolume:: node(%s) support type: [%v] is incompatible with provision disk type: [%s]", diskVol.NodeSelected, nodeSupportDiskType, allTypes)
-        return nil, nil, status.Errorf(codes.InvalidArgument, "CreateVolume:: node support type: [%v] is incompatible with provision disk type: [%s]", nodeSupportDiskType, allTypes)
+        return nil, nil, status.Errorf(codes.ResourceExhausted, "CreateVolume:: node support type: [%v] is incompatible with provision disk type: [%s]", nodeSupportDiskType, allTypes)
Contributor

We can only reach this line if the CSI plugin has just been installed and scheduling happens before UpdateNode() has finished. Is that correct?

If that is the case, we can eliminate this kind of race by using topology to constrain the disk type.

Contributor Author

No, there are other scenarios. We'll leave it as it is for now.
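For readers outside the repo, the deleteEmpty and intersect helpers referenced in the hunk above behave roughly as follows; this is an assumed sketch, not the upstream implementation:

```go
package sketch

// deleteEmpty drops empty entries, e.g. those left over after splitting a
// comma-separated disk-type list.
func deleteEmpty(in []string) []string {
	out := make([]string, 0, len(in))
	for _, s := range in {
		if s != "" {
			out = append(out, s)
		}
	}
	return out
}

// intersect returns the entries of b that also appear in a, preserving the
// order of b.
func intersect(a, b []string) []string {
	set := make(map[string]struct{}, len(a))
	for _, s := range a {
		set[s] = struct{}{}
	}
	out := []string{}
	for _, s := range b {
		if _, ok := set[s]; ok {
			out = append(out, s)
		}
	}
	return out
}
```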

@mowangdk (Contributor Author)

> FYI, I have a private branch that uses topology to constrain the disk type; it is waiting for its dependencies to be merged. That way, we can avoid querying the APIServer for "node-selected".
>
> Once I open that as a PR, it will overwrite the changes here.

It's okay. We'll go over your changes when you file your PR.

@mowangdk mowangdk merged commit c4d5c16 into kubernetes-sigs:master Oct 18, 2023
5 of 6 checks passed