
fix: retry detach azure disk issue #78700

Merged 1 commit into kubernetes:master on Jun 7, 2019

Conversation

@andyzhangx (Member) commented Jun 4, 2019

What type of PR is this?
/kind bug

What this PR does / why we need it:
When a <200, error> result is returned after a detach disk operation, #78298 retries the detach. The retry can never succeed, however, because the "detaching" disk is no longer present in the returned data disk list. This PR changes the logic so that, when detaching a disk that is not found in that list, it only updates the VM.
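
For illustration, here is a minimal, self-contained Go sketch of that idea (not the actual diff and not the real Azure SDK types): filter the disk out of the VM's data-disk list, log rather than fail when the disk is already gone, and proceed with the VM update either way. The `dataDisk` type and the `filterDetachedDisk`/`detachDisk` helpers are made up for this example.

```go
package main

import (
	"fmt"
	"log"
	"strings"
)

// dataDisk is a stand-in for the data-disk entries returned by Azure;
// only the fields needed for the illustration are modelled here.
type dataDisk struct {
	Name          string
	ManagedDiskID string
}

// filterDetachedDisk removes the disk matching diskName or diskURI from the
// list and reports whether it was found at all.
func filterDetachedDisk(disks []dataDisk, diskName, diskURI string) ([]dataDisk, bool) {
	kept := make([]dataDisk, 0, len(disks))
	found := false
	for _, d := range disks {
		if (diskName != "" && d.Name == diskName) ||
			(diskURI != "" && strings.EqualFold(d.ManagedDiskID, diskURI)) {
			found = true
			continue // drop the disk being detached
		}
		kept = append(kept, d)
	}
	return kept, found
}

// detachDisk mimics the described behaviour: if the disk is no longer in the
// returned data-disk list (e.g. a previous detach already removed it but the
// operation ended with <200, error>), only log the fact and still update the
// VM with the filtered list instead of failing the whole retry.
func detachDisk(disks []dataDisk, diskName, diskURI string) []dataDisk {
	newDisks, found := filterDetachedDisk(disks, diskName, diskURI)
	if !found {
		// Before the fix this case surfaced as an error that cancelled the
		// detach backoff; now it is only logged.
		log.Printf("detach azure disk: disk %s not found, diskURI: %s", diskName, diskURI)
	}
	// The real controller would issue the VM update with newDisks here.
	return newDisks
}

func main() {
	disks := []dataDisk{{Name: "disk-a", ManagedDiskID: "/subscriptions/xxx/disks/disk-a"}}
	remaining := detachDisk(disks, "disk-b", "/subscriptions/xxx/disks/disk-b")
	fmt.Println("remaining data disks:", remaining)
}
```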

Which issue(s) this PR fixes:

Fixes #78660

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

fix: retry detach azure disk issue
only update the VM when detaching a non-existing disk after a <200, error> result from the detach disk operation

/kind bug
/priority important-soon
/sig azure

@antoineco Could you try this patch if possible? Thanks.

k8s-ci-robot added the do-not-merge/work-in-progress, cncf-cla: yes, size/XS, release-note, needs-kind, kind/bug, needs-sig, needs-priority, priority/important-soon, sig/azure, area/cloudprovider and sig/cloud-provider labels, and removed the needs-kind, needs-sig and needs-priority labels, on Jun 4, 2019
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: andyzhangx

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

k8s-ci-robot added the approved label on Jun 4, 2019
@antoineco (Contributor)

I deployed a build of controller-manager that includes this fix in all my affected clusters, and will provide feedback as soon as one of them enters the disk detach backoff. In my experience this can take up to 2 days.

@antoineco (Contributor) commented Jun 5, 2019

I managed to reproduce the issue, and the backoff was not canceled this time 👍

However, the delays are still extremely long:

  • 15min until the first timeout
  • another 8min until attacherDetacher.AttachVolume starts

@andyzhangx is it worth opening a separate issue to discuss that?

controller-manager logs:

# detach starts

W0605 08:07:46.961316       1 reconciler.go:321] Multi-Attach error for volume "pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb" (UniqueName: "kubernetes.io/azure-disk//subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb") from node "k8s-ch4ravjwt6ba-10586089-vmss00000v" Volume is already exclusively attached to node k8s-ch4ravjwt6ba-10586089-vmss00000y and can't be attached to another
I0605 08:07:46.961640       1 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"zookeeper-2", UID:"00ecbb04-8769-11e9-a8c9-000d3a2adff3", APIVersion:"v1", ResourceVersion:"15569013", FieldPath:""}): type: 'Warning' reason: 'FailedAttachVolume' Multi-Attach error for volume "pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb" Volume is already exclusively attached to one node and can't be attached to another
I0605 08:07:48.796100       1 reconciler.go:229] attacherDetacher.DetachVolume started for volume "pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb" (UniqueName: "kubernetes.io/azure-disk//subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb") on node "k8s-ch4ravjwt6ba-10586089-vmss00000y" 
I0605 08:07:48.804996       1 operation_generator.go:1221] Verified volume is safe to detach for volume "pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb" (UniqueName: "kubernetes.io/azure-disk//subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb") on node "k8s-ch4ravjwt6ba-10586089-vmss00000y" 
I0605 08:07:48.813399       1 azure_controller_vmss.go:122] azureDisk - detach disk: name "" uri "/subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb"
I0605 08:07:48.813475       1 azure_controller_vmss.go:152] azureDisk - update(ch4ravjwt6-cust-s1): vm(k8s-ch4ravjwt6ba-10586089-vmss00000y) - detach disk(, /subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb)
I0605 08:08:19.770655       1 attacher.go:145] azureDisk - VolumesAreAttached: check volume "k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb" (specName: "pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb") is no longer attached
I0605 08:08:19.770696       1 operation_generator.go:193] VerifyVolumesAreAttached determined volume "kubernetes.io/azure-disk//subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb" (spec.Name: "pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb") is no longer attached to node "k8s-ch4ravjwt6ba-10586089-vmss00000y", therefore it was marked as detached.

# failure and backoff after a 15 min timeout

I0605 08:22:49.318693       1 azure_controller_common.go:115] azureDisk - update backing off: detach disk(, /subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb), err: azure#WaitForCompletion: context has been cancelled: StatusCode=200 -- Original Error: context deadline exceeded

# error is logged, backoff is not canceled

E0605 08:22:49.344509       1 azure_controller_vmss.go:131] detach azure disk: disk  not found, diskURI: /subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb

# VMSS update is retried

I0605 08:22:49.344563       1 azure_controller_vmss.go:152] azureDisk - update(ch4ravjwt6-cust-s1): vm(k8s-ch4ravjwt6ba-10586089-vmss00000y) - detach disk(, /subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb)

# 8 min until scaleSet.DetachDisk() returns, no backoff

E0605 08:30:22.745112       1 azure_controller_common.go:126] azureDisk - detach disk(, /subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb) failed, err: azure#WaitForCompletion: context has been cancelled: StatusCode=200 -- Original Error: context deadline exceeded
E0605 08:30:22.745150       1 attacher.go:302] failed to detach azure disk "/subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb", err azure#WaitForCompletion: context has been cancelled: StatusCode=200 -- Original Error: context deadline exceeded
I0605 08:30:22.745164       1 attacher.go:305] azureDisk - disk:/subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb was detached from node:k8s-ch4ravjwt6ba-10586089-vmss00000y
E0605 08:30:22.745872       1 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/azure-disk//subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb\"" failed. No retries permitted until 2019-06-05 08:30:23.245832388 +0000 UTC m=+35632.251845028 (durationBeforeRetry 500ms). Error: "DetachVolume.Detach failed for volume \"pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb\" (UniqueName: \"kubernetes.io/azure-disk//subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb\") on node \"k8s-ch4ravjwt6ba-10586089-vmss00000y\" : azure#WaitForCompletion: context has been cancelled: StatusCode=200 -- Original Error: context deadline exceeded"

# attach starts and succeeds

I0605 08:30:23.346056       1 reconciler.go:281] attacherDetacher.AttachVolume started for volume "pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb" (UniqueName: "kubernetes.io/azure-disk//subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb") from node "k8s-ch4ravjwt6ba-10586089-vmss00000v" 
I0605 08:30:23.346441       1 azure_controller_vmss.go:87] azureDisk - update(ch4ravjwt6-cust-s1): vm(k8s-ch4ravjwt6ba-10586089-vmss00000v) - attach disk(k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb, /subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb)
I0605 08:30:43.903669       1 azure_controller_vmss.go:97] azureDisk - attach disk(k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb, /subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb) succeeded
I0605 08:30:43.903794       1 operation_generator.go:332] AttachVolume.Attach succeeded for volume "pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb" (UniqueName: "kubernetes.io/azure-disk//subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb") from node "k8s-ch4ravjwt6ba-10586089-vmss00000v" 
I0605 08:30:43.903959       1 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"zookeeper-2", UID:"00ecbb04-8769-11e9-a8c9-000d3a2adff3", APIVersion:"v1", ResourceVersion:"15569013", FieldPath:""}): type: 'Normal' reason: 'SuccessfulAttachVolume' AttachVolume.Attach succeeded for volume "pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb" 
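
As an aside on the "context deadline exceeded" error in these logs, the following is a minimal Go sketch (not the actual Azure SDK WaitForCompletion implementation) of how a long-running-operation wait bounded by a context deadline can fail this way even though every individual poll returns HTTP 200. `waitForCompletion` and its `poll` callback are illustrative names, and the timings are scaled down from the roughly 15 minute wait seen above.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// waitForCompletion polls an operation until it reports done or the parent
// context expires. If the deadline hits first, the failure surfaces as
// "context deadline exceeded" regardless of the HTTP status of each poll.
func waitForCompletion(ctx context.Context, poll func() (done bool, statusCode int, err error)) error {
	ticker := time.NewTicker(100 * time.Millisecond) // real code polls far less often
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// Mirrors the shape of the log line above:
			// "WaitForCompletion: context has been cancelled: StatusCode=200 -- context deadline exceeded".
			return fmt.Errorf("WaitForCompletion: context has been cancelled: %w", ctx.Err())
		case <-ticker.C:
			done, _, err := poll()
			if err != nil {
				return err
			}
			if done {
				return nil
			}
			// Each poll may well return 200 while the VM is still "Updating".
		}
	}
}

func main() {
	// A deadline much shorter than the simulated operation, standing in for
	// the ~15 min wait observed in the controller-manager logs.
	ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
	defer cancel()

	start := time.Now()
	err := waitForCompletion(ctx, func() (bool, int, error) {
		return time.Since(start) > 2*time.Second, 200, nil // operation "finishes" after 2s
	})
	fmt.Println("detach wait result:", err)
}
```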

@andyzhangx (Member, Author)

@antoineco It looks like after the detach backoff retry (although it still failed with a timeout), the attach disk operation could proceed, so there is no need for a manual VM update now, right? 15 min is long, but I currently have no idea what a reasonable value for that timeout would be.

@antoineco (Contributor)

Right, no need for a manual VMSS update.
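
To illustrate the backoff behaviour being discussed, here is a small, hypothetical Go sketch built on k8s.io/apimachinery/pkg/util/wait (a generic illustration, not the controller's actual code): returning a non-nil error from the condition cancels ExponentialBackoff immediately, which is how the "disk not found" error used to cancel the detach retries, while treating it as non-fatal lets the retry continue. `retryDetach` and the sentinel error are invented for this example.

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// errDiskNotFound stands in for the "detach azure disk: disk not found" error
// seen in the logs above.
var errDiskNotFound = errors.New("detach azure disk: disk not found")

// retryDetach runs detach under exponential backoff. Returning a non-nil
// error from the condition aborts all remaining retries, whereas returning
// (false, nil) lets the backoff continue.
func retryDetach(detach func() error) error {
	backoff := wait.Backoff{Duration: 10 * time.Millisecond, Factor: 2.0, Jitter: 0.1, Steps: 4}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		err := detach()
		if errors.Is(err, errDiskNotFound) {
			// Pre-fix behaviour: propagating this error cancelled the backoff.
			// Post-fix behaviour (as described in this PR): treat it as
			// non-fatal so the VM update retry can proceed.
			return false, nil
		}
		return err == nil, nil
	})
}

func main() {
	calls := 0
	err := retryDetach(func() error {
		calls++
		if calls < 3 {
			return errDiskNotFound // first attempts hit the "disk not found" path
		}
		return nil // eventually the VM update goes through
	})
	fmt.Printf("calls=%d err=%v\n", calls, err)
}
```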

andyzhangx changed the title from "[WIP]fix: update vm if detach a non-existing disk" to "fix: update vm if detach a non-existing disk" on Jun 6, 2019
k8s-ci-robot removed the do-not-merge/work-in-progress label on Jun 6, 2019
@andyzhangx (Member, Author)

/assign @feiskyer
Thanks for confirming. I'll open another PR to address the long timeout issue; let's merge this PR first.
@feiskyer is it possible to merge this PR into v1.15?

andyzhangx changed the title from "fix: update vm if detach a non-existing disk" to "fix: retry detach azure disk issue" on Jun 7, 2019
@andyzhangx (Member, Author)

@antoineco After confirmation from the Azure compute team, the default 15 min timeout value is already short enough and we should not shorten it. If you hit a lot of such timeout issues, I suggest filing an Azure support ticket to find the root cause or mitigate them. Thanks.

@antoineco (Contributor)

Will do, thank you for asking.

@feiskyer (Member) commented Jun 7, 2019

/lgtm
/prioritiy critical-urgent
/milestone v1.15

Added to v1.15 milestone since it's a critical bug fix.

k8s-ci-robot added the lgtm label on Jun 7, 2019
k8s-ci-robot added this to the v1.15 milestone on Jun 7, 2019
@feiskyer (Member) commented Jun 7, 2019

/priority critical-urgent

k8s-ci-robot added the priority/critical-urgent label on Jun 7, 2019
feiskyer added this to In progress in Provider Azure via automation on Jun 7, 2019
k8s-ci-robot merged commit 5959452 into kubernetes:master on Jun 7, 2019
Provider Azure automation moved this from In progress to Done Jun 7, 2019
k8s-ci-robot added a commit that referenced this pull request Jun 20, 2019
…8700-upstream-release-1.14

Automated cherry pick of #78700: fix: update vm if detach a non-existing disk
k8s-ci-robot added a commit that referenced this pull request Jun 21, 2019
…8700-upstream-release-1.13

Automated cherry pick of #78700: fix: update vm if detach a non-existing disk
k8s-ci-robot added a commit that referenced this pull request Jun 21, 2019
…8700-upstream-release-1.12

Automated cherry pick of #78700: fix: update vm if detach a non-existing disk
Labels
approved · area/cloudprovider · cncf-cla: yes · kind/bug · lgtm · priority/critical-urgent · priority/important-soon · release-note · sig/cloud-provider · size/XS
Development

Successfully merging this pull request may close these issues.

Azure Disk detach backoff cancelled due to error "detach azure disk failure, disk not found"
4 participants