
fix: retry detach azure disk issue #78700

Merged

merged 1 commit into kubernetes:master from andyzhangx:azuredisk-detach-retry Jun 7, 2019

Conversation

@andyzhangx (Member) commented Jun 4, 2019

What type of PR is this?
/kind bug

What this PR does / why we need it:
When a <200, error> result is returned after a detach disk operation, #78298 would retry the detach, but the retry can never succeed because the "detaching" disk is no longer present in the returned data disk list. This PR changes the logic to only update the VM when asked to detach a disk that is not in that list.
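
A minimal Go sketch of the idea, using hypothetical stand-in types and helpers rather than the actual cloud provider code (the real change lives in the detach path, e.g. azure_controller_vmss.go):

```go
package main

import (
	"fmt"
	"strings"
)

// Minimal stand-ins for the Azure SDK types (hypothetical; the real fix
// operates on compute.DataDisk in the VMSS and standard VM controllers).
type dataDisk struct {
	name string
	uri  string
}

type vm struct {
	dataDisks []dataDisk
}

// detachDiskWithCheck sketches the retry logic: if the disk is still in
// the VM's data disk list, a real detach is issued; if it is already
// gone (e.g. stuck "detaching" after a <200, error> response), retrying
// the detach cannot succeed, so only a plain VM update is issued to
// refresh the VM state.
func detachDiskWithCheck(v *vm, diskName, diskURI string, update, detach func(*vm) error) error {
	for _, d := range v.dataDisks {
		if strings.EqualFold(d.name, diskName) || strings.EqualFold(d.uri, diskURI) {
			return detach(v) // disk still attached: a real detach is needed
		}
	}
	return update(v) // disk not found: bare VM update instead of a detach
}

func main() {
	v := &vm{} // the data disk list no longer contains the disk
	_ = detachDiskWithCheck(v, "pvc-disk", "/subscriptions/sub/disks/pvc-disk",
		func(*vm) error { fmt.Println("update VM only"); return nil },
		func(*vm) error { fmt.Println("detach disk"); return nil },
	)
}
```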

Which issue(s) this PR fixes:

Fixes #78660

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

fix: retry detach azure disk issue
only update the VM when detaching a non-existing disk after a <200, error> result from a detach disk operation

/kind bug
/priority important-soon
/sig azure

@antoineco could you try this patch if possible? Thanks.

@k8s-ci-robot (Contributor) commented Jun 4, 2019

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: andyzhangx

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot requested review from karataliu and khenidak Jun 4, 2019

@andyzhangx force-pushed the andyzhangx:azuredisk-detach-retry branch from 655dd8b to 91f97d2 Jun 4, 2019

@antoineco (Contributor) commented Jun 5, 2019

I deployed a build of controller-manager that includes this fix in all my affected clusters and will provide feedback as soon as one of them enters the disk detach backoff. In my experience it can take up to 2 days.

@antoineco (Contributor) commented Jun 5, 2019

I managed to reproduce the issue, and the backoff was not canceled this time 👍

However, delays are still extremely long:

  • 15min until the first timeout
  • another 8min until attacherDetacher.AttachVolume starts

@andyzhangx is it worth opening a separate issue to discuss that?

controller-manager logs:

# detach starts

W0605 08:07:46.961316       1 reconciler.go:321] Multi-Attach error for volume "pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb" (UniqueName: "kubernetes.io/azure-disk//subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb") from node "k8s-ch4ravjwt6ba-10586089-vmss00000v" Volume is already exclusively attached to node k8s-ch4ravjwt6ba-10586089-vmss00000y and can't be attached to another
I0605 08:07:46.961640       1 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"zookeeper-2", UID:"00ecbb04-8769-11e9-a8c9-000d3a2adff3", APIVersion:"v1", ResourceVersion:"15569013", FieldPath:""}): type: 'Warning' reason: 'FailedAttachVolume' Multi-Attach error for volume "pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb" Volume is already exclusively attached to one node and can't be attached to another
I0605 08:07:48.796100       1 reconciler.go:229] attacherDetacher.DetachVolume started for volume "pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb" (UniqueName: "kubernetes.io/azure-disk//subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb") on node "k8s-ch4ravjwt6ba-10586089-vmss00000y" 
I0605 08:07:48.804996       1 operation_generator.go:1221] Verified volume is safe to detach for volume "pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb" (UniqueName: "kubernetes.io/azure-disk//subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb") on node "k8s-ch4ravjwt6ba-10586089-vmss00000y" 
I0605 08:07:48.813399       1 azure_controller_vmss.go:122] azureDisk - detach disk: name "" uri "/subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb"
I0605 08:07:48.813475       1 azure_controller_vmss.go:152] azureDisk - update(ch4ravjwt6-cust-s1): vm(k8s-ch4ravjwt6ba-10586089-vmss00000y) - detach disk(, /subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb)
I0605 08:08:19.770655       1 attacher.go:145] azureDisk - VolumesAreAttached: check volume "k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb" (specName: "pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb") is no longer attached
I0605 08:08:19.770696       1 operation_generator.go:193] VerifyVolumesAreAttached determined volume "kubernetes.io/azure-disk//subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb" (spec.Name: "pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb") is no longer attached to node "k8s-ch4ravjwt6ba-10586089-vmss00000y", therefore it was marked as detached.

# failure and backoff after a 15 min timeout

I0605 08:22:49.318693       1 azure_controller_common.go:115] azureDisk - update backing off: detach disk(, /subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb), err: azure#WaitForCompletion: context has been cancelled: StatusCode=200 -- Original Error: context deadline exceeded

# error is logged, backoff is not canceled

E0605 08:22:49.344509       1 azure_controller_vmss.go:131] detach azure disk: disk  not found, diskURI: /subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb

# VMSS update is retried

I0605 08:22:49.344563       1 azure_controller_vmss.go:152] azureDisk - update(ch4ravjwt6-cust-s1): vm(k8s-ch4ravjwt6ba-10586089-vmss00000y) - detach disk(, /subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb)

# 8 min until scaleSet.DetachDisk() returns, no backoff

E0605 08:30:22.745112       1 azure_controller_common.go:126] azureDisk - detach disk(, /subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb) failed, err: azure#WaitForCompletion: context has been cancelled: StatusCode=200 -- Original Error: context deadline exceeded
E0605 08:30:22.745150       1 attacher.go:302] failed to detach azure disk "/subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb", err azure#WaitForCompletion: context has been cancelled: StatusCode=200 -- Original Error: context deadline exceeded
I0605 08:30:22.745164       1 attacher.go:305] azureDisk - disk:/subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb was detached from node:k8s-ch4ravjwt6ba-10586089-vmss00000y
E0605 08:30:22.745872       1 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/azure-disk//subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb\"" failed. No retries permitted until 2019-06-05 08:30:23.245832388 +0000 UTC m=+35632.251845028 (durationBeforeRetry 500ms). Error: "DetachVolume.Detach failed for volume \"pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb\" (UniqueName: \"kubernetes.io/azure-disk//subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb\") on node \"k8s-ch4ravjwt6ba-10586089-vmss00000y\" : azure#WaitForCompletion: context has been cancelled: StatusCode=200 -- Original Error: context deadline exceeded"

# attach starts and succeeds

I0605 08:30:23.346056       1 reconciler.go:281] attacherDetacher.AttachVolume started for volume "pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb" (UniqueName: "kubernetes.io/azure-disk//subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb") from node "k8s-ch4ravjwt6ba-10586089-vmss00000v" 
I0605 08:30:23.346441       1 azure_controller_vmss.go:87] azureDisk - update(ch4ravjwt6-cust-s1): vm(k8s-ch4ravjwt6ba-10586089-vmss00000v) - attach disk(k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb, /subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb)
I0605 08:30:43.903669       1 azure_controller_vmss.go:97] azureDisk - attach disk(k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb, /subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb) succeeded
I0605 08:30:43.903794       1 operation_generator.go:332] AttachVolume.Attach succeeded for volume "pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb" (UniqueName: "kubernetes.io/azure-disk//subscriptions/<subs>/resourceGroups/ch4ravjwt6-cust-s1/providers/Microsoft.Compute/disks/k8s-mstr-ch4ravjwt6-cust-s1--pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb") from node "k8s-ch4ravjwt6ba-10586089-vmss00000v" 
I0605 08:30:43.903959       1 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"zookeeper-2", UID:"00ecbb04-8769-11e9-a8c9-000d3a2adff3", APIVersion:"v1", ResourceVersion:"15569013", FieldPath:""}): type: 'Normal' reason: 'SuccessfulAttachVolume' AttachVolume.Attach succeeded for volume "pvc-2dacd69e-4ae4-11e9-a42a-000d3a2adffb" 
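
For readers unfamiliar with the retry gating visible above ("No retries permitted until ... durationBeforeRetry 500ms"), here is a minimal, hypothetical sketch of exponential backoff in the style used by the volume manager (the real logic lives in nestedpendingoperations.go); a success resets the backoff, which is what "canceling" it means in this thread:

```go
package main

import (
	"fmt"
	"time"
)

// exponentialBackoff sketches the retry gating seen in the logs.
// Hypothetical; not the actual nestedpendingoperations implementation.
type exponentialBackoff struct {
	duration    time.Duration // wait required before the next retry
	maxDuration time.Duration // upper bound on the wait
	lastError   time.Time     // when the operation last failed
}

// next records a failure and doubles the wait, capped at maxDuration.
func (b *exponentialBackoff) next(now time.Time) {
	if b.duration == 0 {
		b.duration = 500 * time.Millisecond // initial durationBeforeRetry
	} else {
		b.duration *= 2
		if b.duration > b.maxDuration {
			b.duration = b.maxDuration
		}
	}
	b.lastError = now
}

// retryAllowed reports whether enough time has passed since the last
// failure. A successful operation would reset the struct entirely.
func (b *exponentialBackoff) retryAllowed(now time.Time) bool {
	return now.After(b.lastError.Add(b.duration))
}

func main() {
	b := &exponentialBackoff{maxDuration: 10 * time.Minute}
	now := time.Now()
	for i := 0; i < 4; i++ {
		b.next(now)
		fmt.Printf("failure %d: no retries permitted for %v\n", i+1, b.duration)
	}
	fmt.Println("retry allowed immediately?", b.retryAllowed(now)) // false
}
```
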
@andyzhangx (Member, Author) commented Jun 6, 2019

@antoineco It looks like after the detach backoff retry (although it still failed with a timeout), the attach disk operation can proceed, so there is no need to do a manual VM update now, right? 15 min is long, but currently I have no idea what a reasonable value for that timeout would be.
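
For context, a minimal sketch (hypothetical, not the actual SDK code) of why the logs report "azure#WaitForCompletion: context has been cancelled ... context deadline exceeded" even with StatusCode=200: the ARM operation is polled under a caller-supplied context, and once the deadline expires polling is abandoned regardless of the operation's eventual outcome:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitForCompletion polls an async operation until it finishes or the
// caller's context expires. Hypothetical loop standing in for the Azure
// SDK's WaitForCompletion; it shows how a long-running detach surfaces
// as "context deadline exceeded" even though the HTTP status was 200.
func waitForCompletion(ctx context.Context, poll func() (done bool, err error)) error {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("WaitForCompletion: context has been cancelled: %w", ctx.Err())
		case <-ticker.C:
			if done, err := poll(); done || err != nil {
				return err
			}
		}
	}
}

func main() {
	// The controller uses roughly a 15 min deadline; a short one here
	// keeps the demo fast.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	err := waitForCompletion(ctx, func() (bool, error) { return false, nil }) // never completes
	fmt.Println(errors.Is(err, context.DeadlineExceeded))                     // true
}
```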

@antoineco (Contributor) commented Jun 6, 2019

Right, no need for a manual VMSS update.

@andyzhangx changed the title from "[WIP]fix: update vm if detach a non-existing disk" to "fix: update vm if detach a non-existing disk" Jun 6, 2019

@andyzhangx (Member, Author) commented Jun 6, 2019

/assign @feiskyer
Thanks for confirming. Let me open another PR to address the long timeout issue; let's merge this PR first.
@feiskyer is it possible to merge this PR into v1.15?

@andyzhangx changed the title from "fix: update vm if detach a non-existing disk" to "fix: retry detach azure disk issue" Jun 7, 2019

@andyzhangx (Member, Author) commented Jun 7, 2019

@antoineco After confirmation from the Azure compute team: the default 15 min timeout value is short enough, and we should not shorten it. If you are seeing lots of such timeout issues, I think you should file an Azure support ticket to find the root cause or a mitigation. Thanks.

@antoineco (Contributor) commented Jun 7, 2019

Will do, thank you for asking.

@feiskyer (Member) commented Jun 7, 2019

/lgtm
/prioritiy critical-urgent
/milestone v1.15

Added to v1.15 milestone since it's a critical bug fix.

@k8s-ci-robot added the lgtm label Jun 7, 2019

@k8s-ci-robot added this to the v1.15 milestone Jun 7, 2019

@feiskyer (Member) commented Jun 7, 2019

/priority critical-urgent

@feiskyer added this to In progress in Provider Azure via automation Jun 7, 2019

@k8s-ci-robot merged commit 5959452 into kubernetes:master Jun 7, 2019

21 checks passed

cla/linuxfoundation: andyzhangx authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-conformance-image-test: Skipped.
pull-kubernetes-cross: Skipped.
pull-kubernetes-dependencies: Job succeeded.
pull-kubernetes-e2e-gce: Job succeeded.
pull-kubernetes-e2e-gce-100-performance: Job succeeded.
pull-kubernetes-e2e-gce-csi-serial: Skipped.
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-e2e-gce-storage-slow: Skipped.
pull-kubernetes-godeps: Skipped.
pull-kubernetes-integration: Job succeeded.
pull-kubernetes-kubemark-e2e-gce-big: Job succeeded.
pull-kubernetes-local-e2e: Skipped.
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-node-e2e-containerd: Job succeeded.
pull-kubernetes-typecheck: Job succeeded.
pull-kubernetes-verify: Job succeeded.
pull-publishing-bot-validate: Skipped.
tide: In merge pool.

Provider Azure automation moved this from In progress to Done Jun 7, 2019

k8s-ci-robot added a commit that referenced this pull request Jun 20, 2019
Merge pull request #78790 from andyzhangx/automated-cherry-pick-of-#78700-upstream-release-1.14

Automated cherry pick of #78700: fix: update vm if detach a non-existing disk
k8s-ci-robot added a commit that referenced this pull request Jun 21, 2019
Merge pull request #78791 from andyzhangx/automated-cherry-pick-of-#78700-upstream-release-1.13

Automated cherry pick of #78700: fix: update vm if detach a non-existing disk
k8s-ci-robot added a commit that referenced this pull request Jun 21, 2019
Merge pull request #78792 from andyzhangx/automated-cherry-pick-of-#78700-upstream-release-1.12

Automated cherry pick of #78700: fix: update vm if detach a non-existing disk