
MountVolume.WaitForAttach failed - strconv.Atoi: parsing "": invalid syntax #761

Closed
derekperkins opened this issue Dec 18, 2018 · 11 comments

Comments

@derekperkins
Contributor

I currently have this error on 2 separate disks after upgrading a few stateful sets.

MountVolume.WaitForAttach failed for volume "pvc-abc-123" : strconv.Atoi: parsing "": invalid syntax

I'm running 1.11.5 in centralus.
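
For reference, the full error text shows up in the pod's events; a couple of ways to pull it, with POD-NAME and NAMESPACE as placeholders:

# Show the pod's events, including the FailedMount / WaitForAttach warnings
kubectl describe po POD-NAME -n NAMESPACE

# Or filter the namespace's events down to that pod
kubectl get events -n NAMESPACE --field-selector involvedObject.name=POD-NAME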

cc @andyzhangx

@derekperkins
Contributor Author

Here's the PV object in question

{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "pvc-abc-123",
    "selfLink": "/api/v1/persistentvolumes/pvc-abc-123",
    "uid": "abc-123",
    "resourceVersion": "21262284",
    "creationTimestamp": "2018-12-04T21:52:58Z",
    "annotations": {
      "pv.kubernetes.io/bound-by-controller": "yes",
      "pv.kubernetes.io/provisioned-by": "kubernetes.io/azure-disk",
      "volumehelper.VolumeDynamicallyCreatedByKey": "azure-disk-dynamic-provisioner"
    },
    "finalizers": [
      "kubernetes.io/pv-protection"
    ]
  },
  "spec": {
    "capacity": {
      "storage": "256Gi"
    },
    "azureDisk": {
      "diskName": "kubernetes-dynamic-pvc-abc-123",
      "diskURI": "/subscriptions/qwerty/resourceGroups/MC_nozzle/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-abc-123",
      "cachingMode": "None",
      "fsType": "",
      "readOnly": false,
      "kind": "Managed"
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "claimRef": {
      "kind": "PersistentVolumeClaim",
      "namespace": "vitess",
      "name": "vtdataroot-az-domains-x-80-replica-0",
      "uid": "123",
      "apiVersion": "v1",
      "resourceVersion": "21262095"
    },
    "persistentVolumeReclaimPolicy": "Delete",
    "storageClassName": "managed-premium"
  },
  "status": {
    "phase": "Bound"
  }
}

@andyzhangx
Contributor

Could you run kubectl get no NODE-NAME -o yaml and paste the volumesAttached section? The LUN for your disk (pvc-abc-123) is empty, which is quite odd. Is that a disk you created yourself?
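
If it helps, a jsonpath query can pull just the attached volumes and their devicePath (i.e. LUN) values; NODE-NAME is a placeholder:

# List each attached disk with its devicePath (LUN) on the node
kubectl get no NODE-NAME -o jsonpath='{range .status.volumesAttached[*]}{.devicePath}{"\t"}{.name}{"\n"}{end}'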

@andyzhangx
Contributor

Could you also follow this guide to collect info from that agent VM? Thanks:
https://github.com/andyzhangx/demo/blob/master/linux/azuredisk/azuredisk-attachment-debugging.md

@derekperkins
Contributor Author

derekperkins commented Dec 18, 2018

These were created by a PVC. Device 2 is the disk that isn't working.

  nodeInfo:
    architecture: amd64
    bootID: 274cb9bf-7ce3-4428-ba74-04daf228f67b
    containerRuntimeVersion: docker://1.13.1
    kernelVersion: 4.15.0-1030-azure
    kubeProxyVersion: v1.11.5
    kubeletVersion: v1.11.5
    machineID: d22e87dff97f4cb1ad2aa152d3ed9ba6
    operatingSystem: linux
    osImage: Ubuntu 16.04.5 LTS
    systemUUID: 8971D27B-6459-464D-A58A-DA9FE2A7DC45
  volumesAttached:
  - devicePath: "0"
    name: kubernetes.io/azure-disk//subscriptions/389d0528-6d41-460e-961d-4ee3dc3f9a15/resourceGroups/MC_nozzle-central_nzcentral_centralus/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-97039e42-b3ec-11e8-bf5c-0a58ac1f0f2f
  - devicePath: "1"
    name: kubernetes.io/azure-disk//subscriptions/389d0528-6d41-460e-961d-4ee3dc3f9a15/resourceGroups/MC_nozzle-central_nzcentral_centralus/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-ef1a2bf0-f75d-11e8-868c-0a58ac1f1065
  - devicePath: "2"
    name: kubernetes.io/azure-disk//subscriptions/389d0528-6d41-460e-961d-4ee3dc3f9a15/resourceGroups/MC_nozzle-central_nzcentral_centralus/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-f39b5046-f80e-11e8-a49c-0a58ac1f06f6
  - devicePath: "3"
    name: kubernetes.io/azure-disk//subscriptions/389d0528-6d41-460e-961d-4ee3dc3f9a15/resourceGroups/MC_nozzle-central_nzcentral_centralus/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-c31cfc58-fd9f-11e8-a49c-0a58ac1f06f6
  volumesInUse:
  - kubernetes.io/azure-disk//subscriptions/389d0528-6d41-460e-961d-4ee3dc3f9a15/resourceGroups/MC_nozzle-central_nzcentral_centralus/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-97039e42-b3ec-11e8-bf5c-0a58ac1f0f2f
  - kubernetes.io/azure-disk//subscriptions/389d0528-6d41-460e-961d-4ee3dc3f9a15/resourceGroups/MC_nozzle-central_nzcentral_centralus/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-c31cfc58-fd9f-11e8-a49c-0a58ac1f06f6
  - kubernetes.io/azure-disk//subscriptions/389d0528-6d41-460e-961d-4ee3dc3f9a15/resourceGroups/MC_nozzle-central_nzcentral_centralus/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-ef1a2bf0-f75d-11e8-868c-0a58ac1f1065
  - kubernetes.io/azure-disk//subscriptions/389d0528-6d41-460e-961d-4ee3dc3f9a15/resourceGroups/MC_nozzle-central_nzcentral_centralus/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-f39b5046-f80e-11e8-a49c-0a58ac1f06f6
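
As a cross-check on the agent VM itself (this assumes the standard Azure udev rules that ship with the image; nothing below comes from this thread), each devicePath above should show up as a LUN symlink, so device 2 should appear as lun2:

# On the agent VM: data disks appear as lunN symlinks (devicePath 2 should map to lun2)
ls -l /dev/disk/azure/scsi1/

# Raw SCSI view with host:channel:target:lun addresses (lsscsi may need to be installed first)
sudo lsscsi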

@andyzhangx
Contributor

What's the error for device 2 (kubernetes-dynamic-pvc-f39b5046-f80e-11e8-a49c-0a58ac1f06f6), then? Could you paste the full pod info from kubectl describe po POD-NAME?


@jungopro

Hello @andyzhangx, I believe I'm seeing similar issues with my cluster as well; see details here.
I tried deploying several StatefulSets with no luck. It looks like the PVC creates the PV in Kubernetes and the disk is created in Azure, but the disk never gets attached to the node.

I'd appreciate any assistance

@andyzhangx
Contributor

@jungopro Your issue is related to #477 (comment); v1.11.6 fixes it.

@derekperkins
Contributor Author

@andyzhangx helped me track down the root cause of the issue to kubernetes/kubernetes#67342. The fix has landed in upstream 1.13, but hasn't been cherry-picked back to 1.12 or 1.11 yet.

Thanks for your help @andyzhangx!

@derekperkins
Contributor Author

For anyone finding this issue in the future, the workaround is to just delete the pod. It should successfully mount the disk after one or two tries.
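
A minimal sketch of that workaround with placeholder names (the StatefulSet controller recreates the pod, and the attach/mount is retried):

# Delete the stuck pod; its StatefulSet recreates it and the disk attach/mount is retried
kubectl delete po POD-NAME -n NAMESPACE

# Watch the replacement pod come back up
kubectl get po -n NAMESPACE -w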

@andyzhangx
Contributor

FYI, someone hit this issue again, so here are the fixed versions:

k8s version    fixed version
v1.10          no fix
v1.11          1.11.7
v1.12          1.12.5
v1.13          no such issue
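
One way to check which kubelet version each node is running against the table above (plain kubectl, nothing specific to this issue):

# Print each node's kubelet version to compare with the fixed-version table
kubectl get no -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kubeletVersion}{"\n"}{end}'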

@ghost ghost locked as resolved and limited conversation to collaborators Jul 27, 2020