volume is in pending state in case of waitforfirstconsumer #31

Open
w3aman opened this issue Mar 5, 2021 · 3 comments
Comments

w3aman (Contributor) commented Mar 5, 2021

What steps did you take and what happened:

  • node-1 has 30Gi of free space and node-2 has 50Gi.
  • Apply a PVC requesting 40Gi, with a storage class that uses WaitForFirstConsumer and a topology restricted to only these 2 nodes.
  • The PVC is now pending, waiting for its first consumer.
  • Drain node-2, the one with more than 40Gi of free space.
  • Apply the busybox YAML that uses this PVC. Volume creation is now attempted but fails with: Volume group "lvmvg" has insufficient free space (7679 extents): 10240 required.
  • Uncordon node-2.

Now both the pod and the PVC are pending. Will the driver not try to provision on node-2 rather than node-1?
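
For reference, a minimal sketch of the PVC being applied (the claim name testpvc, storage class sc-wfc, size, and access mode are taken from the outputs below; the manifest itself is reconstructed):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testpvc
spec:
  storageClassName: sc-wfc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 40Gi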

Events:
  Type     Reason                Age                 From                                                                                Message
  ----     ------                ----                ----                                                                                -------
  Normal   WaitForFirstConsumer  24m (x26 over 30m)  persistentvolume-controller                                                         waiting for first consumer to be created before binding
  Warning  ProvisioningFailed    21m (x12 over 24m)  local.csi.openebs.io_openebs-lvm-controller-1_b30b6ebc-490c-4b35-943e-9e117e9a5b19  failed to provision volume with StorageClass "sc-wfc": rpc error: code = ResourceExhausted desc =   Volume group "lvmvg" has insufficient free space (7679 extents): 10240 required.
 - exit status 5
  Normal  ExternalProvisioning  5m41s (x94 over 24m)  persistentvolume-controller                                                         waiting for a volume to be created, either by external provisioner "local.csi.openebs.io" or manually created by system administrator
  Normal  Provisioning          4m3s (x89 over 24m)   local.csi.openebs.io_openebs-lvm-controller-1_b30b6ebc-490c-4b35-943e-9e117e9a5b19  External provisioner is provisioning volume for claim "default/testpvc"
  Normal  WaitForPodScheduled   35s (x185 over 24m)   persistentvolume-controller                                                         waiting for pod app-busybox-77b58cb5dd-wlv8g to be scheduled

pod describe:

Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  4m24s (x89 over 24m)  default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: provisioning failed for PVC "testpvc"

devuser@rack2:~/lvm$ kubectl get po,pvc
NAME                               READY   STATUS    RESTARTS   AGE
pod/app-busybox-77b58cb5dd-wlv8g   0/1     Pending   0          24m

NAME                            STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/testpvc   Pending                                      sc-wfc         31m

StorageClass yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-wfc
allowVolumeExpansion: true
parameters:
  volgroup: "lvmvg"
provisioner: local.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
    - k8s-node1
    - k8s-node2
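
For completeness, a sketch of the busybox workload consuming the claim (the deployment name app-busybox matches the pod name above; the image, command, and mount path are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox                 # assumed image
        command: ["sleep", "3600"]     # assumed command to keep the pod running
        volumeMounts:
        - name: data
          mountPath: /data             # assumed mount path
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: testpvc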

controller log:

- exit status 5
I0305 09:37:11.908970       1 grpc.go:72] GRPC call: /csi.v1.Controller/CreateVolume requests {"accessibility_requirements":{"preferred":[{"segments":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"k8s-node1","kubernetes.io/os":"linux","openebs.io/nodename":"k8s-node1"}}],"requisite":[{"segments":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"k8s-node1","kubernetes.io/os":"linux","openebs.io/nodename":"k8s-node1"}}]},"capacity_range":{"required_bytes":42949672960},"name":"pvc-f595653e-8e3b-4bd9-b808-90bc341d4c3a","parameters":{"csi.storage.k8s.io/pv/name":"pvc-f595653e-8e3b-4bd9-b808-90bc341d4c3a","csi.storage.k8s.io/pvc/name":"testpvc","csi.storage.k8s.io/pvc/namespace":"default","volgroup":"lvmvg"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I0305 09:37:11.928106       1 controller.go:200] scheduling the volume lvmvg/pvc-f595653e-8e3b-4bd9-b808-90bc341d4c3a on node k8s-node1
I0305 09:37:11.935291       1 volume.go:89] provisioned volume pvc-f595653e-8e3b-4bd9-b808-90bc341d4c3a
I0305 09:37:12.958891       1 volume.go:99] deprovisioned volume pvc-f595653e-8e3b-4bd9-b808-90bc341d4c3a
E0305 09:37:13.977133       1 grpc.go:79] GRPC error: rpc error: code = ResourceExhausted desc =   Volume group "lvmvg" has insufficient free space (7679 extents): 10240 required.
- exit status 5
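
The extent math in the error is consistent with the setup described above: with LVM's default 4MiB physical extent size, the 10240 required extents are exactly 40GiB (the requested size), while the 7679 free extents are roughly 30GiB, i.e. node-1's volume group. A quick way to confirm this on the node (a sketch; the columns are standard lvm2 report fields):

vgs lvmvg -o vg_name,vg_free,vg_free_count,vg_extent_size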

node1-agent:

E0305 09:37:54.000400       1 lvm_util.go:116] lvm: could not create volume lvmvg/pvc-f595653e-8e3b-4bd9-b808-90bc341d4c3a cmd [-L 42949672960b -n pvc-f595653e-8e3b-4bd9-b808-90bc341d4c3a lvmvg] error:   Volume group "lvmvg" has insufficient free space (7679 extents): 10240 required.
I0305 09:37:54.007690       1 volume.go:265] Successfully synced 'openebs/pvc-f595653e-8e3b-4bd9-b808-90bc341d4c3a'
I0305 09:37:54.961976       1 volume.go:155] Got update event for deleted Vol lvmvg/pvc-f595653e-8e3b-4bd9-b808-90bc341d4c3a
I0305 09:37:54.962038       1 lvm_util.go:135] lvm: volume (lvmvg/pvc-f595653e-8e3b-4bd9-b808-90bc341d4c3a) doesn't exists, skipping its deletion
I0305 09:37:54.973385       1 volume.go:180] Got delete event for Vol lvmvg/pvc-f595653e-8e3b-4bd9-b808-90bc341d4c3a
I0305 09:37:54.973979       1 volume.go:265] Successfully synced 'openebs/pvc-f595653e-8e3b-4bd9-b808-90bc341d4c3a'
E0305 09:37:54.974058       1 volume.go:51] lvmvolume 'openebs/pvc-f595653e-8e3b-4bd9-b808-90bc341d4c3a' has been deleted
I0305 09:37:54.974083       1 volume.go:265] Successfully synced 'openebs/pvc-f595653e-8e3b-4bd9-b808-90bc341d4c3a'
w3aman (Contributor, Author) commented Mar 5, 2021

cc: @iyashu

iyashu (Contributor) commented Mar 5, 2021

@w3aman It may or may not be scheduled on node-2 in the above example. It is not guaranteed that the pod will be scheduled on the node that has free storage space, since kube-scheduler scores nodes on other parameters; it is not aware of the storage capacity accessible from each node at all. The important part here is that we can see the pod being retried for rescheduling, and it may or may not land on a node with enough capacity to fit the PVC claim. Once we merge the storage capacity tracking pull request (#21), this problem will go away.
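
For context, storage capacity tracking makes the CSI external-provisioner publish per-node free capacity as CSIStorageCapacity objects, which the scheduler can consult before picking a node for a WaitForFirstConsumer claim. A sketch of such an object for node-2 (values taken from this example; the object name is assumed, and the API group was storage.k8s.io/v1beta1 or v1alpha1 on clusters of that era):

apiVersion: storage.k8s.io/v1beta1
kind: CSIStorageCapacity
metadata:
  name: node2-lvmvg-capacity   # assumed name; real objects are auto-generated
  namespace: openebs
storageClassName: sc-wfc
nodeTopology:
  matchLabels:
    kubernetes.io/hostname: k8s-node2
capacity: 50Gi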

@pawanpraka1 pawanpraka1 added this to To do in LVM Local PV Mar 18, 2021
@pawanpraka1 pawanpraka1 moved this from To do to Near term in LVM Local PV Mar 18, 2021
zwForrest commented

In the case of WaitForFirstConsumer, the pod is scheduled first: the kube-scheduler selects a node based on its own parameters, and the LVM driver does not participate in that Kubernetes scheduling decision. This means the node selected by the LVM driver and the one selected by the Kubernetes scheduler may not be the same.
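
One way to see which node the scheduler actually picked for a delayed-binding claim is the selected-node annotation that kube-scheduler sets on the PVC (a sketch, using the claim from this issue):

kubectl get pvc testpvc -o jsonpath='{.metadata.annotations.volume\.kubernetes\.io/selected-node}'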
