node-1 has 30Gi of free space and node-2 has 50Gi.
Now apply a PVC requesting 40Gi, using a StorageClass with volumeBindingMode WaitForFirstConsumer and a topology restricted to only these two nodes.
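A sketch of the manifests described above, for reference. The StorageClass name sc-wfc and claim name testpvc appear in the events below; the parameter names follow lvm-localpv conventions and the node hostnames are assumptions:

```yaml
# Sketch only: parameters follow the OpenEBS lvm-localpv conventions;
# the node hostnames are assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-wfc
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
    - node-1
    - node-2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testpvc
spec:
  storageClassName: sc-wfc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 40Gi
```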
The PVC is now Pending, waiting for a consumer.
Drain node-2, the one that has more than 40Gi free.
Apply the busybox YAML that uses this PVC; the volume now fails to create with the error: Volume group "lvmvg" has insufficient free space (7679 extents): 10240 required.
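The extent counts in the error line up with the setup above. Assuming LVM's default 4 MiB physical-extent size (an assumption; the error does not state it), a quick back-of-the-envelope check:

```shell
# 7679 free extents * 4 MiB/extent ~= 30Gi free in "lvmvg" (node-1),
# while the 40Gi PVC needs 10240 extents. The 4 MiB extent size is
# LVM's default; verify on the node with:
#   vgs lvmvg -o vg_extent_size,vg_free_count
pe_mib=4
free_gib=$(( 7679 * pe_mib / 1024 ))   # integer GiB, rounds down to 29
need_gib=$(( 10240 * pe_mib / 1024 )) # exactly 40
echo "free=~${free_gib}Gi need=${need_gib}Gi"
```

So with node-2 drained, the only schedulable node is the one whose volume group is roughly 10Gi short of the request.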
Uncordon node-2.
Now both the pod and the PVC are Pending. Will the driver not try to provision on node-2 rather than node-1?
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 24m (x26 over 30m) persistentvolume-controller waiting for first consumer to be created before binding
Warning ProvisioningFailed 21m (x12 over 24m) local.csi.openebs.io_openebs-lvm-controller-1_b30b6ebc-490c-4b35-943e-9e117e9a5b19 failed to provision volume with StorageClass "sc-wfc": rpc error: code = ResourceExhausted desc = Volume group "lvmvg" has insufficient free space (7679 extents): 10240 required.
- exit status 5
Normal ExternalProvisioning 5m41s (x94 over 24m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "local.csi.openebs.io" or manually created by system administrator
Normal Provisioning 4m3s (x89 over 24m) local.csi.openebs.io_openebs-lvm-controller-1_b30b6ebc-490c-4b35-943e-9e117e9a5b19 External provisioner is provisioning volume for claim "default/testpvc"
Normal WaitForPodScheduled 35s (x185 over 24m) persistentvolume-controller waiting for pod app-busybox-77b58cb5dd-wlv8g to be scheduled
pod describe:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4m24s (x89 over 24m) default-scheduler running PreBind plugin "VolumeBinding": binding volumes: provisioning failed for PVC "testpvc"
devuser@rack2:~/lvm$ kubectl get po,pvc
NAME READY STATUS RESTARTS AGE
pod/app-busybox-77b58cb5dd-wlv8g 0/1 Pending 0 24m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/testpvc Pending sc-wfc 31m
@w3aman It may or may not be scheduled on node-2 in the above example. Note that the pod is not guaranteed to be scheduled on the node with free storage space, since kube-scheduler scores nodes on other parameters; it is not aware of the storage capacity accessible from each node at all. The important part here is that the pod is being retried for scheduling. It may or may not land on a node with enough capacity to fit the PVC claim. Once we merge the storage capacity tracking pull request (#21), this problem will go away.
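For context, storage capacity tracking works by having the driver opt in on its CSIDriver object, after which CSIStorageCapacity objects are published per topology segment and the scheduler consults them before placing the pod. A minimal sketch of the opt-in (the exact rollout depends on PR #21 and on the Kubernetes version, since the feature was still maturing at the time):

```yaml
# Sketch, not the merged implementation: opting the driver in to
# storage capacity tracking so the scheduler can filter out nodes
# whose volume group cannot fit the claim.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: local.csi.openebs.io
spec:
  storageCapacity: true
```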
In the WaitForFirstConsumer case, the pod is scheduled according to the kube-scheduler's own parameters and a node is selected, but LVM does not participate in Kubernetes scheduling. This means the node selected by LVM and the one selected by the Kubernetes scheduler may not be the same.