No volume created with binding mode "WaitForFirstConsumer" #400

Closed
dalbani opened this issue Nov 15, 2021 · 4 comments
dalbani commented Nov 15, 2021

Hello,

I wanted to deploy a multi-node workload with anti-affinity rules, so I assumed I needed to use a storage class with volumeBindingMode: WaitForFirstConsumer, right?
As mentioned on https://github.com/openebs/zfs-localpv#scheduler:

If you want to use node selector/affinity rules on the application pod, or have cpu/memory constraints, kubernetes scheduler should be used. To make use of kubernetes scheduler, you can set the volumeBindingMode as WaitForFirstConsumer in the storage class. This will cause a delayed binding, i.e kubernetes scheduler will schedule the application pod first and then it will ask the ZFS driver to create the PV. The driver will then create the PV on the node where the pod is scheduled.
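For reference, the storage class I created along those lines looks roughly like this (a sketch based on the README; the class name and the poolname parameter are placeholders for my setup):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
parameters:
  fstype: "zfs"
  poolname: "zfspv-pool"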

But if I do that, the 3 ReadWriteOnce PVCs stay in phase: Pending and no PVs are created.
And the 3 pods report: 0/3 nodes are available: 3 node(s) did not have enough free storage.

Environment: 3-node MicroK8s cluster (1.22) with ZFS LocalPV (1.9.8).
Workload: stateful set with requiredDuringSchedulingIgnoredDuringExecution anti-affinity rule on topologyKey: kubernetes.io/hostname.
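The anti-affinity part of the stateful set spec is essentially this (a sketch; the app label is a placeholder):

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: my-app
      topologyKey: kubernetes.io/hostname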

I'm pretty sure that I'm doing something wrong here, but I don't know what.
How could I debug what's going on, by the way?
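I imagine the first places to look are the PVC events and the CSI driver logs, e.g. (assuming the driver runs in kube-system under its default names):

kubectl describe pvc <pvc-name>
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl -n kube-system logs -l app=openebs-zfs-node -c openebs-zfs-plugin
kubectl -n kube-system logs openebs-zfs-controller-0 -c openebs-zfs-plugin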
Thanks for your help!


dalbani commented Nov 15, 2021

While searching for WaitForFirstConsumer in the bug list, I saw: #392.
My problem is thus probably related to capacity:

kubectl get -A CSIStorageCapacity -o custom-columns=NODE:nodeTopology.matchLabels.kubernetes\\.io/hostname,NAME:metadata.name,CAPACITY:capacity
NODE              NAME          CAPACITY
dedibox-one       csisc-jmxgm   0
dedibox-two       csisc-2zdqc   0
dedibox-three     csisc-mw62x   0

The underlying problem was apparently fixed last month by #393.
Is there a way I could try a non-released Docker image of openebs/zfs-driver?
In particular, I don't see any recent builds at https://hub.docker.com/r/openebs/zfs-driver/tags.


dalbani commented Nov 15, 2021

Well, using the ci tag of the openebs/zfs-driver image, the appropriate capacity is returned.
And the PVs get created as and where they should.
I'll do some more testing, but it looks like my issue is (already) fixed.
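For anyone else wanting to try this, one way to swap the tag in (the resource and container names below assume a default zfs-localpv install in kube-system):

kubectl -n kube-system set image statefulset/openebs-zfs-controller openebs-zfs-plugin=openebs/zfs-driver:ci
kubectl -n kube-system set image daemonset/openebs-zfs-node openebs-zfs-plugin=openebs/zfs-driver:ci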

@pawanpraka1 (Contributor)

@dalbani, would you like to mention your use case in our Adopters.md? See openebs/openebs#2719.

@kmova kmova added this to Waiting on User/Contributor in 3.1 Release Tracker Nov 23, 2021
@kmova kmova moved this from Waiting on User/Contributor to RC1 in 3.1 Release Tracker Nov 23, 2021
@kmova kmova moved this from RC1 to Release Items in 3.1 Release Tracker Nov 23, 2021
@pawanpraka1 (Contributor)

fixed via #393

3.1 Release Tracker automation moved this from Release Items to Done Jan 6, 2022