Volumes should not be provisioned on nodes, such as the master node, that are marked NoSchedule #47
Hi, we have the same problem.
@freym, thanks for trying out ZFS-LocalPV. Could you please explain more about your use case and why you want to use a different key? Right now we support only one topology key, `kubernetes.io/hostname`. We can avoid provisioning the PV on the master in the following two ways:

1. Restrict provisioning to specific worker nodes with `allowedTopologies` in the StorageClass:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
      - worker-1
      - worker-2
```
2. Use `volumeBindingMode: WaitForFirstConsumer` in the StorageClass, so the volume is provisioned only on the node where the consuming pod gets scheduled:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
```

You can use either of the above approaches to solve this issue.
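To make the second approach concrete, here is a minimal, hypothetical PVC and Pod sketch (the names, size, and image are illustrative, not from this thread). With `WaitForFirstConsumer`, the PV is provisioned only after the Pod has been scheduled, and since the scheduler honours the master's NoSchedule taint, the volume lands on a worker node:

```yaml
# Hypothetical PVC/Pod pair to exercise the WaitForFirstConsumer StorageClass.
# Names, size, and image are illustrative only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-zfspv-claim
spec:
  storageClassName: openebs-zfspv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /data
      name: demo-vol
  volumes:
  - name: demo-vol
    persistentVolumeClaim:
      claimName: demo-zfspv-claim
```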
Yes, both workarounds help to solve the problem: "Avoid provisioning of PV on the master". (The first solution is a bit cumbersome if you have a lot of hosts or the number changes constantly when you use a dynamic provisioner.) We have two other use cases:
(I also just want to say that this project is really great and really helps us in our work 👍)
@freym, sorry for the delay. There are probably two things we can do (after discussing it internally and with the CSI community):
@freym The PR to support custom topology has been merged. You can follow the steps mentioned here to add a custom topology key: https://github.com/openebs/zfs-localpv/blob/master/docs/faq.md#6-how-to-add-custom-topology-key. @w3aman We have fixed the issue (via #101) where we had to list all the worker nodes in the StorageClass to keep the ZFS-LocalPV driver from creating any PV on the master node. Now you don't need to mention the workers in the StorageClass or use WaitForFirstConsumer to solve this. Can you please try it out and see if that works?
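For illustration only, a StorageClass using a custom topology key could look roughly like the sketch below. The key `openebs.io/rack` and the `rack1`/`rack2` values are assumptions, not taken from this thread; the nodes must already carry that label and the driver must report it as a topology key, per the FAQ steps linked above:

```yaml
# Hypothetical StorageClass using a custom topology key. The key name
# "openebs.io/rack" and values "rack1"/"rack2" are illustrative; the nodes
# must already be labelled with this key and the driver must expose it
# (see the FAQ steps linked above).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv-custom
parameters:
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/rack
    values:
      - rack1
      - rack2
```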
Cool... I will try this out!
Thx. I will try this also. (I can't say when I'll get to it because of COVID-19.)
Thanks @freym. I have a favor to ask of you: could you sign the adopters file for us and detail your use case a bit? Here is the adopters file, where a lot of users have already signed: openebs/openebs#2719.
I tried this scenario with both the v0.6.x and CI images. Both times I kept the master node tainted with NoSchedule, and in the PVC spec I used Immediate as the volume binding mode. The CI image solved this issue: since the master had the NoSchedule taint, the volume was not provisioned there. With v0.6.x:
But with the CI images, volumes were provisioned on the worker nodes only.
Note: here, the zfs-scheduler does volume-count-based scheduling, so in a (1 master + 3 worker) cluster, provisioning 4 volumes would normally consider all 4 nodes one by one. But because of the taint, the master was not a suitable candidate, so the 4th volume went to node-1 again.
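For reference, the NoSchedule taint described in this test would appear on the master node roughly as in the sketch below (usually applied with `kubectl taint`). The taint key and node name shown are the conventional ones and are assumptions; the thread does not state which key was actually used:

```yaml
# Assumed taint on the master node for this test. The node name and taint key
# are illustrative; treat this as a sketch of the test setup, not the exact
# configuration used.
apiVersion: v1
kind: Node
metadata:
  name: master
spec:
  taints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
```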
The issue has been fixed and verified.