I have a CSPC configured with 5 pool instances, and a storage class defined with a `replicaCount` of 3. Each new PVC creates a new CStorVolumeConfig (CVC), and each new CVC seems to create a new PodDisruptionBudget covering 3 of the 5 pool instances.
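For reference, here is a minimal sketch of the setup. Names (nodes, block devices, pool cluster, storage class) are placeholders, and the CSPC is trimmed to the parts that matter here; my actual manifests differ only in those details.

```yaml
# Sketch of the CSPC: 5 pool instances, one per node (names are placeholders)
apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cstor-pool-cluster
  namespace: openebs
spec:
  pools:
    - nodeSelector:
        kubernetes.io/hostname: node-a
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: blockdevice-a
      poolConfig:
        dataRaidGroupType: stripe
    # ... four more pool specs, one each for node-b through node-e
---
# StorageClass asking for 3 volume replicas spread across the 5 pools
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cstor-csi
provisioner: cstor.csi.openebs.io
allowVolumeExpansion: true
parameters:
  cas-type: cstor
  cstorPoolCluster: cstor-pool-cluster
  replicaCount: "3"
```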
As a result, multiple pod disruption budgets get created covering the same pool pods. Say my nodes are A, B, C, D, and E: one volume claim creates a PDB covering A, B, C, and the next volume claim creates a CVC with a PDB covering A, C, E. Then I can't evict the pool pod on A, because the eviction fails with `This pod has more than one PodDisruptionBudget, which the eviction subresource does not support.`
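Roughly, the generated PDBs end up looking something like the pair below. The names, label key, and `minAvailable` value are my approximation of what the operator writes, not copied from my cluster; the point is only that the two selectors both match the pool pod on node A.

```yaml
# First volume's PDB: covers the pool instances on nodes A, B, C
# (names and label key are illustrative, not necessarily what the operator generates)
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: pdb-volume-1
  namespace: openebs
spec:
  minAvailable: 2
  selector:
    matchExpressions:
      - key: openebs.io/cstor-pool-instance
        operator: In
        values: [pool-a, pool-b, pool-c]
---
# Second volume's PDB: covers pool-a again, plus pool-c and pool-e,
# so the pool pod on node A is matched by two PDBs and eviction is refused
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: pdb-volume-2
  namespace: openebs
spec:
  minAvailable: 2
  selector:
    matchExpressions:
      - key: openebs.io/cstor-pool-instance
        operator: In
        values: [pool-a, pool-c, pool-e]
```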
I'm not entirely sure whether this is a problem with my own setup and just unexpected behaviour, in which case I'd like to know whether the disruption budgets can be recreated in a way that avoids the overlap, or whether it's an actual bug.
I'm not sure whether Kubernetes support for multiple PDBs is the answer either; that has been discussed in kubernetes/kubernetes#90253.