added nodeVolumeAttachLimit #160
Conversation
@mganter Thank you for your contribution.
Thank you @mganter for your contribution. Before I can start building your PR, a member of the organization must set the required label(s) {'reviewed/ok-to-test'}. Once started, you can check the build status in the PR checks section below.
Hi @mganter.
Thanks @mganter
I had a brief look. Can you elaborate a little on why the number of volumes per node needs to be limited? Should that not rather be configured for each Shoot cluster individually instead of globally via the CloudProfile?
The limit exists inside OpenStack. With this option you can tell the scheduler that it should not schedule more Cinder volumes to a specific node than the specified amount. If the scheduler doesn't know the limit, it will schedule pods to nodes where their volume requirements can't be satisfied, so those pods will remain stuck in the ContainerCreating state.
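For context: with CSI, the driver reports its maximum attachable volume count via NodeGetInfo, and kubelet publishes it on the node's CSINode object, which the scheduler's volume-limit check consults. A sketch with illustrative names and values (nothing here is taken from the PR):

```yaml
# Illustrative CSINode object; node name, nodeID and count are made up.
# The scheduler compares the number of attached Cinder volumes on the
# node against spec.drivers[].allocatable.count before placing a pod.
apiVersion: storage.k8s.io/v1
kind: CSINode
metadata:
  name: shoot-worker-z1-abc12        # hypothetical node name
spec:
  drivers:
  - name: cinder.csi.openstack.org
    nodeID: 0d5eb3c2-...             # OpenStack server ID (shortened)
    allocatable:
      count: 25                      # what nodeVolumeAttachLimit sets
```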
Thanks for the explanation. I played around with it a little and configured a limit of just two disks per node. If I try to schedule more pods with disks onto a node, the scheduler reports:
I1019 19:50:13.426597 1 factory.go:445] "Unable to schedule pod; no fit; waiting" pod="default/some-persistence-2" err="0/1 nodes are available: 1 node(s) exceed max volume count."
There is also an event indicating the same:
2m34s Warning FailedScheduling pod/some-persistence-2 0/1 nodes are available: 1 node(s) exceed max volume count.
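A minimal sketch of the kind of workload that trips the two-disk limit, assuming a StatefulSet with one PVC per replica (the actual manifest is not in this thread; names are reconstructed from the pod name above):

```yaml
# Sketch only: three replicas, each with its own PVC; with a limit of
# two disks per node, the third pod cannot be scheduled onto the node.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: some-persistence
spec:
  serviceName: some-persistence
  replicas: 3
  selector:
    matchLabels:
      app: some-persistence
  template:
    metadata:
      labels:
        app: some-persistence
    spec:
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```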
The PR looks very nice overall. The tests for the valuesprovider/controlplane controller need to be adapted; they fail because the new property nodeVolumeAttachLimit is missing here: https://github.com/gardener/gardener-extension-provider-openstack/blob/master/pkg/controller/controlplane/valuesprovider_test.go#L247-L262
We should also mention in the operator docs that this feature will only be available when using CSI.
Added docs. Updated test.
/reviewed ok-to-test
@mganter can you check the failing CI? Probably you need to run
@mganter The pull request was assigned to you under
Some system information: Do you know if there are any issues with that?
Now there is a conflict.
Hm, good question, can you run
The conflict is resolved now. (I didn't have the changes from an hour ago.)
Thanks @mganter!
How to categorize this PR?
/area robustness
/area storage
/kind enhancement
/priority normal
/platform openstack
What this PR does / why we need it:
The introduced flag tells the CSI driver that there is an upper limit for the number of volumes mounted on a node.
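For illustration, the setting could look roughly like this in the CloudProfile's providerConfig, as discussed above; the exact field placement and shape should be taken from the operator docs added in this PR, not from this sketch:

```yaml
# Sketch only: the field name follows the PR title, its placement in
# CloudProfileConfig is an assumption.
apiVersion: core.gardener.cloud/v1beta1
kind: CloudProfile
metadata:
  name: openstack
spec:
  providerConfig:
    apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
    kind: CloudProfileConfig
    nodeVolumeAttachLimit: 25   # upper bound handed to the Cinder CSI driver
```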
Release note: