Capacity tracking support #89
base: master
Conversation
pkg/lvm/lvm.go
provisionerPod, deleteFunc, err := createPod(
	ctx,
	"sh",
	[]string{"-c", fmt.Sprintf("pvdisplay %s %s %s", va.devicesPattern, "--units=B", "-C")},
IMHO it would be much better to execute pvs --reportformat json and parse the output into a Go struct instead of using regex.
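For reference, a minimal sketch of that suggestion, assuming pvs is invoked with --units=B --nosuffix so sizes come back as plain byte counts. The struct fields mirror the keys of the LVM JSON report; the package name and the freeBytes helper are illustrative, not identifiers from this PR:

package lvm // package name assumed from the pkg/lvm/lvm.go path

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strconv"
)

// pvsReport mirrors the JSON emitted by `pvs --reportformat json`.
// Only the report keys needed for capacity tracking are declared here.
type pvsReport struct {
	Report []struct {
		PV []struct {
			PVName string `json:"pv_name"`
			VGName string `json:"vg_name"`
			PVSize string `json:"pv_size"`
			PVFree string `json:"pv_free"`
		} `json:"pv"`
	} `json:"report"`
}

// freeBytes sums pv_free over all physical volumes matching the devices
// pattern. With --units=B and --nosuffix the sizes are plain integers,
// so no regex is needed.
func freeBytes(devicesPattern string) (uint64, error) {
	out, err := exec.Command("pvs", devicesPattern, "--units=B", "--nosuffix", "--reportformat", "json").Output()
	if err != nil {
		return 0, fmt.Errorf("pvs failed: %w", err)
	}
	var report pvsReport
	if err := json.Unmarshal(out, &report); err != nil {
		return 0, fmt.Errorf("parsing pvs output: %w", err)
	}
	var free uint64
	for _, r := range report.Report {
		for _, pv := range r.PV {
			n, err := strconv.ParseUint(pv.PVFree, 10, 64)
			if err != nil {
				return 0, fmt.Errorf("parsing pv_free %q: %w", pv.PVFree, err)
			}
			free += n
		}
	}
	return free, nil
}

In the driver itself the command runs via sh -c inside the per-node pod, so the JSON would be read from that pod's log output rather than a local exec; the parsing would be the same.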
Nice!
Thanks! Will get the comments actioned and add a test that asserts the storage capacity objects get created.
Got this updated and tested in-cluster. Unfortunately I couldn't add tests, as I realised it depends on the helm chart changes to enable capacity tracking.
Having tests for this would be very much appreciated.
@majst01 Since the tests install the helm chart, and enabling capacity tracking requires the helm chart changes as per this PR: how would you propose to add tests? One solution would be to disable capacity tracking in that PR by default, then enable it in the values file in this repo for testing.
Hey @jleeh, having the feature disabled by default and then enabling it for integration testing in this repository sounds good to me. I just allowed your PR in the helm repo to be built, so that there will also be a PR helm repo available that you can use for trying out your changes (
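On the testing question: once the chart flag exists, an integration test could simply poll for the CSIStorageCapacity objects that the external-provisioner publishes. A rough sketch with client-go, assuming Kubernetes 1.24+ where CSIStorageCapacity is available in storage.k8s.io/v1 (older clusters expose it under v1beta1); the package name, namespace, and storage class arguments are placeholders:

package e2e // package name is a placeholder

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForCapacityObjects polls until the external-provisioner has published
// at least one non-zero CSIStorageCapacity object for the given storage class.
func waitForCapacityObjects(ctx context.Context, client kubernetes.Interface, namespace, storageClassName string) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 2*time.Minute, true, func(ctx context.Context) (bool, error) {
		caps, err := client.StorageV1().CSIStorageCapacities(namespace).List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range caps.Items {
			if c.StorageClassName == storageClassName && c.Capacity != nil && !c.Capacity.IsZero() {
				return true, nil
			}
		}
		return false, nil
	})
}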
Branch force-pushed from a5241b2 to e0765c4.
Enable storage capacity tracking as per the CSI documentation:
https://kubernetes-csi.github.io/docs/storage-capacity-tracking.html
While using this CSI driver in-cluster, I hit issues where pods would be scheduled onto nodes that didn't have enough disk space remaining to satisfy the request; since the topology requirements stayed the same, the provisioner would get stuck in an endless error loop.
Since capacity requests aren't supported in the node server, this takes the same approach as provisioning: a pod is created on the specified node to check the remaining capacity based on the device pattern.
Tested in-cluster and is working.
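For context on how the pieces fit together: with capacity tracking enabled, the external-provisioner calls the controller's GetCapacity RPC per topology segment and publishes the results as CSIStorageCapacity objects. A rough sketch of such a handler following the approach described above; the controllerServer type, the queryFreeBytes helper, and the topology key are placeholders rather than this PR's actual identifiers:

package lvm // package name assumed from the pkg/lvm path

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// controllerServer is a placeholder for the driver's controller service type.
type controllerServer struct{}

// queryFreeBytes stands in for the pod-on-node check described above: it
// would create a short-lived pod on the given node, run pvs against the
// configured devices pattern, and return the free bytes it reports.
func (cs *controllerServer) queryFreeBytes(ctx context.Context, node string) (int64, error) {
	// pod creation and output parsing elided in this sketch
	return 0, nil
}

// GetCapacity reports the free bytes on the node selected by the request's
// accessible topology, so the external-provisioner can publish a
// CSIStorageCapacity object per node.
func (cs *controllerServer) GetCapacity(ctx context.Context, req *csi.GetCapacityRequest) (*csi.GetCapacityResponse, error) {
	node := ""
	if topo := req.GetAccessibleTopology(); topo != nil {
		// Placeholder topology key; the driver's real key may differ.
		node = topo.GetSegments()["topology.lvm.csi/hostname"]
	}
	free, err := cs.queryFreeBytes(ctx, node)
	if err != nil {
		return nil, status.Errorf(codes.Internal, "getting capacity for node %s: %v", node, err)
	}
	return &csi.GetCapacityResponse{AvailableCapacity: free}, nil
}

The scheduler then compares pending claims against the published CSIStorageCapacity objects, which is what avoids the endless retry loop described above.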