Delete the out-of-tree PV labeler controller #74615
Conversation
/sig storage
/area cloudprovider
/priority important-soon
Force-pushed from c7e843c to 2901def
/approve (cross my fingers about the CI jobs!)
/milestone v1.14
/assign @cheftako
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: andrewsykim, dims, liggitt

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment
/retest

Review the full test history for this PR. Silence the bot with an `/lgtm cancel` comment for consistent failures.
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
Removes the PersistentVolume labeler controller. This is a controller that runs as part of the CCM (i.e. meant for out-of-tree providers). With Initializer support removed for v1.14, this controller is now effectively useless because it would only check for PVs with initializers enabled.
Some justifications for this change:

- Using this controller required creating an `InitializerConfiguration` resource for PVs. For the few out-of-tree providers that have reported usage of this feature, we added corresponding volumes to the in-tree admission controller (vSphere: "Applies zone labels to newly created vsphere volumes" #72687 & Cinder: "Add Cinder to PersistentVolumeLabel Admission Controller" #73102).
- The controller assumes that every PV it sees should receive labels from `GetLabelsForVolume`. This is a wrong assumption (at least in comparison to the in-tree admission controller) and could result in a segfault.
- The long-term plan is to rely on `PV.NodeAffinity` rather than labels for zone topology scheduling; SIG Cloud Provider & SIG Storage will be working on a longer-term out-of-tree solution for this in v1.15.

Which issue(s) this PR fixes:
Part of kubernetes/cloud-provider#4
Special notes for your reviewer:
More discussions around the intentions of this PR in kubernetes/cloud-provider#4
SIG cloud provider mailing list thread: https://groups.google.com/forum/#!topic/kubernetes-sig-cloud-provider/jZUB-Qk4S-M
Does this PR introduce a user-facing change?: