Controller selectors should be immutable after creation #50808
Removing defaulting is a breaking API change? Can you link the issue where we removed it?
@smarterclayton The server-side defaulting was removed for
For that to work, you would also have to make the template labels immutable, which is another significant change (updating template labels is currently allowed).
I don't see a clear way to enforce immutability for the v1 API only, while staying compatible with the existing beta API.
@smarterclayton defaulting is removed only in apps/v1beta2, not in apps/v1beta1. As you say, that is a breaking change that we can't make to the existing group version.

@liggitt If we make the selector immutable, and require via validation (as we do now) that the labels match the provided selector for the object to be valid, we get the desired behavior. This is a less strict requirement than making the labels immutable.

Wrt #26202, it contains a long discussion about the problems regarding selector mutation and links to other relevant issues. @bgrant0607's and @smarterclayton's comments in this thread are still as true right now as they were at the time the issue was opened. Our controllers were not designed to handle selector changes gracefully, and whatever the user is attempting to achieve by mutating the selector they could (mostly) achieve by other means.

I'm not trying to say that we should not support mutable selectors in the future, but we don't support it gracefully now, and we don't have a design for the exact semantics of selector mutation (and they may not be uniform across all our controllers). If we make the selectors immutable now, the community has time to offer designs for how selector mutations should work (assuming this feature generates enough interest), and we can make them mutable later without breaking backward compatibility. If we advance to apps/v1 with mutable selectors, we have to support the current undefined semantics for the lifetime of that group version.

To be clear, we want to implement selector immutability in such a way that apps/v1beta1 is not affected, and only API versions greater than this have immutable selectors.
@smarterclayton issue for defaulting #50339
@liggitt @smarterclayton @Kargakis @mikedanese @mfojtik @tnozicka
First of all, I'd like to find a way to make this happen. I think it will be less error-prone for users. I think there are some patterns we will want to allow, such as allowing new

Note that we still intend to make validation versioned eventually: #8550

Also, if we don't want to break old API versions now, we'd still need to allow mutation via old API paths, which means that clients of the new API will potentially observe changes but not be able to make them. I think that's fine, because we should clearly document that observers should be able to deal with selector changes, which will be necessary if we plan to allow mutations again in the future.
A couple of thoughts. Because this is so important, and we can't really undo it, and we will be breaking existing users even if we say it's a new API version (because it's basically changing the existing objects), we should at least lay out all the arguments for and against together in one spot (this issue is fine), and we should be sure we are completely comfortable with the ramifications.
My primary concern is 2: breaking real users. If we break more users than we help, we need to have a good justification (which might be as simple as "selector mutation is catastrophic, this is merely workflow-inconvenient"), and we should document it here.
One more
Another approach, which we've used with other features, is that we could make arbitrary mutation optional, enabled by default in the old API version and disabled by default in the new one.
Spreading updates across multiple updates is not a precedent I want to set for the API.
Requiring delete-then-recreate in order to change labels on the produced child objects can be really disruptive:
Agree, that only slightly helps accidental cases, and encourages anti-patterns (update! update again!)
I'm going to partially take back something I just wrote: My main concern with mutability is that we don't make it easy to accidentally orphan pods. We already have validation that the selector actually matches the pod template, so orphaning can occur when pod template labels and the selector are changed at the same time.

We could validate that the new selector matches the old pod template labels as well as the new ones, and that if the pod template labels aren't being changed, all replicas are "fully labeled" at the current spec generation, which is already reported by the controller in status. Any change to labels would also reset that to zero. If the controller clobbered that, it would also clobber observedGeneration, which would cause the check above to fail on subsequent updates.

That would enable safe selector mutations, such as adding not clauses. It would also happen to allow 2-part updates, but we don't necessarily have to advertise that. Cascading deletion + recreate (which should be doable automatically with one of the apply brute-force flags) is easier for users without high availability needs.
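The check proposed above can be sketched in Go. This is a hypothetical illustration, not the apiserver's actual validation code; it assumes equality-based (matchLabels-style) selectors only, and the function and variable names are invented for the example:

```go
package main

import "fmt"

// matches reports whether an equality-based selector selects the given labels:
// every selector key/value pair must be present in the label map.
func matches(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

// validateSelectorUpdate models the proposed rule: the new selector must match
// the new template labels (existing validation) AND the old template labels,
// so a combined selector+label change cannot silently orphan existing pods.
func validateSelectorUpdate(newSelector, oldLabels, newLabels map[string]string) error {
	if !matches(newSelector, newLabels) {
		return fmt.Errorf("selector does not match template labels")
	}
	if !matches(newSelector, oldLabels) {
		return fmt.Errorf("selector does not match previous template labels; existing pods would be orphaned")
	}
	return nil
}

func main() {
	oldLabels := map[string]string{"app": "web", "tier": "frontend"}
	newLabels := map[string]string{"app": "web", "tier": "frontend", "track": "canary"}

	// Loosening the selector still matches pods created under the old labels.
	fmt.Println(validateSelectorUpdate(map[string]string{"app": "web"}, oldLabels, newLabels))

	// A selector keyed only on the newly added label would orphan old pods.
	fmt.Println(validateSelectorUpdate(map[string]string{"track": "canary"}, oldLabels, newLabels))
}
```

Note that this sketch covers only the selector-versus-labels half of the idea; the "fully labeled at current generation" status check would require reading controller status, which is the part objected to below.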
I'm not fond of the idea of gating spec changes on status, though. This violates my "K8s APIs don't go out to lunch" principle. |
How would this alone help with adding
In case it affects any opinions on this discussion, I think there is a way to get a real validation error (instead of silently dropping the update) and have it only affect the new
Yes, we would have to initialize the REST storage for the resource differently for apps/v1beta1 and apps/v1beta2.
I think what @enisoc is saying is that we should return an error from
@xiang90 I have advocated usage of the
Yes, we would be able to do that by having a storage that varies based on API version, which is the only way to vary validation today.
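As a rough illustration of varying validation by API version through separate storage strategies, here is a hypothetical, self-contained Go sketch. The types and names are invented for the example; the real apiserver wires this through its REST storage and strategy objects:

```go
package main

import (
	"fmt"
	"reflect"
)

// strategy models per-version update validation, mirroring the idea of
// initializing REST storage differently for apps/v1beta1 (mutable selector)
// and apps/v1beta2 (immutable selector).
type strategy struct {
	selectorImmutable bool
}

func (s strategy) validateUpdate(oldSelector, newSelector map[string]string) error {
	if s.selectorImmutable && !reflect.DeepEqual(oldSelector, newSelector) {
		return fmt.Errorf("spec.selector: field is immutable")
	}
	return nil
}

// strategyFor picks the behavior per group/version, as the storage layer would.
func strategyFor(groupVersion string) strategy {
	switch groupVersion {
	case "apps/v1beta1", "extensions/v1beta1":
		return strategy{selectorImmutable: false}
	default: // apps/v1beta2 and later
		return strategy{selectorImmutable: true}
	}
}

func main() {
	oldSel := map[string]string{"app": "web"}
	newSel := map[string]string{"app": "web", "track": "canary"}

	fmt.Println(strategyFor("apps/v1beta1").validateUpdate(oldSel, newSel)) // allowed
	fmt.Println(strategyFor("apps/v1beta2").validateUpdate(oldSel, newSel)) // rejected
}
```

This is also why clients of the new version can still observe selector changes made through the old version: the object is shared, only the validation path differs.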
@bgrant0607 Thanks, now I understand the use case better.
The extensions/v1beta1 and apps/v1beta1 group versions implement inconsistent semantics for selector mutation across the workloads API surface. Let's leave out batch/v1/Job (which implements something completely different as well).
For Deployments, we know that we don't handle mutations to the selector and labels gracefully (#24888, #45770), and we strongly caution against its use in the documentation.

For StatefulSet, the orphaning allowed by mutable selectors would likely lead to a condition where the controller deadlocks and leaves the processes of the distributed system it manages in an undefined state. Adopting the semantics of Deployment and DaemonSet would be equally risky, with graver consequences. StatefulSet already has an auto-pause-like feature (partition) for its RollingUpdates, and this allows it to support promotable canaries.

For DaemonSet, the primary use case is to create Pods on a set of selected Nodes. We handle node selector and node label changes gracefully, and you can use these features to canary a Daemon on a set of Nodes before rolling it out to the fleet. We might, in the future, consider promotable canaries via an

For the v1beta2 API surface, if our aim is consistency, we should probably start with a uniform default behavior for selector mutation for the entire surface. There are four broad categories of options available.
(1) is irrevocable. If we do this, we live with the consequences for the lifetime of the API surface. We know mutating selectors isn't the suggested way to do anything for Deployments, that it's error-prone, and that it's confusing to users. (3) or (4) would be less error-prone, but which, if either, should we implement? Is there a better feature set that we could ship (e.g., auto-pause for promotable canaries and/or orchestrated, non-orphaning selector mutation and Pod relabeling) that would eliminate the need for either? If we start with (2), we have nine months to a year before the complete removal of extensions/v1beta1 (assuming we can remove it that quickly) in which to assess feedback and demand signals with respect to the right feature set to add to the API surface. If promotable canaries are the primary use case we need to support, we should probably invest in auto-pause over selector mutation. If relabeling is what's required, we should invest in orchestrated, non-orphaning selector mutation and Pod relabeling and solve the problem in the general case. If some version of (3) or (4) is what users really need, we can implement that in a backward-compatible way.
If users can't consume the API due to selector immutability, then we need to promote features that enable their use cases. If we start with immutability, we can assess requirements and demand, and implement features to better meet the needs of users that currently depend on selector mutation. That may mean implementing restricted or opt-in mutability, or it may mean delivering an orthogonal feature set that meets their needs.
extensions/v1beta1 and apps/v1beta1, if deprecated immediately by apps/v1beta2, have a minimum of 3 release cycles prior to removal. It's unlikely that we can completely remove extensions/v1beta1 that quickly. That gives us at least nine months to ensure that, if we are able to promote a v1 API surface in that time, we have the correct features in place to support migration of the earliest adopters to that surface.
The downside to allowing non-orphaning mutation is that it's very difficult to tell, based on the workload specification, which Pods have labels that correspond to which selectors. In general, we don't have demand signals indicating whether opt-in mutability, or some version of restricted mutability, solves most of the use cases for users that are updating their selectors and labels. If we invest in the wrong feature, at a stability other than alpha, we're on the hook for supporting it, and it may not meet the needs of our users.
As @liggitt points out, delete-create is a disruptive approach, but we could support create-delete (blue/green) for users that are willing to spend some core hours for the additional availability during the update. In general, if we have strong demand for promotable canaries or orchestrated, non-orphaning selector mutation and Pod relabeling, we should implement it. If users need opt-in mutability or restricted mutability, we should implement one of those.
Concur, but blocking label updates to ReplicaSets or ControllerRevisions is almost certainly too restrictive.
We have options to fix this in a consistent way across the apps v1beta2 surface. By starting with immutable selectors, we are only reserving the right to defer implementation until we have stronger signals for the feature set that will provide the most value to users while ensuring the default behavior is
As @enisoc suggests, we can, and should, implement this without silently dropping the
I'm mostly convinced by the arguments above. I agree that getting a strong signal on mutability would be important, and we can only pick immutability once. We need to communicate this change very widely if we do, and make sure no one in production is surprised.
When performing a k8s upgrade from 1.18.1 to 1.19.13, the networking images will be upgraded as well. As part of this upgrade, the respective kubernetes spec templates which align with the new images will be used. An issue has been seen in which the rolling upgrade of the SR-IOV daemonsets fails because the latest templates specify 'name' as the matchLabel selector. However, the older spec uses 'app' and 'tier' as the match selector. I believe that as of apps/v1, the selector label is still immutable for controllers such as daemonsets that have already been deployed. Some discussion on the topic can be found here: kubernetes/kubernetes#50808

For now, we'll just carry forward the 1.18 match selector. It's possible this can be fixed in later k8s releases.

Story: 2008972
Closes-Bug: 1942351
Signed-off-by: Steven Webster <steven.webster@windriver.com>
Change-Id: Id0ca32038dc2897879786a17f9794515457cd837
…n Tutor version changes

Through the commonLabels directive in kustomization.yml, all resources get a label named "app.kubernetes.io/version", which is being set to the Tutor version at the time of initial deployment. When the user then subsequently progresses to a new Tutor version, Kubernetes attempts to update this label. But for Deployment, ReplicaSet, and DaemonSet resources, this is no longer allowed as of kubernetes/kubernetes#50808. This causes "tutor k8s start" (at the "kubectl apply --kustomize" step) to break with errors such as:

Deployment.apps "redis" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/instance":"openedx-JIONBLbtByCGUYgHgr4tDWu1", "app.kubernetes.io/managed-by":"tutor", "app.kubernetes.io/name":"redis", "app.kubernetes.io/part-of":"openedx", "app.kubernetes.io/version":"12.1.7"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

Simply removing the app.kubernetes.io/version label from kustomization.yml will permanently fix this issue for newly created Kubernetes deployments, which will "survive" any future Tutor version changes thereafter. However, *existing* production Open edX deployments will need to throw the affected Deployments away, and re-create them.

Also, add the Tutor version as a resource annotation instead, using the `commonAnnotations` directive.

See also:
kubernetes/client-go#508
https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/commonlabels/
https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/commonannotations/
In certain situations (see details below), the deployment to Kubernetes fails with:

> "The Deployment [DEPLOYMENT_OBJECT] is invalid: [...] `selector` does not match template `labels`".

This is caused by the K8S Deployment manifests missing an explicit `selector` value. This commit:

* adds explicit `selector` values for all Deployment objects.
* bumps the K8S API from the deprecated `extensions/v1beta1` version to the stable `apps/v1` version. This version made the `selector` property of the Deployment a required value, preventing any issues with missing selectors in the future.

This change is backwards compatible with existing deployments of the microservices demo app. I.e. you should be able to pull this change and run `skaffold run` against an existing deployment of the app without issues. This will not however resolve the issue for existing deployments. Selectors are immutable and will therefore retain their current defaulted value. You should run `skaffold delete` followed by `skaffold run` after having pulled this change to do a clean re-deployment of the app, which will resolve the issue.

**The nitty-gritty details**

In the `extensions/v1beta1` version of the K8S API (the version that was used by this project), the `selector` property of a Deployment object is optional and is defaulted to the labels used in the pod template. This can cause subtle issues leading to deployment failures. This project, where Deployment selectors were omitted, is a good example of what can go wrong with defaulted selectors. Consider this:

1. Run `skaffold run` to build locally with Docker and deploy. Since the Deployment specs don't have explicit selectors, they will be defaulted to the pod template labels.
And since skaffold adds additional labels to the pod template like `skaffold-builder` and `skaffold-deployer`, the end result will be a selector that looks like this:

> app=cartservice,cleanup=true,docker-api-version=1.39,skaffold-builder=local,skaffold-deployer=kubectl,skaffold-tag-policy=git-commit,tail=true

So far, so good.

2. Now run `skaffold run -p gcb --default-repo=your-gcr-repo` to build on Google Cloud Build instead of building locally. This will blow up when attempting to deploy to Kubernetes with an error similar to:

> The Deployment "cartservice" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"skaffold-builder":"google-cloud-build", "profiles"="gcb", "skaffold-deployer":"kubectl", "skaffold-tag-policy":"git-commit", "docker-api-version":"1.39", "tail":"true", "app":"cartservice", "cleanup":"true"}: `selector` does not match template `labels`

(and the same error for every other deployment object)

This is because the skaffold labels that were automatically added to the pod template have changed to include references to Google Cloud Build. That normally shouldn't be an issue. But without explicit Deployment selectors, this results in the defaulted selectors for our Deployment objects having also changed. Which means that the new version of our Deployment objects are now managing different sets of Pods. Which is thankfully caught by kubectl before the deployment happens (otherwise this would have resulted in orphaned pods).

In this commit, we explicitly set the `selector` value of all Deployment objects, which fixes this issue. We also bump the K8S API version to the stable `apps/v1`, which makes the `selector` property a required value and will avoid accidentally forgetting selectors in the future.

More details if you're curious:

* Why defaulted Deployment selectors cause problems: kubernetes/kubernetes#26202
* Why Deployment selectors should be (and were made) immutable: kubernetes/kubernetes#50808
We now recommend our users to have dedicated

Our users have existing Deployments with a single "application" label selector and later realize they need to add the "component" label. It would be nice to have a feature toggle to disable selector immutability. Then cluster operators may have a webhook to perform validation and allow mutability under certain conditions.
Background

As part of the greater controller GA effort, we have removed defaulting of `spec.selector` to `spec.template.labels` values in `apps/v1beta2`, as the defaulting operation is semantically broken. In this issue, we propose to make `spec.selector` immutable after API object creation.

Motivation
Immutability resolves some challenging issues that plague many teams for over a year e.g., #34292, and the current behavior for controllers with respect to selector mutations is undefined e.g., #26202.
In contrast, by making selectors immutable, validation ensures the selectors always match the template labels of created children. This provides risk-free, deterministic behavior to resolve the issue, and meets the general goal of declarative configuration.
We could lift the immutability in the future, after all controller types have well-defined behaviors (i.e., after GA). This provides the ability to address new use cases while maintaining API backward compatibility. However, if we do not make selectors immutable prior to GA, we will have to live with mutable selectors for the lifetime of the v1 API, and with the controllers' undefined behaviors, until reasonable semantics are decided and implemented.
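The two validation rules this proposal combines, i.e. the selector must match the template labels on create, and the selector must not change on update, can be sketched as follows. This is an illustrative model, not the actual Kubernetes validation code; the `workloadSpec` type and function names are invented:

```go
package main

import (
	"fmt"
	"reflect"
)

// workloadSpec is a stand-in for the relevant parts of a controller spec.
type workloadSpec struct {
	selector       map[string]string
	templateLabels map[string]string
}

// validateCreate enforces that the selector matches the pod template labels
// (every selector key/value pair must appear in the labels).
func validateCreate(s workloadSpec) error {
	for k, v := range s.selector {
		if s.templateLabels[k] != v {
			return fmt.Errorf("spec.selector does not match template labels")
		}
	}
	return nil
}

// validateUpdate additionally enforces selector immutability.
func validateUpdate(oldSpec, newSpec workloadSpec) error {
	if !reflect.DeepEqual(oldSpec.selector, newSpec.selector) {
		return fmt.Errorf("spec.selector: field is immutable")
	}
	return validateCreate(newSpec)
}

func main() {
	current := workloadSpec{
		selector:       map[string]string{"app": "web"},
		templateLabels: map[string]string{"app": "web", "track": "stable"},
	}
	updated := current
	updated.selector = map[string]string{"app": "web2"}

	fmt.Println(validateCreate(current))          // valid on create
	fmt.Println(validateUpdate(current, updated)) // rejected: selector changed
}
```

Together, the two rules make the selector-to-pods relationship fixed at creation time: template labels may still gain values, but only in ways the original selector continues to match.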
/kind bug
/sig apps
/sig api-machinery
@bgrant0607 @erictune @kow3ns @enisoc @janetkuo @foxish @liyinan926
@liggitt @smarterclayton @Kargakis @mikedanese