kubectl apply deployment -f doesn't accept label/selector changes #26202
To reproduce: First create a Deployment via `kubectl apply -f`:

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
        test: abcd
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```

Then remove any of the labels in the file (and optionally specify the selector), and apply it again (`$ kubectl apply -f docs/user-guide/nginx-deployment.yaml`):

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```

Error message:

```
proto: no encoder for TypeMeta unversioned.TypeMeta [GetProperties]
proto: tag has too few fields: "-"
proto: no coders for struct *reflect.rtype
proto: no encoder for sec int64 [GetProperties]
proto: no encoder for nsec int32 [GetProperties]
proto: no encoder for loc *time.Location [GetProperties]
proto: no encoder for Time time.Time [GetProperties]
proto: no encoder for InitContainers []v1.Container [GetProperties]
proto: no coders for intstr.Type
proto: no encoder for Type intstr.Type [GetProperties]
The Deployment "nginx-deployment" is invalid.
spec.template.metadata.labels: Invalid value: {"app":"nginx"}: `selector` does not match template `labels`
```

@kubernetes/kubectl @adohe |
Is there a bug already open for all that proto crap? I had not seen that. |
The problem is that the
|
That's because the |
@janetkuo I am considering adding the default |
We've been bitten by this for a service:
I just nuked the |
And before that error, there was the very same sequence of proto errors. |
Please note if you want to get defaulted stuff, it's super important to let the server do the defaulting. |
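For readers following along: one way to see what the server actually defaulted (an illustrative command, assuming the `nginx-deployment` from the repro above) is to read the live object back:

```
# Read the selector the API server stored, defaults included (illustrative):
$ kubectl get deployment nginx-deployment -o jsonpath='{.spec.selector}'
```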
@lavalamp yes, we should put this on the server side. |
@bgrant0607 @janetkuo @lavalamp I thought about this carefully; I think the root cause of both this issue and #24198 is |
Changing selectors has other problems, also. For instance, it's a map, and maps are merged by default. Also note that a default selector won't be changed automatically when pod template labels are changed. |
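To illustrate the map-merge point (a sketch, assuming a live selector of `app: nginx, test: abcd`): a patch that lists only `app: nginx` merges into the existing map rather than replacing it, so the stale key survives unless it is explicitly nulled:

```
# Fragment sent in a strategic merge patch (illustrative):
spec:
  selector:
    matchLabels:
      app: nginx     # merged with the live map; "test: abcd" survives
#     test: null     # a key is only removed if the patch nulls it explicitly
```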
I wrote a brief explanation here: #15894 (comment) |
Ran into this myself. In the meantime, is there a workaround that doesn't require blowing away |
Accidental orphaning is another problem: #24888 |
@metral What problem did you encounter? |
@lavalamp We capture the last-applied-configuration prior to defaulting, so it shouldn't matter whether the defaulting happens in the client or server, though obviously I'd prefer the latter. |
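For reference, the recorded pre-defaulting state can be inspected on a live object (illustrative, using the repro's Deployment):

```
$ kubectl apply view-last-applied deployment/nginx-deployment
```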
Controlled pods are identified by labels, not by name. In general, changing the labels and selector on a controller isn't expected to work. To do it properly requires a multi-step dance. In any case, controllers don't update running pods -- for the most part, not even Deployment does that. We should strongly recommend that users shouldn't do that, unless they use |
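For illustration only, one version of that multi-step dance (a sketch under the assumption that two Deployments can run side by side; the names are hypothetical):

```
# Sketch of a label/selector migration without editing the live selector:
$ kubectl apply -f web-v2.yaml                 # new Deployment with the new labels/selector
$ kubectl rollout status deployment/web-v2     # wait until the replacement is healthy
$ kubectl delete deployment web                # retire the old Deployment and its pods
```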
@bgrant0607 I'm hitting the same issue as @janetkuo described when I try changing a label. In my case it's a label I use for the revision/commit of the current build which gets altered in the deployment manifest upon a successful build so that I can apply the new deployment. Oddly enough, this same strategy is working on 2 other projects / deployments that I have in the system just fine, but for some reason on my 3rd one I'm hitting this issue. I'm sure it's something on my end that I haven't caught onto, but strange nevertheless that updating the rev & applying the new deployment works for some and not others |
@metral Specify a selector that omits the label you plan to change. |
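Concretely, that advice might look like this (an illustrative manifest, not from the thread; `rev` stands in for @metral's build/revision label):

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app        # stable key only; the volatile label is omitted
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app      # must satisfy the selector
        rev: abc123      # changes on every build; safe because it's not selected on
    spec:
      containers:
      - name: my-app
        image: example/my-app:abc123   # hypothetical image
```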
@bgrant0607 that makes sense - I'll give that a shot when I'm back at my computer. Just odd that I've never specified the selector in my working deployments, just the labels in the metadata, and changing the revs on those has never caused an issue until now with my new project's deployment |
I think the longer-term solution will be to move to a defaulting approach similar to what we use for Job: |
Why do we want this immutable? |
@ltupin because it's not safe to mutate a controller's selector. For example, deployment finds its children resources (replicasets) by label/selector. Changing a deployment's selector is dangerous. It could cause the deployment to be unable to find its existing children and therefore cause disruptions to your running workloads. We don't want to encourage that. |
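You can see the relationship the selector drives (illustrative, for a Deployment whose selector is `app=nginx`):

```
# The ReplicaSets a Deployment locates via its selector:
$ kubectl get replicasets -l app=nginx
```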
Still reproducible in apps/v1beta2 (`apiVersion: apps/v1beta2`). Or maybe I did not get @liggitt - what exactly should I do with the fields? |
Mutation of label selectors on deployments, replicasets, and daemonsets is not allowed in apps/v1beta2 and forward |
See #50808 for details |
In certain situations (see details below), the deployment to Kubernetes fails with:

> The Deployment [DEPLOYMENT_OBJECT] is invalid: [...] `selector` does not match template `labels`

This is caused by the K8S Deployment manifests missing an explicit `selector` value. This commit:

* adds explicit `selector` values for all Deployment objects.
* bumps the K8S API from the deprecated `extensions/v1beta1` version to the stable `apps/v1` version. This version made the `selector` property of the Deployment a required value, preventing any further issues with missing selectors in the future.

This change is backwards compatible with existing deployments of the microservices demo app. I.e. you should be able to pull this change and run `skaffold run` against an existing deployment of the app without issues. This will not, however, resolve the issue for existing deployments. Selectors are immutable and will therefore retain their current defaulted value, which means you'll still run into the error above. You should run `skaffold delete` followed by `skaffold run` after having pulled this change to do a clean re-deployment of the app, which will resolve the issue.

**The nitty-gritty details**

In the `extensions/v1beta1` version of the K8S API (the version that was used by this project), the `selector` property of a Deployment object is optional and is defaulted to the labels used in the pod template. This can cause subtle issues leading to deployment failures. This project, where Deployment selectors were omitted, is a good example of what can go wrong with defaulted selectors. Consider this:

1. Run `skaffold run` to build locally with Docker and deploy. Since the Deployment specs don't have explicit selectors, they will be defaulted to the pod template labels. And since skaffold adds additional labels to the pod template like `skaffold-builder` and `skaffold-deployer`, the end result will be a selector that looks like this:

   ```
   app=cartservice,cleanup=true,docker-api-version=1.39,skaffold-builder=local,skaffold-deployer=kubectl,skaffold-tag-policy=git-commit,tail=true
   ```

   So far, so good.

2. Now run `skaffold run -p gcb --default-repo=your-gcr-repo` to build on Google Cloud Build instead of building locally. This will blow up when attempting to deploy to Kubernetes with an error similar to:

   ```
   The Deployment "cartservice" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"skaffold-builder":"google-cloud-build", "profiles"="gcb", "skaffold-deployer":"kubectl", "skaffold-tag-policy":"git-commit", "docker-api-version":"1.39", "tail":"true", "app":"cartservice", "cleanup":"true"}: `selector` does not match template `labels`
   ```

   (and the same error for every other deployment object)

   This is because the skaffold labels that were automatically added to the pod template have changed to include references to Google Cloud Build. That normally shouldn't be an issue, but without explicit Deployment selectors, the defaulted selectors for our Deployment objects have also changed, which means that the new version of our Deployment objects are now managing different sets of Pods. This is thankfully caught by kubectl before the deployment happens (otherwise it would have resulted in orphaned pods).

In this commit, we explicitly set the `selector` value of all Deployment objects, which fixes this issue. We also bump the K8S API version to the stable `apps/v1`, which makes the `selector` property a required value and will avoid accidentally forgetting selectors in the future.

More details if you're curious:

* Why defaulted Deployment selectors cause problems: kubernetes/kubernetes#26202
* Why Deployment selectors should be (and were made) immutable: kubernetes/kubernetes#50808
All I'm seeing is k8s members saying this is "discouraged". What if I have to do this? In production? Do I have to allocate downtime to destroy the deployment and replace it with a new one? |
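One workaround sometimes used (a sketch, not an endorsement; test it outside production first): delete the Deployment without cascading so its pods keep running, then recreate it with the new selector. Whether the new Deployment adopts or orphans the old pods depends on the labels involved, so proceed carefully.

```
# Non-cascading delete keeps the ReplicaSets/Pods running (older kubectl
# versions spell this --cascade=false instead of --cascade=orphan):
$ kubectl delete deployment my-app --cascade=orphan
$ kubectl apply -f my-app-with-new-selector.yaml   # recreate with the new selector
```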
Faced this issue in kubeflow/kubeflow#4184. Please help. |
I encountered the same problem when rolling back from a chart with a manually added selector to a chart with no selector:

```
stderr: 'Error: Failed to recreate resource: DaemonSet.apps "internal-management-ingress" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"internal-management-ingress", "chart":"icp-management-ingress", "component":"internal-management-ingress", "heritage":"Tiller", "k8s-app":"internal-management-ingress", "release":"internal-management-ingress"}: `selector` does not match template `labels`'
```
|
Can you post the yaml file you are trying to apply? |
@abhinav054 This is the helm rollback process:

```
helm rollback --debug --tls --force internal-management-ingress 2
[debug] Created tunnel using local port: '36208'
[debug] SERVER: "127.0.0.1:36208"
[debug] Host="", Key="/root/.helm/key.pem", Cert="/root/.helm/cert.pem", CA="/root/.helm/ca.pem"
Error: failed to create resource: DaemonSet.apps "internal-management-ingress" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"internal-management-ingress", "chart":"icp-management-ingress", "component":"internal-management-ingress", "heritage":"Tiller", "k8s-app":"internal-management-ingress", "release":"internal-management-ingress"}: `selector` does not match template `labels`
```
|
Do you have any app with the given template label running? |
So far the answer seems to be a tentative yes? |
I got errors complaining about the selector and labels not matching even if I explicitly specified both. Showed @janetkuo.