Helm 3 - upgrade nginx - spec.clusterIP: Invalid value: "": field is immutable #6378
|
This is likely related to a recent change in Helm 3 where it now uses a three-way merge patch strategy similar to kubectl. See #6124 If you can provide steps on how to reproduce this, that would be wonderful. Thanks! |
|
Sure! I created an AKS cluster and a public IP in the MC_* resource group, and stored the IP address of that public IP in $ip. Then I basically ran the ./helm upgrade --install command from the issue description twice. This is similar to what is done in https://docs.microsoft.com/en-us/azure/aks/ingress-static-ip; the difference is that I run helm upgrade --install twice. The purpose is to have a single, unconditional command line in my CI/CD. Let me know if you need more detail to reproduce. |
|
Was that enough to reproduce? I can provide a bash script if it helps. |
|
Sorry, off at Helm Summit EU for the week so I haven't had time to respond yet. |
|
Ah... no worries. Enjoy the summit! |
|
I'm also experiencing this issue. The nginx-ingress chart installs fine on the first run; however, on an upgrade... |
|
I believe this is an issue with the nginx-ingress chart, not helm3. By default, the chart will always render clusterIP: "" in the Service spec unless you tell it to omit the field. Workaround: set omitClusterIP to true for both the controller and default-backend services. |
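A minimal values fragment for that workaround might look like the following (the omitClusterIP key names are taken from the stable/nginx-ingress chart options used later in this thread):

```yaml
# values.yaml fragment: ask the chart to omit clusterIP from both Services
# so Kubernetes keeps whatever IP it auto-assigned at install time.
controller:
  service:
    omitClusterIP: true
defaultBackend:
  service:
    omitClusterIP: true
```

Applied via helm upgrade --install ... -f values.yaml, or via the equivalent --set flags.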
|
I've tried doing it, but I'm still getting the same error:
helm upgrade --install ingx stable/nginx-ingress -f ingx-values.yaml
client.go:357: Cannot patch Service: "ingx-nginx-ingress-controller" (Service "ingx-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable)
client.go:376: Use --force to force recreation of the resource
client.go:357: Cannot patch Service: "ingx-nginx-ingress-default-backend" (Service "ingx-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable)
client.go:376: Use --force to force recreation of the resource
Error: UPGRADE FAILED: Service "ingx-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable && Service "ingx-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable

ingx-values.yaml:
rbac:
  create: true
controller:
  service:
    externalTrafficPolicy: Local
    omitClusterIP: true
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 100
    targetCPUUtilizationPercentage: "70"
    targetMemoryUtilizationPercentage: "70"
defaultBackend:
  service:
    omitClusterIP: true

As you can see below, the template output doesn't have clusterIP in it:
helm template ingx stable/nginx-ingress -f ingx-values.yaml
---
# Source: nginx-ingress/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress
---
# Source: nginx-ingress/templates/default-backend-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress-backend
---
# Source: nginx-ingress/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups:
      - extensions
      - "networking.k8s.io" # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - extensions
      - "networking.k8s.io" # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
---
# Source: nginx-ingress/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingx-nginx-ingress
subjects:
  - kind: ServiceAccount
    name: ingx-nginx-ingress
    namespace: default
---
# Source: nginx-ingress/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress
rules:
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups:
      - extensions
      - "networking.k8s.io" # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - "networking.k8s.io" # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader-nginx
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - create
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: nginx-ingress/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingx-nginx-ingress
subjects:
  - kind: ServiceAccount
    name: ingx-nginx-ingress
    namespace: default
---
# Source: nginx-ingress/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    component: "controller"
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress-controller
spec:
  externalTrafficPolicy: "Local"
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app: nginx-ingress
    component: "controller"
    release: ingx
  type: "LoadBalancer"
---
# Source: nginx-ingress/templates/default-backend-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    component: "default-backend"
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress-default-backend
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
  selector:
    app: nginx-ingress
    component: "default-backend"
    release: ingx
  type: "ClusterIP"
---
# Source: nginx-ingress/templates/controller-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    component: "controller"
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress-controller
spec:
  replicas: 1
  revisionHistoryLimit: 10
  strategy:
    {}
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app: nginx-ingress
        component: "controller"
        release: ingx
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: nginx-ingress-controller
          image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1"
          imagePullPolicy: "IfNotPresent"
          args:
            - /nginx-ingress-controller
            - --default-backend-service=default/ingx-nginx-ingress-default-backend
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
            - --configmap=default/ingx-nginx-ingress-controller
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 33
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          resources:
            {}
      hostNetwork: false
      serviceAccountName: ingx-nginx-ingress
      terminationGracePeriodSeconds: 60
---
# Source: nginx-ingress/templates/default-backend-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    component: "default-backend"
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress-default-backend
spec:
  replicas: 1
  revisionHistoryLimit: 10
  template:
    metadata:
      labels:
        app: nginx-ingress
        component: "default-backend"
        release: ingx
    spec:
      containers:
        - name: nginx-ingress-default-backend
          image: "k8s.gcr.io/defaultbackend-amd64:1.5"
          imagePullPolicy: "IfNotPresent"
          args:
          securityContext:
            runAsUser: 65534
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 0
            periodSeconds: 5
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          resources:
            {}
      serviceAccountName: ingx-nginx-ingress-backend
      terminationGracePeriodSeconds: 60
---
# Source: nginx-ingress/templates/controller-hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    component: "controller"
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress-controller
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: ingx-nginx-ingress-controller
  minReplicas: 2
  maxReplicas: 100
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 70
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: 70 |
|
I suspect that it happened because I deployed it originally without the omitClusterIP parameters, and helm v3 is trying to do a 3-way merge with the original manifest, which does have it:
helm get manifest ingx --revision 1 | grep "clusterIP"
clusterIP: ""
clusterIP: "" |
|
I was able to fix it by deleting the existing release first, and re-creating it with:
$ helm upgrade --install ingress-test stable/nginx-ingress --set controller.service.omitClusterIP=true --set defaultBackend.service.omitClusterIP=true
It would be great if there was a way in helm v3 to skip the merge with the existing manifest |
|
sorry, I should have specified that these values need to be set when the release is initially installed. Updating an existing release may prove trickier... |
|
I'm running into this problem with metric-server-2.8.8, which doesn't have any clusterIP in its values, and with some other charts, on helm v3.0.0-rc.2. Any advice? I'm not sure how to proceed. My issue seems to be with helmfile v0.95.0, so I'll pursue it there :) |
|
I have the same problem, even without setting the service type or clusterIP, with helm v3.0.0-rc.2, if I use the --force option with the helm upgrade --install command. Without --force it works fine. |
|
@johannges, I was just about to post the same. 👍 |
|
setting |
|
I think this is an issue with helm with |
|
Actually it was a mistake on my end, omitting the clusterIP definition on the initialisation of the service (or the chart) works fine 👍 |
|
I was encountering this error as well for existing deployed kafka and redis chart releases. Removing the empty clusterIP helped, but now I'm getting a new error from the redis release. Agreed with @bacongobbler that this looks related to the Helm v3 three-way merge patch strategy, which likely results in passing fields (even with the same values as before) to the update/patch that Kubernetes considers immutable/unchangeable after first creation. |
|
In case anyone ends up here using helm v3 via terraform: since you can't directly tell it not to use force... Edit: the whole error: ("nginx-ingress-singleton-controller" is the release name I set; it has no specific meaning.) |
|
@zen4ever nailed the issue in #6378 (comment). I'll try to explain it in more detail....

As others have pointed out, the issue arises when a chart defines a clusterIP with an empty string. When the Service is installed, Kubernetes populates this field with the clusterIP it assigned to the Service. When the release is later upgraded, the chart renders clusterIP: "" again. This happens because of the following behaviour:

When generating the three-way patch, Helm sees that the old state was clusterIP: "" and the new (desired) state is also clusterIP: "", but the live state in the cluster now contains the auto-assigned IP. Because the desired state differs from the live state, Helm generates a patch that tries to set clusterIP back to an empty string, which Kubernetes rejects: the field is immutable.

In Helm 2, it ignored the live state, so it saw no change (old state: clusterIP: "", new state: clusterIP: "") and generated no patch, which is why Helm 2 upgrades were unaffected.

My recommendation would be to change the template output. If no clusterIP is being provided as a value, then don't set the value to an empty string; omit the field entirely, so Kubernetes keeps the IP it assigned.

TL;DR: don't render clusterIP: "" unconditionally in your Service templates. Otherwise, Helm will try to change the service's clusterIP from an auto-generated IP address to the empty string, hence the error message.

Hope this helps! |
|
@bacongobbler I've got to this thread after checking #7956. As with all previous commenters: we don't have "clusterIP" in our templates at all, but the error is still present with the latest Helm if the --force flag is used. Helm version: 3.4.1. "helm -n kube-system get manifest CHART_NAME | grep clusterIP" shows no results. Error: |
|
The same explanation provided in #6378 (comment) also applies here in your case @nick4fake. The difference being that with |
|
@bacongobbler I am really sorry if I'm missing something here; maybe I simply don't know enough about Helm internals. "My recommendation would be to change the template output. If no clusterIP is being provided as a value, then don't set the value to an empty string... Omit the field entirely." So what is the solution? Does that mean that the "--force" flag can't be used at all if the clusterIP field is not set to some static value? |
|
As far as Kubernetes is concerned: yes. |
|
According to my understanding this is a problem with Kubernetes, because "forcefully overwriting" does not behave the same way as "deleting and recreating again". Is there any upstream bug? On the other hand, Helm is also misleading, because Anyway, Helm could implement a fallback workaround for this type of issue. If the current behavior (described as
k8s-topgun failed with `spec.clusterIP: Invalid value: "": field is immutable` We don't have to use `--force` for the failed test. See in detail: helm/helm#6378 (comment) Signed-off-by: Rui Yang <ryang@pivotal.io>
When clusterIP is provided as an empty string to Helm's three-way patch mechanism during an upgrade, Helm sees the Kubernetes auto-assigned value in the cluster but thinks the desired value is "" (empty string), as that's how it's been rendered by the chart. As clusterIP is an immutable value, Kubernetes rejects Helm's erroneous change and aborts the upgrade. See helm/helm#6378 (comment) for a more detailed explanation. Note that this does not affect Helm 2 as Helm 2's patching ignored the current state of the Kubernetes cluster (and therefore only saw a before value of empty string and a desired value of empty string, thus not generating any patch). Only Helm 3 is impacted due to Helm 3's three-way patching considering current cluster state as well. This commit comments out the empty-string default value from values.yaml and changes the Service template to only include clusterIP if it has been explicitly declared with a proper value. Otherwise, its omission enables Kubernetes to continue to auto-assign an IP address upon installation and avoids an erroneous patch from being generated upon upgrades.
When clusterIP is provided as an empty string to Helm's three-way patch mechanism during an upgrade, Helm sees the Kubernetes auto-assigned value in the cluster but thinks the desired value is "" (empty string), as that's how it's been rendered by the chart. As clusterIP is an immutable value, Kubernetes rejects Helm's erroneous change and aborts the upgrade. See helm/helm#6378 (comment) for a more detailed explanation. Note that this does not affect Helm 2 as Helm 2's patching ignored the current state of the Kubernetes cluster (and therefore only saw a before value of empty string and a desired value of empty string, thus not generating any patch). Only Helm 3 is impacted due to Helm 3's three-way patching considering current cluster state as well. This commit adds an if-block around the clusterIP definition in the Service template to ensure it is only included in the rendered output if it has been explicitly provided (and not just if it is set to empty string). Otherwise, its omission enables Kubernetes to continue to auto-assign an IP address upon installation and avoids an erroneous patch from being generated upon upgrades.
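The patch behaviour described in the commit messages above can be sketched with a toy model. This is illustrative only, not Helm's actual strategic-merge-patch code, and the IP address is a made-up example of what Kubernetes might auto-assign:

```python
# Toy model contrasting Helm 2's two-way patching with Helm 3's
# three-way patching for a Service whose chart renders clusterIP as "".

def two_way_patch(old: dict, new: dict) -> dict:
    """Helm 2 style: diff desired state against the previous manifest only."""
    return {k: v for k, v in new.items() if old.get(k) != v}

def three_way_patch(old: dict, live: dict, new: dict) -> dict:
    """Helm 3 style (simplified): the live cluster state is also consulted,
    so a field whose live value drifted from the desired value gets patched.
    (`old` would be used to detect deletions; that part is omitted here.)"""
    return {k: v for k, v in new.items() if live.get(k) != v}

old = {"clusterIP": ""}              # revision 1 manifest, as rendered
live = {"clusterIP": "10.0.23.180"}  # hypothetical auto-assigned IP in cluster
new = {"clusterIP": ""}              # upgrade manifest, rendered the same way

print(two_way_patch(old, new))          # -> {} : no patch, Helm 2 succeeds
print(three_way_patch(old, live, new))  # -> {'clusterIP': ''} : rejected, immutable
```

The empty patch is why Helm 2 never hit this error; the non-empty one is the exact write Kubernetes refuses.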
…ually ClusterIP Work around helm/helm#6378, specifically see helm/helm#6378 (comment)
|
Hey all, please be aware that the related issue in Kubernetes was accepted as a bug. FYI @bacongobbler |
|
Thanks for the heads up. |
|
It looks like a fix has been merged into Kubernetes core and will be available in 1.22. If anyone has time to test out this new functionality with a fresh build of Kubernetes, please feel free to share your findings here as well as in the upstream thread. |
|
We're on Kubernetes 1.25 and Helm 3.12 and still seeing this same issue. Is there anything that needs to be done on the Helm side to fix it?
I use the following to install / upgrade a chart:
./helm upgrade --install \
  --set rbac.create=false \
  --set controller.replicaCount=2 \
  --set controller.service.loadBalancerIP=$ip \
  --wait main-ingress stable/nginx-ingress
(Where $ip is an IP, e.g. 10.0.0.1)
That's done in a CI/CD pipeline, so the idea is to install the first time, upgrade next times.
It installs fine. At the second run, it outputs the following:
client.go:339: Cannot patch Service: "main-ingress-nginx-ingress-controller" (Service "main-ingress-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable)
client.go:358: Use --force to force recreation of the resource
client.go:339: Cannot patch Service: "main-ingress-nginx-ingress-default-backend" (Service "main-ingress-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable)
client.go:358: Use --force to force recreation of the resource
Error: UPGRADE FAILED: Service "main-ingress-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable && Service "main-ingress-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable
I also get this on helm list:
NAME          NAMESPACE  REVISION  UPDATED                                STATUS    CHART
main-ingress  default    1         2019-09-06 13:17:33.8463781 -0400 EDT  deployed  nginx-ingress-1.18.0
main-ingress  default    2         2019-09-06 13:21:11.6428945 -0400 EDT  failed    nginx-ingress-1.18.0
So, the release has failed.
I didn't have that problem with Helm 2. Is it due to a change of behaviour in helm 3 or is it a bug? If it's the former, how could I change the command not to have that problem?
Output of helm version:
version.BuildInfo{Version:"v3.0.0-beta.2", GitCommit:"26c7338408f8db593f93cd7c963ad56f67f662d4", GitTreeState:"clean", GoVersion:"go1.12.9"}
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.10", GitCommit:"37d169313237cb4ceb2cc4bef300f2ae3053c1a2", GitTreeState:"clean", BuildDate:"2019-08-19T10:44:49Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}
Cloud Provider/Platform (AKS, GKE, Minikube etc.): AKS