
Helm 3 - upgrade nginx - spec.clusterIP: Invalid value: "": field is immutable #6378

Closed
vplauzon opened this issue Sep 6, 2019 · 71 comments

Labels
v3.x Issues and Pull Requests related to the major version v3

@vplauzon

vplauzon commented Sep 6, 2019

I use the following to install / upgrade a chart:

./helm upgrade --install \
  --set rbac.create=false \
  --set controller.replicaCount=2 \
  --set controller.service.loadBalancerIP=$ip \
  --wait main-ingress stable/nginx-ingress

(Where $ip is an IP, e.g. 10.0.0.1)

That's done in a CI/CD pipeline, so the idea is to install the first time and upgrade on subsequent runs.

It installs fine. On the second run, it outputs the following:

client.go:339: Cannot patch Service: "main-ingress-nginx-ingress-controller" (Service "main-ingress-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable)
client.go:358: Use --force to force recreation of the resource
client.go:339: Cannot patch Service: "main-ingress-nginx-ingress-default-backend" (Service "main-ingress-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable)
client.go:358: Use --force to force recreation of the resource
Error: UPGRADE FAILED: Service "main-ingress-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable && Service "main-ingress-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable

I also get this on helm list:

NAME          NAMESPACE  REVISION  UPDATED                                  STATUS    CHART
main-ingress  default    1         2019-09-06 13:17:33.8463781 -0400 EDT    deployed  nginx-ingress-1.18.0
main-ingress  default    2         2019-09-06 13:21:11.6428945 -0400 EDT    failed    nginx-ingress-1.18.0

So, the release has failed.

I didn't have that problem with Helm 2. Is it due to a change of behaviour in Helm 3, or is it a bug? If it's the former, how can I change the command to avoid this problem?

Output of helm version: version.BuildInfo{Version:"v3.0.0-beta.2", GitCommit:"26c7338408f8db593f93cd7c963ad56f67f662d4", GitTreeState:"clean", GoVersion:"go1.12.9"}

Output of kubectl version: Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.10", GitCommit:"37d169313237cb4ceb2cc4bef300f2ae3053c1a2", GitTreeState:"clean", BuildDate:"2019-08-19T10:44:49Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.): AKS

@bacongobbler bacongobbler added the v3.x Issues and Pull Requests related to the major version v3 label Sep 6, 2019
@bacongobbler
Member

This is likely related to a recent change in Helm 3 where it now uses a three-way merge patch strategy similar to kubectl. See #6124

If you can provide steps on how to reproduce this, that would be wonderful. Thanks!

@bacongobbler bacongobbler added this to the 3.0.0-beta.4 milestone Sep 6, 2019
@vplauzon
Author

vplauzon commented Sep 6, 2019

Sure!

I created an AKS cluster.

I created a public IP in the MC_* resource group.

I stored the IP address of that public IP in $ip.

Then I basically ran this command twice:

./helm upgrade --install \
  --set rbac.create=false \
  --set controller.replicaCount=2 \
  --set controller.service.loadBalancerIP=$ip \
  --wait main-ingress stable/nginx-ingress

This is similar to what is done in https://docs.microsoft.com/en-us/azure/aks/ingress-static-ip.

The difference is that I run helm upgrade --install twice. The purpose is to have a single, unconditional command line in my CI/CD.

Let me know if you need more detail to reproduce.

@vplauzon
Author

Was that enough to reproduce? I can provide a bash script if it helps.

@bacongobbler
Member

Sorry, I'm off at Helm Summit EU for the week, so I haven't had time to respond yet.

@vplauzon
Author

Ah... no worries. Enjoy the summit!

@bambash

bambash commented Sep 17, 2019

I'm also experiencing this issue

$ helm version --short
v3.0.0-beta.3+g5cb923e

The nginx-ingress chart installs fine on the first run; however, on an upgrade...

$ helm upgrade --install first-chart stable/nginx-ingress --namespace infra
client.go:357: Cannot patch Service: "first-chart-nginx-ingress-controller" (Service "first-chart-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable)
client.go:376: Use --force to force recreation of the resource
client.go:357: Cannot patch Service: "first-chart-nginx-ingress-default-backend" (Service "first-chart-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable)
client.go:376: Use --force to force recreation of the resource
Error: UPGRADE FAILED: Service "first-chart-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable && Service "first-chart-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable
$ helm ls -n infra
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART               
first-chart     infra           1               2019-09-17 16:15:25.513997106 -0500 CDT deployed        nginx-ingress-1.20.0
first-chart     infra           2               2019-09-17 16:15:30.845249671 -0500 CDT failed          nginx-ingress-1.20.0

@bambash

bambash commented Sep 18, 2019

I believe this is an issue with the nginx-ingress chart, not Helm 3. By default, the chart will always pass controller.service.clusterIP = "" and defaultBackend.service.clusterIP = "" unless you set controller.service.omitClusterIP=true and defaultBackend.service.omitClusterIP=true.

link to sources:
https://github.com/helm/charts/blob/master/stable/nginx-ingress/values.yaml#L321
https://github.com/helm/charts/blob/master/stable/nginx-ingress/templates/controller-service.yaml#L22

workaround:

$ helm upgrade --install ingress-test stable/nginx-ingress --set controller.service.omitClusterIP=true --set defaultBackend.service.omitClusterIP=true
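
For reference, the template block behind this (from the controller-service.yaml source linked above; the default-backend service uses the same pattern) has this shape:

spec:
{{- if not .Values.controller.service.omitClusterIP }}
  clusterIP: "{{ .Values.controller.service.clusterIP }}"
{{- end }}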

@zen4ever

I've tried doing that, but I'm still getting the same error:

helm upgrade --install ingx stable/nginx-ingress -f ingx-values.yaml
client.go:357: Cannot patch Service: "ingx-nginx-ingress-controller" (Service "ingx-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable)
client.go:376: Use --force to force recreation of the resource
client.go:357: Cannot patch Service: "ingx-nginx-ingress-default-backend" (Service "ingx-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable)
client.go:376: Use --force to force recreation of the resource
Error: UPGRADE FAILED: Service "ingx-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable && Service "ingx-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable

ingx-values.yaml

rbac:
  create: true
controller:
  service:
    externalTrafficPolicy: Local
    omitClusterIP: true
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 100
    targetCPUUtilizationPercentage: "70"
    targetMemoryUtilizationPercentage: "70"
defaultBackend:
  service:
    omitClusterIP: true

As you can see below, the template doesn't have clusterIP in it:

helm template ingx stable/nginx-ingress -f ingx-values.yaml

---
# Source: nginx-ingress/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress
---
# Source: nginx-ingress/templates/default-backend-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress-backend
---
# Source: nginx-ingress/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups:
      - extensions
      - "networking.k8s.io" # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - extensions
      - "networking.k8s.io" # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
---
# Source: nginx-ingress/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingx-nginx-ingress
subjects:
  - kind: ServiceAccount
    name: ingx-nginx-ingress
    namespace: default
---
# Source: nginx-ingress/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress
rules:
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups:
      - extensions
      - "networking.k8s.io" # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - "networking.k8s.io" # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader-nginx
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - create
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: nginx-ingress/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingx-nginx-ingress
subjects:
  - kind: ServiceAccount
    name: ingx-nginx-ingress
    namespace: default
---
# Source: nginx-ingress/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    component: "controller"
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress-controller
spec:
  externalTrafficPolicy: "Local"
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app: nginx-ingress
    component: "controller"
    release: ingx
  type: "LoadBalancer"
---
# Source: nginx-ingress/templates/default-backend-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    component: "default-backend"
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress-default-backend
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
  selector:
    app: nginx-ingress
    component: "default-backend"
    release: ingx
  type: "ClusterIP"
---
# Source: nginx-ingress/templates/controller-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    component: "controller"
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress-controller
spec:
  replicas: 1
  revisionHistoryLimit: 10
  strategy:
    {}
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app: nginx-ingress
        component: "controller"
        release: ingx
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: nginx-ingress-controller
          image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1"
          imagePullPolicy: "IfNotPresent"
          args:
            - /nginx-ingress-controller
            - --default-backend-service=default/ingx-nginx-ingress-default-backend
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
            - --configmap=default/ingx-nginx-ingress-controller
          securityContext:
            capabilities:
                drop:
                - ALL
                add:
                - NET_BIND_SERVICE
            runAsUser: 33
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          resources:
            {}
      hostNetwork: false
      serviceAccountName: ingx-nginx-ingress
      terminationGracePeriodSeconds: 60
---
# Source: nginx-ingress/templates/default-backend-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    component: "default-backend"
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress-default-backend
spec:
  replicas: 1
  revisionHistoryLimit: 10
  template:
    metadata:
      labels:
        app: nginx-ingress
        component: "default-backend"
        release: ingx
    spec:
      containers:
        - name: nginx-ingress-default-backend
          image: "k8s.gcr.io/defaultbackend-amd64:1.5"
          imagePullPolicy: "IfNotPresent"
          args:
          securityContext:
            runAsUser: 65534
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 0
            periodSeconds: 5
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          resources:
            {}
      serviceAccountName: ingx-nginx-ingress-backend
      terminationGracePeriodSeconds: 60
---
# Source: nginx-ingress/templates/controller-hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    component: "controller"
    heritage: Helm
    release: ingx
  name: ingx-nginx-ingress-controller
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: ingx-nginx-ingress-controller
  minReplicas: 2
  maxReplicas: 100
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 70
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: 70

@zen4ever

zen4ever commented Sep 18, 2019

I suspect that it happened because I originally deployed it without the omitClusterIP parameters, and Helm v3 is trying to do a three-way merge with the original manifest, which does have clusterIP: "" in it:

helm get manifest ingx --revision 1 | grep "clusterIP"
  clusterIP: ""
  clusterIP: ""

@zen4ever

I was able to fix it by deleting the existing release first and re-creating it with the omitClusterIP options. Bottom line: the workaround suggested by @bambash works only if you install the chart with those options set to true from the start:

$ helm upgrade --install ingress-test stable/nginx-ingress --set controller.service.omitClusterIP=true --set defaultBackend.service.omitClusterIP=true

It would be great if there were a way in Helm v3 to skip the merge with the existing manifest.
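
For anyone in the same spot, a minimal sketch of the delete-and-recreate sequence described above, assuming the release is named ingress-test in the default namespace:

# Remove the failed release entirely; this deletes the Services, so the
# auto-assigned clusterIPs go away with them.
helm uninstall ingress-test

# Re-create the release with clusterIP omitted from revision 1 onward,
# so later three-way merges never see clusterIP: "".
helm upgrade --install ingress-test stable/nginx-ingress \
  --set controller.service.omitClusterIP=true \
  --set defaultBackend.service.omitClusterIP=true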

@bambash

bambash commented Sep 18, 2019

Sorry, I should have specified that these values need to be set when the release is initially installed. Updating an existing release may prove trickier...

@yurrriq
Contributor

yurrriq commented Nov 5, 2019

I'm running into this problem with metric-server-2.8.8, which doesn't have any clusterIP in its values, and with some other charts, on Helm v3.0.0-rc.2. Any advice? I'm not sure how to proceed.

My issue seems to be with helmfile v0.95.0. I'll pursue it there :)

@johannges
Copy link

I have the same Problem, even without setting the service type or clusterIP with helm v3.0.0-rc.2 if i use the --force option with the helm update --install command. Without the --force it works fine

@yurrriq
Contributor

yurrriq commented Nov 6, 2019

@johannges, I was just about to post the same. 👍

@mvisonneau

Setting omitClusterIP: true seems to work for the defaultBackend and controller services, but not for the metrics one.

@nathanaelytj

I think this is an issue with Helm's --force option during upgrade.
Helm tries to recreate the service, but it also replaces spec.clusterIP, so it throws an error.
I can confirm this using my own custom chart:
Error: UPGRADE FAILED: failed to replace object: Service "litespeed" is invalid: spec.clusterIP: Invalid value: "": field is immutable

@mvisonneau

Actually, it was a mistake on my end; omitting the clusterIP definition when the service (or the chart) is first created works fine 👍

@iAnomaly

I was encountering this error as well for existing deployed kafka and redis chart releases. Removing --force did indeed resolve this.

Now I'm getting a new error from the redis release:
Error: UPGRADE FAILED: release redis failed, and has been rolled back due to atomic being set: cannot patch "redis-master" with kind StatefulSet: StatefulSet.apps "redis-master" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden

Agreed with @bacongobbler that this looks related to the Helm v3 three-way merge patch strategy, which is likely passing fields that Kubernetes considers immutable after first creation into the update/patch (even with the same values as before).

@chris-brace

chris-brace commented Nov 21, 2019

In case anyone ends up here using Helm v3 via Terraform: since you can't directly tell it not to use --force, I had success manually deleting the release with helm delete and then re-running Terraform. This sucks, but it does work.

Edit: the whole error ("nginx-ingress-singleton-controller" is the release name I set; it has no specific meaning):

Error: cannot patch "nginx-ingress-singleton-controller" with kind Service: Service "nginx-ingress-singleton-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable && cannot patch "nginx-ingress-singleton-default-backend" with kind Service: Service "nginx-ingress-singleton-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable

  on .terraform/modules/app_dev/nginx-ingress.tf line 1, in resource "helm_release" "nginx_ingress":
   1: resource "helm_release" "nginx_ingress" {
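
Spelled out, the workaround is just the following (using the release name from the error above; add -n <namespace> if the release is not in the default namespace):

# Delete the release managed by Terraform, then let Terraform recreate it.
helm delete nginx-ingress-singleton-controller
terraform apply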

@bacongobbler
Member

bacongobbler commented Nov 23, 2019

@zen4ever nailed the issue in #6378 (comment). I'll try to explain it in more detail.

As others have pointed out, the issue arises when a chart defines a clusterIP with an empty string. When the Service is installed, Kubernetes populates this field with the clusterIP it assigned to the Service.

When helm upgrade is invoked, the chart asks for the clusterIP to be removed, hence the error message spec.clusterIP: Invalid value: "": field is immutable.

This happens because of the following behaviour:

  1. On install, the chart specified it wanted the clusterIP to be an empty string
  2. Kubernetes auto-assigned the Service a clusterIP. We'll use 172.17.0.1 for this example
  3. On helm upgrade, the chart wants the clusterIP to be an empty string (or in @zen4ever's case above, it is omitted)

When generating the three-way patch, it sees that the old state was "", live state is currently at "172.17.0.1", and proposed state is "". Helm detected that the user requested to change the clusterIP from "172.17.0.1" to "", so it supplied a patch.
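
To make the three inputs concrete, here is a schematic of what the patch generator compares in this scenario (172.17.0.1 is the example address from step 2):

# Old state: the manifest stored with the previous release
spec:
  clusterIP: ""

# Live state: what Kubernetes actually holds after auto-assignment
spec:
  clusterIP: 172.17.0.1

# Proposed state: the newly rendered chart
spec:
  clusterIP: ""

# Live differs from proposed, so Helm generates a patch that sets
# clusterIP back to "", and the API server rejects it because the
# field is immutable.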

In Helm 2, it ignored the live state, so it saw no change (old state: clusterIP: "" to new state: clusterIP: ""), and no patch was generated, bypassing this behaviour.

My recommendation would be to change the template output. If no clusterIP is being provided as a value, then don't set the value to an empty string... Omit the field entirely.

e.g. in the case of stable/nginx-ingress:

spec:
{{- if not .Values.controller.service.omitClusterIP }}
  clusterIP: "{{ .Values.controller.service.clusterIP }}"
{{- end }}

Should be changed to:

spec:
{{- if not .Values.controller.service.omitClusterIP }}
  {{ with .Values.controller.service.clusterIP }}clusterIP: {{ quote . }}{{ end }}
{{- end }}

This is also why --set controller.service.omitClusterIP=true works in this case.

TL;DR don't do this in your Service templates:

clusterIP: ""

Otherwise, Helm will try to change the service's clusterIP from an auto-generated IP address to the empty string, hence the error message.
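
To check whether an already-deployed release carries the problematic empty string, you can inspect the stored manifest the same way @zen4ever did above (the release name here is a placeholder):

helm get manifest my-release --revision 1 | grep clusterIP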

Hope this helps!

@nick4fake

nick4fake commented Nov 28, 2020

@bacongobbler I got to this thread after checking #7956.

As other commenters have said: we don't have clusterIP in our templates at all, but the error is still present with the latest Helm if the --force flag is used.

Helm version: 3.4.1

"helm -n kube-system get manifest CHART_NAME | grep clusterIP" shows no results.

Error:

field is immutable && failed to replace object: Service "SERVICE_NAME" is invalid: spec.clusterIP: Invalid value: "": field is immutable

@bacongobbler
Member

bacongobbler commented Nov 28, 2020

The same explanation provided in #6378 (comment) also applies in your case, @nick4fake. The difference is that with --force, you are asking Kubernetes to take your fully rendered manifest and forcefully overwrite the current live object. Since your manifest does not contain a clusterIP field, Kubernetes assumes you are trying to remove the clusterIP field from the live object, hence the error Invalid value: "": field is immutable.
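
In other words (the release and chart names below are placeholders, not taken from this thread):

# With --force, Helm uses replace semantics: the rendered manifest
# overwrites the live object wholesale, so a missing clusterIP reads as
# "remove the clusterIP", which Kubernetes rejects.
helm upgrade my-release ./my-chart --force

# Without --force, Helm computes a three-way patch; a field absent from
# both the old and the proposed manifests is left untouched on the live
# object, so the auto-assigned clusterIP survives.
helm upgrade my-release ./my-chart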

@nick4fake

@bacongobbler I am really sorry if I'm missing something here; maybe I simply don't know enough about Helm internals.

"My recommendation would be to change the template output. If no clusterIP is being provided as a value, then don't set the value to an empty string... Omit the field entirely."

So what is the solution? Does that mean the --force flag can't be used at all if the clusterIP field is not set to some static value?

@bacongobbler
Member

As far as Kubernetes is concerned: yes.

@gw0

gw0 commented Nov 30, 2020

According to my understanding, this is a problem with Kubernetes, because "forcefully overwriting" does not behave the same way as "deleting and recreating". Is there an upstream bug?

On the other hand, Helm is also misleading, because --force is described as "force resource updates through a replacement strategy". In reality it does not do any replacement; it just attempts to forcefully overwrite resources (it would be better to name the flag --force-overwrite). A forceful replacement would look like deleting and recreating the resource (there could be a flag --force-recreate). Of course, --force-recreate could be a bit dangerous for some resources, but it would always succeed.

Anyway, Helm could implement a fallback workaround for this type of issue: if the current behavior (described as --force-overwrite) fails with an immutable-field error, it should delete and recreate the resource (as --force-recreate would).

xtremerui pushed a commit to concourse/concourse that referenced this issue Jan 19, 2021
k8s-topgun failed with `spec.clusterIP: Invalid value: "": field is immutable`

We don't have to use `--force` for the failed test. See in detail:

helm/helm#6378 (comment)

Signed-off-by: Rui Yang <ryang@pivotal.io>
rmmorrison added a commit to rmmorrison/mockserver that referenced this issue Apr 1, 2021
When clusterIP is provided as an empty string to Helm's three-way patch mechanism during an upgrade, Helm sees the Kubernetes auto-assigned value in the cluster but thinks the desired value is "" (empty string), as that's how it's been rendered by the chart. As clusterIP is an immutable value, Kubernetes rejects Helm's erroneous change and aborts the upgrade.

See helm/helm#6378 (comment) for a more detailed explanation. Note that this does not affect Helm 2 as Helm 2's patching ignored the current state of the Kubernetes cluster (and therefore only saw a before value of empty string and a desired value of empty string, thus not generating any patch). Only Helm 3 is impacted due to Helm 3's three-way patching considering current cluster state as well.

This commit comments out the empty-string default value from values.yaml and changes the Service template to only include clusterIP if it has been explicitly declared with a proper value. Otherwise, its omission enables Kubernetes to continue to auto-assign an IP address upon installation and avoids an erroneous patch from being generated upon upgrades.
rmmorrison added a commit to rmmorrison/mockserver that referenced this issue Apr 1, 2021
When clusterIP is provided as an empty string to Helm's three-way patch mechanism during an upgrade, Helm sees the Kubernetes auto-assigned value in the cluster but thinks the desired value is "" (empty string), as that's how it's been rendered by the chart. As clusterIP is an immutable value, Kubernetes rejects Helm's erroneous change and aborts the upgrade.

See helm/helm#6378 (comment) for a more detailed explanation. Note that this does not affect Helm 2 as Helm 2's patching ignored the current state of the Kubernetes cluster (and therefore only saw a before value of empty string and a desired value of empty string, thus not generating any patch). Only Helm 3 is impacted due to Helm 3's three-way patching considering current cluster state as well.

This commit adds an if-block around the clusterIP definition in the Service template to ensure it is only included in the rendered output if it has been explicitly provided (and not just if it is set to empty string). Otherwise, its omission enables Kubernetes to continue to auto-assign an IP address upon installation and avoids an erroneous patch from being generated upon upgrades.
vs49688 added a commit to Australian-Imaging-Service/charts that referenced this issue Apr 21, 2021
…ually ClusterIP

Work around helm/helm#6378, specifically see
helm/helm#6378 (comment)

Signed-off-by: Zane van Iperen <z.vaniperen@uq.edu.au>
@sjmiller609

Hey all, please be aware that the related issue in Kubernetes was accepted as a bug:

FYI @bacongobbler

kubernetes/kubernetes#91459

@bacongobbler
Member

Thanks for the heads up.

@bacongobbler
Member

It looks like a fix has been merged into Kubernetes core and will be available in 1.22. If anyone has time to test out this new functionality with a fresh build of Kubernetes, please feel free to share your findings here as well as in the upstream thread.

@kustodian

We're on 1.25 and Helm 3.12 and still seeing this same issue. Is there anything that needs to be done on the Helm side to fix this?
