
Helm and Kustomize Helm have different results #4905

Closed
shaardie opened this issue Dec 3, 2022 · 1 comment · Fixed by #5044 or #5342
Labels: area/helm, kind/bug, triage/accepted

Comments


shaardie commented Dec 3, 2022

What happened?

Rendering the current Istio gateway Helm chart directly with Helm and via Kustomize produces different output, which is quite confusing.

A short reproduction with commands, since I guess this is easier to understand:

❯ helm repo list | grep istio                                                                                                                 
istio                   https://istio-release.storage.googleapis.com/charts
❯ helm template --version 1.16.0 istio/gateway --set 'securityContext.test=123' | yq -y '.spec.template.spec.securityContext | select(. != null)'
test: 123

❯ cat kustomization.yaml
helmCharts:
    - name: gateway
      version: 1.16.0
      repo: https://istio-release.storage.googleapis.com/charts
      valuesInline:
          securityContext:
              test: 123
❯ kustomize build --enable-helm . | yq -y '.spec.template.spec.securityContext | select(. != null)'
sysctls:
  - name: net.ipv4.ip_unprivileged_port_start
    value: '0'

This is the Helm template that produces this result: https://github.com/istio/istio/blob/8f2e2dc5d57f6f1f7a453e03ec96ca72b2205783/manifests/charts/gateway/templates/deployment.yaml#L35
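
For context, the linked deployment.yaml renders the pod securityContext roughly like this (paraphrased from the chart; exact whitespace and comments may differ):

      securityContext:
{{- if .Values.securityContext }}
        {{- toYaml .Values.securityContext | nindent 8 }}
{{- else }}
        # Safe since 1.22: https://github.com/kubernetes/kubernetes/pull/103326
        sysctls:
        - name: net.ipv4.ip_unprivileged_port_start
          value: "0"
{{- end }}

So the sysctls block in the Kustomize output is the chart's else-branch default, which suggests the securityContext set via valuesInline never reaches the chart at all.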

What did you expect to happen?

Running Helm or Kustomize with Helm enabled should give the same results on the same values set.

How can we reproduce it (as minimally and precisely as possible)?

# kustomization.yaml
helmCharts:
    - name: gateway
      version: 1.16.0
      repo: https://istio-release.storage.googleapis.com/charts
      valuesInline:
          securityContext:
              test: 123
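
Then build with Helm support enabled and compare against a direct Helm render (the same commands as above):

kustomize build --enable-helm . | yq -y '.spec.template.spec.securityContext | select(. != null)'
helm template --version 1.16.0 istio/gateway --set 'securityContext.test=123' | yq -y '.spec.template.spec.securityContext | select(. != null)'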

Expected output

---
# Source: gateway/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name
  namespace: default
  labels:
    helm.sh/chart: gateway-1.16.0
    app: release-name
    istio: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: release-name
---
# Source: gateway/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: release-name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
---
# Source: gateway/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: release-name
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: release-name
subjects:
- kind: ServiceAccount
  name: release-name
---
# Source: gateway/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name
  namespace: default
  labels:
    helm.sh/chart: gateway-1.16.0
    app: release-name
    istio: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: release-name
  annotations:
    {}
spec:
  type: LoadBalancer
  ports:
    - name: status-port
      port: 15021
      protocol: TCP
      targetPort: 15021
    - name: http2
      port: 80
      protocol: TCP
      targetPort: 80
    - name: https
      port: 443
      protocol: TCP
      targetPort: 443
  selector:
    app: release-name
    istio: release-name
---
# Source: gateway/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name
  namespace: default
  labels:
    helm.sh/chart: gateway-1.16.0
    app: release-name
    istio: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: release-name
  annotations:
    {}
spec:
  selector:
    matchLabels:
      app: release-name
      istio: release-name
  template:
    metadata:
      annotations:
        inject.istio.io/templates: gateway
        prometheus.io/path: /stats/prometheus
        prometheus.io/port: "15020"
        prometheus.io/scrape: "true"
        sidecar.istio.io/inject: "true"
      labels:
        sidecar.istio.io/inject: "true"
        app: release-name
        istio: release-name
    spec:
      serviceAccountName: release-name
      securityContext:
        test: 123
      containers:
        - name: istio-proxy
          # "auto" will be populated at runtime by the mutating webhook. See https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/#customizing-injection
          image: auto
          securityContext:
            # Safe since 1.22: https://github.com/kubernetes/kubernetes/pull/103326
            capabilities:
              drop:
              - ALL
            allowPrivilegeEscalation: false
            privileged: false
            readOnlyRootFilesystem: true
            runAsUser: 1337
            runAsGroup: 1337
            runAsNonRoot: true
          env:
          ports:
          - containerPort: 15090
            protocol: TCP
            name: http-envoy-prom
          resources:
            limits:
              cpu: 2000m
              memory: 1024Mi
            requests:
              cpu: 100m
              memory: 128Mi
---
# Source: gateway/templates/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: release-name
  namespace: default
  labels:
    helm.sh/chart: gateway-1.16.0
    app: release-name
    istio: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: release-name
  annotations:
    {}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: release-name
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          averageUtilization: 80
          type: Utilization

Actual output

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: release-name
    app.kubernetes.io/version: 1.16.0
    helm.sh/chart: gateway-1.16.0
    istio: release-name
  name: release-name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: release-name
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: release-name
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: release-name
subjects:
- kind: ServiceAccount
  name: release-name
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: release-name
    app.kubernetes.io/version: 1.16.0
    helm.sh/chart: gateway-1.16.0
    istio: release-name
  name: release-name
  namespace: default
spec:
  ports:
  - name: status-port
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: release-name
    istio: release-name
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: release-name
    app.kubernetes.io/version: 1.16.0
    helm.sh/chart: gateway-1.16.0
    istio: release-name
  name: release-name
  namespace: default
spec:
  selector:
    matchLabels:
      app: release-name
      istio: release-name
  template:
    metadata:
      annotations:
        inject.istio.io/templates: gateway
        prometheus.io/path: /stats/prometheus
        prometheus.io/port: "15020"
        prometheus.io/scrape: "true"
        sidecar.istio.io/inject: "true"
      labels:
        app: release-name
        istio: release-name
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - env: null
        image: auto
        name: istio-proxy
        ports:
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        resources:
          limits:
            cpu: 2000m
            memory: 1024Mi
          requests:
            cpu: 100m
            memory: 128Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsGroup: 1337
          runAsNonRoot: true
          runAsUser: 1337
      securityContext:
        sysctls:
        - name: net.ipv4.ip_unprivileged_port_start
          value: "0"
      serviceAccountName: release-name
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  labels:
    app: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: release-name
    app.kubernetes.io/version: 1.16.0
    helm.sh/chart: gateway-1.16.0
    istio: release-name
  name: release-name
  namespace: default
spec:
  maxReplicas: 5
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 80
        type: Utilization
    type: Resource
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: release-name

Kustomize version

{Version:4.5.7 GitCommit:$Format:%H$ BuildDate:2022-08-12T18:18:43Z GoOs:linux GoArch:amd64}

Operating system

Linux

@shaardie shaardie added the kind/bug Categorizes issue or PR as related to a bug. label Dec 3, 2022
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Dec 3, 2022
KnVerey (Contributor) commented Dec 7, 2022

I am able to reproduce this. The problem seems to be related to how the custom values file gets merged with the valuesInline: if you provide a valuesFile, even a completely empty one, the output is suddenly correct.
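
Based on that observation, a workaround sketch until this is fixed (the values file name here is arbitrary, and the file can be completely empty, e.g. created with `touch empty-values.yaml`):

# kustomization.yaml
helmCharts:
    - name: gateway
      version: 1.16.0
      repo: https://istio-release.storage.googleapis.com/charts
      valuesFile: empty-values.yaml  # empty file; its mere presence changes the merge path
      valuesInline:
          securityContext:
              test: 123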

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Dec 7, 2022