
[INFRA-2743] add traefik as a secondary ingress controller #867

Merged (4 commits) on Feb 26, 2021

Conversation

@jetersen (Contributor) commented Feb 19, 2021

This adds Traefik. Currently there is no way to scrape data with Prometheus.

I tried to map everything that the nginx controllers had.

For the default backend, there is the option to add another container and an error frontend; see the comments here: traefik/traefik#4218

Perhaps this should be sub-charted to allow adding custom middlewares and IngressRoutes at the global level, or better yet, a separate chart could be created for those global things.
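As a possible follow-up for the Prometheus gap mentioned above, here is a minimal sketch of enabling Traefik's built-in Prometheus metrics through the chart values; this excerpt is illustrative only and not part of this PR, assuming the upstream Traefik Helm chart's additionalArguments value and Traefik 2.x static-configuration flags:

additionalArguments:
  # expose Traefik's internal metrics in Prometheus format
  - "--metrics.prometheus=true"
  # dedicated entry point so the scrape target is separate from user traffic
  - "--entryPoints.metrics.address=:9100/tcp"
  - "--metrics.prometheus.entryPoint=metrics"

With something like this in place, a Prometheus scrape job (or a Datadog prometheus check, as used elsewhere in this cluster) could target the metrics entry point.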

@jetersen requested a review from a team on February 19, 2021 05:34
@halkeye (Member) commented Feb 19, 2021

So I have the same question as the last time an attempt was made. Why switch away from nginx, which is working, to something that might work? Is there a problem that the nginx ingress isn't solving that Traefik is? Is there a discussion somewhere?

Comment on lines 18 to 25
minVersion: VersionTLS12
cipherSuites:
- "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
- "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
- "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
- "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
- "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305"
- "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"

@jetersen (Contributor Author) commented Feb 19, 2021

@halkeye based on @olblak's comment in https://issues.jenkins.io/browse/INFRA-2743

There are some benefits to Traefik, such as being able to serve gRPC and web traffic on the same HTTPS port. This is currently not supported in nginx.

Another benefit would be the ability to route TCP and UDP over the same load balancer without workarounds, once kubernetes/kubernetes#94028 lands.
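For context on that enhancement, it allows a single LoadBalancer Service to carry both protocols. A rough sketch of what that could look like once supported (names and ports are made up for illustration; the MixedProtocolLBService feature gate would need to be enabled on the cluster):

apiVersion: v1
kind: Service
metadata:
  # hypothetical Service, not part of this PR
  name: traefik-mixed-example
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: traefik
  ports:
    # HTTPS over TCP
    - name: websecure
      port: 443
      protocol: TCP
    # e.g. WireGuard VPN traffic over UDP behind the same load balancer IP
    - name: wireguard
      port: 51820
      protocol: UDP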

@olblak could hopefully attest to some of the reasons; I suspect IngressRoute is one of them.

Nothing is preventing us from keeping nginx; we could choose to only enable the IngressRoute for Traefik, if that is the feature @olblak is after :)

Another thing that is superior in Traefik is the ability to attach arbitrary middlewares to both the Kubernetes Ingress and the IngressRoute (see the middleware sketch after the Argo CD example below).

For my prod cluster, another reason is the addition of https://github.com/traefik/mesh-helm-chart

Another plan I have for my prod cluster is to set up a VPN using WireGuard, with the HTTP frontend and the UDP port served through the same Traefik IngressRoute on the same hostname 👏

Here is a working example with an IngressRoute for Argo CD, using cert-manager and external-dns, where the Traefik hostname is set via external-dns:

traefik:
  service:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: private-traefik.company.io
---
kind: Service
apiVersion: v1
metadata:
  name: argocd-external-name
  annotations:
    external-dns.alpha.kubernetes.io/hostname: argocd.company.io
spec:
  type: ExternalName
  externalName: private-traefik.company.io
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: argocd.company.io-tls
spec:
  dnsNames:
  - argocd.company.io
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: letsencrypt-prod
  secretName: argocd.company.io-tls
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`argocd.company.io`)
      priority: 10
      services:
        - name: argocd-server
          port: 80
    - kind: Rule
      match: Host(`argocd.company.io`) && Headers(`Content-Type`, `application/grpc`)
      priority: 11
      services:
        - name: argocd-server
          port: 80
          scheme: h2c
  tls:
    secretName: argocd.company.io-tls
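To illustrate the middleware point above, here is a rough sketch (resource names, host, and backend service are made up for illustration) of a Traefik Middleware attached to a plain Kubernetes Ingress through an annotation; the same Middleware could equally be referenced from an IngressRoute route:

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: example-strip-api
  namespace: default
spec:
  stripPrefix:
    prefixes:
      - /api
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
    # Traefik 2.x references middlewares as <namespace>-<name>@kubernetescrd
    traefik.ingress.kubernetes.io/router.middlewares: default-example-strip-api@kubernetescrd
spec:
  rules:
    - host: example.company.io
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80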

@jetersen (Contributor Author)
If anything, the current use of stable/nginx-ingress should be switched to the new chart: https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx

@halkeye changed the title from "add traefik" to "[INFRA-2743] add traefik as a secondary ingress controller" on Feb 19, 2021
@halkeye (Member) commented Feb 19, 2021

  1. The build is failing at the yamllint step:

config/default/traefik.yaml
20:5 error wrong indentation: expected 6 but found 4 (indentation)

  2. "Then ensure that specific nginx annotation have an equivalent on Traefik"

Is that done? Do any existing annotations need to be updated? Do you have any checklists or anything?

@jetersen (Contributor Author)
Better Traefik support in oauth2-proxy was recently added: oauth2-proxy/oauth2-proxy#957

@olblak (Member) commented Feb 19, 2021

While I agree that nginx does the job, Traefik offers more advanced configuration and more analytics, so I am definitely interested in testing it with a production workload.

@timja (Member) commented Feb 19, 2021

better traefik support in oauth2-proxy was recently added: oauth2-proxy/oauth2-proxy#957

oh wow, this was a pain for us at work, great to know it's solved

@dduportal (Contributor) left a comment

Thanks for your contribution 👍

Could you check the feedback from this first pass of review?

@dduportal (Contributor)
"Then ensure that specific nginx annotation have an equivalent on Traefik"

Is that done? Do any existing annotations need to be updated? Do you have any checklists or anything?

If we have specific nginx annotations, we have to be careful to understand what they are doing: even without switching to Traefik, it would be a valuable self-documentation task.

Also, note that we should ensure that Traefik's CRD provider is enabled, so that the advanced/native features can be used (the default Ingress controller provider in Traefik is very limited in terms of features). It should be the case by default.
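For reference, a minimal sketch of keeping the CRD provider enabled alongside the standard Ingress provider in the chart values (illustrative excerpt, assuming the upstream Traefik Helm chart's additionalArguments value; the same flags appear in the generated Deployment args in the CI diff below):

additionalArguments:
  # enable the CRD provider (IngressRoute, Middleware, TLSOption, ...)
  - "--providers.kubernetescrd"
  - "--providers.kubernetescrd.ingressclass=traefik"
  # keep the plain Kubernetes Ingress provider as well
  - "--providers.kubernetesingress"
  - "--providers.kubernetesingress.ingressclass=traefik"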

@dduportal (Contributor)
=> Please note that the helm diff is failing in the CI because the Traefik Helm chart has never been run in our cluster, and it complains about not knowing the Traefik CRDs: https://doc.traefik.io/traefik/providers/kubernetes-crd/#resource-configuration.

We might need to do a separate PR, to be applied first, containing only the CRDs, in order to bootstrap the cluster initially (or apply them manually once).

Co-authored-by: Damien Duportal <damien.duportal@gmail.com>
@jetersen (Contributor Author) commented Feb 19, 2021

We could use the presync event to load the CRDs; see https://github.com/zakkg3/cert-manager-installer for a helmfile example of how to load them.

These folks used Python to load the versioned CRDs: https://github.com/cloudposse/helmfiles/tree/master/releases/cert-manager
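For illustration, a presync hook in a helmfile release could look roughly like this (the release name, chart reference, and CRD manifest path are placeholders, not part of this PR):

releases:
  - name: traefik
    namespace: kube-system
    chart: traefik/traefik
    hooks:
      # runs before sync/apply, but not during a read-only diff
      - events: ["presync"]
        showlogs: true
        command: "kubectl"
        args: ["apply", "-f", "./crds/"]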

@dduportal (Contributor)
Thanks for the changes @jetersen!

We could use the presync event to load the crds

The Traefik Helm chart takes care of installing the CRDs for us (unless you are using Helm v2: https://github.com/traefik/traefik-helm-chart#warning).

So the problem is not about installing the CRDs, but about bootstrapping the initial environment, as helmfile hooks are not applied when the diff command is executed (because diff is a read-only operation). Example: https://github.com/cloudposse/helmfiles/blob/46e3c61b1f2ed2910cb258b89e040b35a3d2863f/deprecated/prometheus-operator.yaml#L50 .

Thanks for the idea though, it could have been a great fit!

If it is OK for you, I'll check with @olblak and the rest of the team about preinstalling the CRDs, and we'll ping you here (+ relaunch the CI job, which should then pass the diff step).

Signed-off-by: Damien Duportal <damien.duportal@gmail.com>
@dduportal (Contributor)
Worklog:

@infra-ci-jenkins-io

datadog, datadog-cluster-agent, Deployment (apps) has changed:
  # Source: datadog/templates/cluster-agent-deployment.yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: datadog-cluster-agent
    labels:
      helm.sh/chart: "datadog-2.8.4"
      app.kubernetes.io/name: "datadog"
      app.kubernetes.io/instance: "datadog"
      app.kubernetes.io/managed-by: "Helm"
      app.kubernetes.io/version: "7"
  spec:
    replicas: 1
    strategy:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
      type: RollingUpdate
    selector:
      matchLabels:
        app: datadog-cluster-agent
    template:
      metadata:
        labels:
          app: datadog-cluster-agent
        name: datadog-cluster-agent
        annotations:
-         checksum/clusteragent_token: f1306dea70c4c63ee951ad4d62011c2783b67d9c0f8959e42f26ca4268e18f29
+         checksum/clusteragent_token: 3f55fb9e0410d45fe7f676b4195e136206fbca5075937301905e6ae4f882d1b7
          checksum/api_key: 349262a0e1e2f6c6a60697a8c0de6a60f6af17a0e7e5a7ddb9e9b2f89a834279
          checksum/application_key: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
          checksum/install_info: b55738876453a263565890e43279c11a65d51c78063c51ebdbb09fef67dd6802
          ad.datadoghq.com/cluster-agent.check_names: '["prometheus"]'
          ad.datadoghq.com/cluster-agent.init_configs: '[{}]'
          ad.datadoghq.com/cluster-agent.instances: |
            [{
              "prometheus_url": "http://%%host%%:5000/metrics",
              "namespace": "datadog.cluster_agent",
              "metrics": [
                "go_goroutines", "go_memstats_*", "process_*",
                "api_requests",
                "datadog_requests", "external_metrics", "rate_limit_queries_*",
                "cluster_checks_*"
              ]
            }]
  
      spec:
        serviceAccountName: datadog-cluster-agent
        containers:
        - name: cluster-agent
          image: "gcr.io/datadoghq/cluster-agent:1.10.0"
          imagePullPolicy: IfNotPresent
          resources:
            {}
          ports:
          - containerPort: 5005
            name: agentport
            protocol: TCP
          env:
            - name: DD_HEALTH_PORT
              value: "5555"
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
                  optional: true
            - name: DD_CLUSTER_CHECKS_ENABLED
              value: "true"
            - name: DD_EXTRA_CONFIG_PROVIDERS
              value: "kube_endpoints kube_services"
            - name: DD_EXTRA_LISTENERS
              value: "kube_endpoints kube_services"
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_LEADER_ELECTION
              value: "true"
            - name: DD_LEADER_LEASE_DURATION
              value: "60"
            - name: DD_COLLECT_KUBERNETES_EVENTS
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                  name: datadog-cluster-agent
                  key: token
            - name: DD_KUBE_RESOURCES_NAMESPACE
              value: datadog
            - name: DD_ORCHESTRATOR_EXPLORER_ENABLED
              value: "true"
            - name: DD_ORCHESTRATOR_EXPLORER_CONTAINER_SCRUBBING_ENABLED
              value: "true"
            - name: DD_COMPLIANCE_CONFIG_ENABLED
              value:  "false"
          livenessProbe:
            failureThreshold: 6
            httpGet:
              path: /live
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            failureThreshold: 6
            httpGet:
              path: /ready
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
          volumeMounts:
            - name: installinfo
              subPath: install_info
              mountPath: /etc/datadog-agent/install_info
              readOnly: true
        volumes:
          - name: installinfo
            configMap:
              name: datadog-installinfo
        nodeSelector:
          kubernetes.io/os: linux
datadog, datadog, DaemonSet (apps) has changed:
  # Source: datadog/templates/daemonset.yaml
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: datadog
    labels:
      helm.sh/chart: "datadog-2.8.4"
      app.kubernetes.io/name: "datadog"
      app.kubernetes.io/instance: "datadog"
      app.kubernetes.io/managed-by: "Helm"
      app.kubernetes.io/version: "7"
  spec:
    selector:
      matchLabels:
        app: datadog
    template:
      metadata:
        labels:
          app: datadog
        name: datadog
        annotations:
-         checksum/clusteragent_token: 27f25d012a19fea207ff0355f2c108b97312be663ad93325565ec16748a95e60
+         checksum/clusteragent_token: d7409721d41815c64bc0d60002c0da1d88e6846853fa63afb203817d94429f70
          checksum/api_key: 349262a0e1e2f6c6a60697a8c0de6a60f6af17a0e7e5a7ddb9e9b2f89a834279
          checksum/install_info: b55738876453a263565890e43279c11a65d51c78063c51ebdbb09fef67dd6802
          checksum/autoconf-config: 74234e98afe7498fb5daf1f36ac2d78acc339464f950703b8c019892f982b90b
          checksum/confd-config: 44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a
          checksum/checksd-config: 44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a
      spec:
        containers:
        - name: agent
          image: "jenkinsciinfra/datadog@sha256:a7cd977cb74f7349d3459f578d81195b181ae05721faa74782c648a20bb2acc9"
          imagePullPolicy: IfNotPresent
          command: ["agent", "run"]
          resources:
            {}
          ports:
          - containerPort: 8125
            name: dogstatsdport
            protocol: UDP
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_DOGSTATSD_PORT
              value: "8125"
            - name: DD_DOGSTATSD_NON_LOCAL_TRAFFIC
              value: "true"
            - name: DD_CLUSTER_AGENT_ENABLED
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                    name: datadog-cluster-agent
                    key: token
            - name: DD_APM_ENABLED
              value: "false"
            - name: DD_LOGS_ENABLED
              value: "true"
            - name: DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL
              value: "false"
            - name: DD_LOGS_CONFIG_K8S_CONTAINER_USE_FILE
              value: "true"
            - name: DD_HEALTH_PORT
              value: "5555"
            - name: DD_EXTRA_CONFIG_PROVIDERS
              value: "clusterchecks endpointschecks"
          volumeMounts:
            - name: installinfo
              subPath: install_info
              mountPath: /etc/datadog-agent/install_info
              readOnly: true
            - name: tmpdir
              mountPath: /tmp
              readOnly: false
            - name: config
              mountPath: /etc/datadog-agent
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
            - name: procdir
              mountPath: /host/proc
              mountPropagation: None
              readOnly: true
            - name: cgroups
              mountPath: /host/sys/fs/cgroup
              mountPropagation: None
              readOnly: true
            - name: pointerdir
              mountPath: /opt/datadog-agent/run
              mountPropagation: None
            - name: logpodpath
              mountPath: /var/log/pods
              mountPropagation: None
              readOnly: true
            - name: logdockercontainerpath
              mountPath: /var/lib/docker/containers
              mountPropagation: None
              readOnly: true
          livenessProbe:
            failureThreshold: 6
            httpGet:
              path: /live
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            failureThreshold: 6
            httpGet:
              path: /ready
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
        - name: trace-agent
          image: "jenkinsciinfra/datadog@sha256:a7cd977cb74f7349d3459f578d81195b181ae05721faa74782c648a20bb2acc9"
          imagePullPolicy: IfNotPresent
          command: ["trace-agent", "-config=/etc/datadog-agent/datadog.yaml"]
          resources:
            {}
          ports:
          - containerPort: 8126
            hostPort: 8126
            name: traceport
            protocol: TCP
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
            - name: DD_CLUSTER_AGENT_ENABLED
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                    name: datadog-cluster-agent
                    key: token
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_APM_ENABLED
              value: "true"
            - name: DD_APM_NON_LOCAL_TRAFFIC
              value: "true"
            - name: DD_APM_RECEIVER_PORT
              value: "8126"
          volumeMounts:
            - name: config
              mountPath: /etc/datadog-agent
            - name: tmpdir
              mountPath: /tmp
              readOnly: false
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
          livenessProbe:
            initialDelaySeconds: 15
            periodSeconds: 15
            tcpSocket:
              port: 8126
            timeoutSeconds: 5
        - name: process-agent
          image: "jenkinsciinfra/datadog@sha256:a7cd977cb74f7349d3459f578d81195b181ae05721faa74782c648a20bb2acc9"
          imagePullPolicy: IfNotPresent
          command: ["process-agent", "-config=/etc/datadog-agent/datadog.yaml"]
          resources:
            {}
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
            - name: DD_CLUSTER_AGENT_ENABLED
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                    name: datadog-cluster-agent
                    key: token
            - name: DD_PROCESS_AGENT_ENABLED
              value: "true"
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_SYSTEM_PROBE_ENABLED
              value: "false"
            - name: DD_ORCHESTRATOR_EXPLORER_ENABLED
              value: "true"
            - name: DD_ORCHESTRATOR_CLUSTER_ID
              valueFrom:
                configMapKeyRef:
                  name: datadog-cluster-id
                  key: id
          volumeMounts:
            - name: config
              mountPath: /etc/datadog-agent
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
            - name: tmpdir
              mountPath: /tmp
              readOnly: false
            - name: cgroups
              mountPath: /host/sys/fs/cgroup
              mountPropagation: None
              readOnly: true
            - name: passwd
              mountPath: /etc/passwd
              readOnly: true
            - name: procdir
              mountPath: /host/proc
              mountPropagation: None
              readOnly: true
        initContainers:
            
        - name: init-volume
          image: "jenkinsciinfra/datadog@sha256:a7cd977cb74f7349d3459f578d81195b181ae05721faa74782c648a20bb2acc9"
          imagePullPolicy: IfNotPresent
          command: ["bash", "-c"]
          args:
            - cp -r /etc/datadog-agent /opt
          volumeMounts:
            - name: config
              mountPath: /opt/datadog-agent
          resources:
            {}
        - name: init-config
          image: "jenkinsciinfra/datadog@sha256:a7cd977cb74f7349d3459f578d81195b181ae05721faa74782c648a20bb2acc9"
          imagePullPolicy: IfNotPresent
          command: ["bash", "-c"]
          args:
            - for script in $(find /etc/cont-init.d/ -type f -name '*.sh' | sort) ; do bash $script ; done
          volumeMounts:
            - name: config
              mountPath: /etc/datadog-agent
            - name: procdir
              mountPath: /host/proc
              mountPropagation: None
              readOnly: true
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
          resources:
            {}
        volumes:
        - name: installinfo
          configMap:
            name: datadog-installinfo
        - name: config
          emptyDir: {}
        - hostPath:
            path: /var/run
          name: runtimesocketdir
          
        - name: tmpdir
          emptyDir: {}
        - hostPath:
            path: /proc
          name: procdir
        - hostPath:
            path: /sys/fs/cgroup
          name: cgroups
        - name: s6-run
          emptyDir: {}
        - hostPath:
            path: /etc/passwd
          name: passwd
        - hostPath:
            path: "/var/lib/datadog-agent/logs"
          name: pointerdir
        - hostPath:
            path: /var/log/pods
          name: logpodpath
        - hostPath:
            path: /var/lib/docker/containers
          name: logdockercontainerpath
        tolerations:
        affinity:
          {}
        serviceAccountName: datadog
        nodeSelector:
          kubernetes.io/os: linux
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 10%
      type: RollingUpdate
datadog, datadog-cluster-agent, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret

********************

	Release was not present in Helm.  Diff will show entire contents as new.

********************
kube-system, private-traefik, ServiceAccount (v1) has been added:
- 
+ # Source: traefik/templates/rbac/serviceaccount.yaml
+ kind: ServiceAccount
+ apiVersion: v1
+ metadata:
+   name: private-traefik
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: private-traefik
+   annotations:
kube-system, private-traefik, ClusterRole (rbac.authorization.k8s.io) has been added:
- 
+ # Source: traefik/templates/rbac/clusterrole.yaml
+ kind: ClusterRole
+ apiVersion: rbac.authorization.k8s.io/v1
+ metadata:
+   name: private-traefik
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: private-traefik
+ rules:
+   - apiGroups:
+       - ""
+     resources:
+       - services
+       - endpoints
+       - secrets
+     verbs:
+       - get
+       - list
+       - watch
+   - apiGroups:
+       - extensions
+       - networking.k8s.io
+     resources:
+       - ingresses
+       - ingressclasses
+     verbs:
+       - get
+       - list
+       - watch
+   - apiGroups:
+       - extensions
+       - networking.k8s.io
+     resources:
+       - ingresses/status
+     verbs:
+       - update
+   - apiGroups:
+       - traefik.containo.us
+     resources:
+       - ingressroutes
+       - ingressroutetcps
+       - ingressrouteudps
+       - middlewares
+       - tlsoptions
+       - tlsstores
+       - traefikservices
+       - serverstransports
+     verbs:
+       - get
+       - list
+       - watch
kube-system, private-traefik, ClusterRoleBinding (rbac.authorization.k8s.io) has been added:
- 
+ # Source: traefik/templates/rbac/clusterrolebinding.yaml
+ kind: ClusterRoleBinding
+ apiVersion: rbac.authorization.k8s.io/v1
+ metadata:
+   name: private-traefik
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: private-traefik
+ roleRef:
+   apiGroup: rbac.authorization.k8s.io
+   kind: ClusterRole
+   name: private-traefik
+ subjects:
+   - kind: ServiceAccount
+     name: private-traefik
+     namespace: kube-system
kube-system, private-traefik, Deployment (apps) has been added:
- 
+ # Source: traefik/templates/deployment.yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: private-traefik
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: private-traefik
+   annotations:
+ spec:
+   replicas: 1
+   selector:
+     matchLabels:
+       app.kubernetes.io/name: traefik
+       app.kubernetes.io/instance: private-traefik
+   strategy:
+     type: RollingUpdate
+     rollingUpdate:
+       maxSurge: 1
+       maxUnavailable: 1
+   template: 
+     metadata:
+       annotations:
+       labels:
+         app.kubernetes.io/name: traefik
+         helm.sh/chart: traefik-9.14.2
+         app.kubernetes.io/managed-by: Helm
+         app.kubernetes.io/instance: private-traefik
+     spec:
+       serviceAccountName: private-traefik
+       terminationGracePeriodSeconds: 60
+       hostNetwork: false
+       containers:
+       - image: "traefik:2.4.2"
+         imagePullPolicy: IfNotPresent
+         name: private-traefik
+         resources:
+         readinessProbe:
+           httpGet:
+             path: /ping
+             port: 9000
+           failureThreshold: 1
+           initialDelaySeconds: 10
+           periodSeconds: 10
+           successThreshold: 1
+           timeoutSeconds: 2
+         livenessProbe:
+           httpGet:
+             path: /ping
+             port: 9000
+           failureThreshold: 3
+           initialDelaySeconds: 10
+           periodSeconds: 10
+           successThreshold: 1
+           timeoutSeconds: 2
+         ports:
+         - name: "traefik"
+           containerPort: 9000
+           protocol: "TCP"
+         - name: "web"
+           containerPort: 8000
+           protocol: "TCP"
+         - name: "websecure"
+           containerPort: 8443
+           protocol: "TCP"
+         securityContext:
+           capabilities:
+             drop:
+             - ALL
+           readOnlyRootFilesystem: true
+           runAsGroup: 65532
+           runAsNonRoot: true
+           runAsUser: 65532
+         volumeMounts:
+           - name: data
+             mountPath: /data
+           - name: tmp
+             mountPath: /tmp
+         args:
+           - "--entryPoints.traefik.address=:9000/tcp"
+           - "--entryPoints.web.address=:8000/tcp"
+           - "--entryPoints.websecure.address=:8443/tcp"
+           - "--api.dashboard=true"
+           - "--ping=true"
+           - "--providers.kubernetescrd"
+           - "--providers.kubernetesingress"
+           - "--providers.kubernetesingress.ingressendpoint.publishedservice=kube-system/private-traefik"
+           - "--log.format=json"
+           - "--accesslog=true"
+           - "--accesslog.format=json"
+           - "--accesslog.fields.defaultmode=keep"
+           - "--accesslog.fields.headers.defaultmode=drop"
+           - "--providers.kubernetescrd.ingressclass=traefik"
+           - "--providers.kubernetesingress.ingressclass=traefik"
+       volumes:
+         - name: data
+           emptyDir: {}
+         - name: tmp
+           emptyDir: {}
+       securityContext:
+         fsGroup: 65532
kube-system, , List (v1) has been added:
- 
+ # Source: traefik/templates/service.yaml
+ apiVersion: v1
+ kind: List
+ items:
+   - apiVersion: v1
+     kind: Service
+     metadata:
+       name: private-traefik
+       labels:
+         app.kubernetes.io/name: traefik
+         helm.sh/chart: traefik-9.14.2
+         app.kubernetes.io/managed-by: Helm
+         app.kubernetes.io/instance: private-traefik
+       annotations:
+         service.beta.kubernetes.io/azure-load-balancer-internal: true
+         service.beta.kubernetes.io/azure-load-balancer-internal-subnet: data-tier
+     spec:
+       type: LoadBalancer
+       externalTrafficPolicy: Local
+       selector:
+         app.kubernetes.io/name: traefik
+         app.kubernetes.io/instance: private-traefik
+       ports:
+       - port: 80
+         name: web
+         targetPort: "web"
+         protocol: "TCP"
+       - port: 443
+         name: websecure
+         targetPort: "websecure"
+         protocol: "TCP"
kube-system, default, TLSOption (traefik.containo.us) has been added:
- 
+ # Source: traefik/templates/tlsoption.yaml
+ apiVersion: traefik.containo.us/v1alpha1
+ kind: TLSOption
+ metadata:
+   name: default
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: private-traefik
+ spec:
+   cipherSuites:
+   - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+   - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+   - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+   - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+   - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
+   - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
+   minVersion: VersionTLS12

********************

	Release was not present in Helm.  Diff will show entire contents as new.

********************
kube-system, public-traefik, ClusterRole (rbac.authorization.k8s.io) has been added:
- 
+ # Source: traefik/templates/rbac/clusterrole.yaml
+ kind: ClusterRole
+ apiVersion: rbac.authorization.k8s.io/v1
+ metadata:
+   name: public-traefik
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: public-traefik
+ rules:
+   - apiGroups:
+       - ""
+     resources:
+       - services
+       - endpoints
+       - secrets
+     verbs:
+       - get
+       - list
+       - watch
+   - apiGroups:
+       - extensions
+       - networking.k8s.io
+     resources:
+       - ingresses
+       - ingressclasses
+     verbs:
+       - get
+       - list
+       - watch
+   - apiGroups:
+       - extensions
+       - networking.k8s.io
+     resources:
+       - ingresses/status
+     verbs:
+       - update
+   - apiGroups:
+       - traefik.containo.us
+     resources:
+       - ingressroutes
+       - ingressroutetcps
+       - ingressrouteudps
+       - middlewares
+       - tlsoptions
+       - tlsstores
+       - traefikservices
+       - serverstransports
+     verbs:
+       - get
+       - list
+       - watch
kube-system, public-traefik, ClusterRoleBinding (rbac.authorization.k8s.io) has been added:
- 
+ # Source: traefik/templates/rbac/clusterrolebinding.yaml
+ kind: ClusterRoleBinding
+ apiVersion: rbac.authorization.k8s.io/v1
+ metadata:
+   name: public-traefik
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: public-traefik
+ roleRef:
+   apiGroup: rbac.authorization.k8s.io
+   kind: ClusterRole
+   name: public-traefik
+ subjects:
+   - kind: ServiceAccount
+     name: public-traefik
+     namespace: kube-system
kube-system, public-traefik, Deployment (apps) has been added:
- 
+ # Source: traefik/templates/deployment.yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: public-traefik
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: public-traefik
+   annotations:
+ spec:
+   replicas: 1
+   selector:
+     matchLabels:
+       app.kubernetes.io/name: traefik
+       app.kubernetes.io/instance: public-traefik
+   strategy:
+     type: RollingUpdate
+     rollingUpdate:
+       maxSurge: 1
+       maxUnavailable: 1
+   template: 
+     metadata:
+       annotations:
+       labels:
+         app.kubernetes.io/name: traefik
+         helm.sh/chart: traefik-9.14.2
+         app.kubernetes.io/managed-by: Helm
+         app.kubernetes.io/instance: public-traefik
+     spec:
+       serviceAccountName: public-traefik
+       terminationGracePeriodSeconds: 60
+       hostNetwork: false
+       containers:
+       - image: "traefik:2.4.2"
+         imagePullPolicy: IfNotPresent
+         name: public-traefik
+         resources:
+         readinessProbe:
+           httpGet:
+             path: /ping
+             port: 9000
+           failureThreshold: 1
+           initialDelaySeconds: 10
+           periodSeconds: 10
+           successThreshold: 1
+           timeoutSeconds: 2
+         livenessProbe:
+           httpGet:
+             path: /ping
+             port: 9000
+           failureThreshold: 3
+           initialDelaySeconds: 10
+           periodSeconds: 10
+           successThreshold: 1
+           timeoutSeconds: 2
+         ports:
+         - name: "traefik"
+           containerPort: 9000
+           protocol: "TCP"
+         - name: "web"
+           containerPort: 8000
+           protocol: "TCP"
+         - name: "websecure"
+           containerPort: 8443
+           protocol: "TCP"
+         securityContext:
+           capabilities:
+             drop:
+             - ALL
+           readOnlyRootFilesystem: true
+           runAsGroup: 65532
+           runAsNonRoot: true
+           runAsUser: 65532
+         volumeMounts:
+           - name: data
+             mountPath: /data
+           - name: tmp
+             mountPath: /tmp
+         args:
+           - "--entryPoints.traefik.address=:9000/tcp"
+           - "--entryPoints.web.address=:8000/tcp"
+           - "--entryPoints.websecure.address=:8443/tcp"
+           - "--api.dashboard=true"
+           - "--ping=true"
+           - "--providers.kubernetescrd"
+           - "--providers.kubernetesingress"
+           - "--providers.kubernetesingress.ingressendpoint.publishedservice=kube-system/public-traefik"
+           - "--log.format=json"
+           - "--accesslog=true"
+           - "--accesslog.format=json"
+           - "--accesslog.fields.defaultmode=keep"
+           - "--accesslog.fields.headers.defaultmode=drop"
+           - "--providers.kubernetescrd.ingressclass=public-traefik"
+           - "--providers.kubernetesingress.ingressclass=public-traefik"
+       volumes:
+         - name: data
+           emptyDir: {}
+         - name: tmp
+           emptyDir: {}
+       securityContext:
+         fsGroup: 65532
kube-system, , List (v1) has been added:
- 
+ # Source: traefik/templates/service.yaml
+ apiVersion: v1
+ kind: List
+ items:
+   - apiVersion: v1
+     kind: Service
+     metadata:
+       name: public-traefik
+       labels:
+         app.kubernetes.io/name: traefik
+         helm.sh/chart: traefik-9.14.2
+         app.kubernetes.io/managed-by: Helm
+         app.kubernetes.io/instance: public-traefik
+       annotations:
+         loadBalancerIP: 20.65.25.172
+         service.beta.kubernetes.io/azure-load-balancer-internal: false
+         service.beta.kubernetes.io/azure-load-balancer-internal-subnet: app-tier
+         service.beta.kubernetes.io/azure-load-balancer-resource-group: prodpublick8s
+     spec:
+       type: LoadBalancer
+       externalTrafficPolicy: Local
+       selector:
+         app.kubernetes.io/name: traefik
+         app.kubernetes.io/instance: public-traefik
+       ports:
+       - port: 80
+         name: web
+         targetPort: "web"
+         protocol: "TCP"
+       - port: 443
+         name: websecure
+         targetPort: "websecure"
+         protocol: "TCP"
kube-system, default, TLSOption (traefik.containo.us) has been added:
- 
+ # Source: traefik/templates/tlsoption.yaml
+ apiVersion: traefik.containo.us/v1alpha1
+ kind: TLSOption
+ metadata:
+   name: default
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: public-traefik
+ spec:
+   cipherSuites:
+   - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+   - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+   - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+   - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+   - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
+   - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
+   minVersion: VersionTLS12
kube-system, public-traefik, ServiceAccount (v1) has been added:
- 
+ # Source: traefik/templates/rbac/serviceaccount.yaml
+ kind: ServiceAccount
+ apiVersion: v1
+ metadata:
+   name: public-traefik
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: public-traefik
+   annotations:

grafana, grafana, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret
grafana, grafana, StatefulSet (apps) has changed:
  # Source: grafana/templates/statefulset.yaml
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: grafana
    namespace: grafana
    labels:
      helm.sh/chart: grafana-6.4.2
      app.kubernetes.io/name: grafana
      app.kubernetes.io/instance: grafana
      app.kubernetes.io/version: "7.4.1"
      app.kubernetes.io/managed-by: Helm
  spec:
    replicas: 1
    selector:
      matchLabels:
        app.kubernetes.io/name: grafana
        app.kubernetes.io/instance: grafana
    serviceName: grafana-headless
    template:
      metadata:
        labels:
          app.kubernetes.io/name: grafana
          app.kubernetes.io/instance: grafana
        annotations:
          checksum/config: 874954ca76d5c65fc3c7c6b67582857af0bf0b713b33fdb668a5c610c9e8707a
          checksum/dashboards-json-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
          checksum/sc-dashboard-provider-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
-         checksum/secret: 13fd618d2c901f899c82825e66c7d60e146636855646bf3625f6d9fb0c81cf43
+         checksum/secret: e59201f97ad54dd0f28d80d0016781130b600c197ba369acdfb0d7d29d5dfe6f
      spec:
        
        serviceAccountName: grafana
        securityContext:
          fsGroup: 472
          runAsGroup: 472
          runAsUser: 472
        initContainers:
          - name: init-chown-data
            image: "busybox:1.31.1"
            imagePullPolicy: IfNotPresent
            securityContext:
              runAsNonRoot: false
              runAsUser: 0
            command: ["chown", "-R", "472:472", "/var/lib/grafana"]
            resources:
              {}
            volumeMounts:
              - name: storage
                mountPath: "/var/lib/grafana"
        containers:
          - name: grafana
            image: "grafana/grafana:7.4.1"
            imagePullPolicy: IfNotPresent
            volumeMounts:
              - name: config
                mountPath: "/etc/grafana/grafana.ini"
                subPath: grafana.ini
              - name: ldap
                mountPath: "/etc/grafana/ldap.toml"
                subPath: ldap.toml
              - name: storage
                mountPath: "/var/lib/grafana"
              - name: config
                mountPath: "/etc/grafana/provisioning/datasources/datasources.yaml"
                subPath: datasources.yaml
            ports:
              - name: service
                containerPort: 80
                protocol: TCP
              - name: grafana
                containerPort: 3000
                protocol: TCP
            env:
              - name: GF_SECURITY_ADMIN_USER
                valueFrom:
                  secretKeyRef:
                    name: grafana
                    key: admin-user
              - name: GF_SECURITY_ADMIN_PASSWORD
                valueFrom:
                  secretKeyRef:
                    name: grafana
                    key: admin-password
              
            livenessProbe:
              failureThreshold: 10
              httpGet:
                path: /api/health
                port: 3000
              initialDelaySeconds: 60
              timeoutSeconds: 30
            readinessProbe:
              httpGet:
                path: /api/health
                port: 3000
            resources:
              limits:
                cpu: 200m
                memory: 256Mi
              requests:
                cpu: 100m
                memory: 128Mi
        volumes:
          - name: config
            configMap:
              name: grafana
          - name: ldap
            secret:
              secretName: grafana
              items:
                - key: ldap-toml
                  path: ldap.toml
        # nothing
    volumeClaimTemplates:
    - metadata:
        name: storage
      spec:
        accessModes: [ReadWriteOnce]
        storageClassName: 
        resources:
          requests:
            storage: 50

jenkins-infra, jenkins-infra, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret

release, default-release-jenkins, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret

@jetersen (Contributor Author)
@dduportal 🚀

@dduportal (Contributor) left a comment

LGTM 👍

@infra-ci-jenkins-io

datadog, datadog, DaemonSet (apps) has changed:
  # Source: datadog/templates/daemonset.yaml
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: datadog
    labels:
      helm.sh/chart: "datadog-2.9.1"
      app.kubernetes.io/name: "datadog"
      app.kubernetes.io/instance: "datadog"
      app.kubernetes.io/managed-by: "Helm"
      app.kubernetes.io/version: "7"
  spec:
    selector:
      matchLabels:
        app: datadog
    template:
      metadata:
        labels:
          app: datadog
        name: datadog
        annotations:
-         checksum/clusteragent_token: f8beab7fd8739b4416446f72006a2a3526abef40b56c45619fd95f3f98a709fd
+         checksum/clusteragent_token: 474eb5e1f99a967bf01e1f393ab05405568e570a9c5f450043142490224a9fcc
          checksum/api_key: 9a57d7f6fed50c5f314da726bb0e088b1e822224f3eaacaada8e0c8f0a99af5e
          checksum/install_info: 164febc2c991090ad639b305d9e2215150edbc9c91ea8a65e9f39d7740a32575
          checksum/autoconf-config: 74234e98afe7498fb5daf1f36ac2d78acc339464f950703b8c019892f982b90b
          checksum/confd-config: 696ae73cdb6b680c8def7d63abd88415ec0161a9c3e1939c0ef8e9fc05300289
          checksum/checksd-config: 44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a
      spec:
        containers:
        - name: agent
          image: "jenkinsciinfra/datadog@sha256:a7cd977cb74f7349d3459f578d81195b181ae05721faa74782c648a20bb2acc9"
          imagePullPolicy: IfNotPresent
          command: ["agent", "run"]
          resources:
            {}
          ports:
          - containerPort: 8125
            name: dogstatsdport
            protocol: UDP
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_DOGSTATSD_PORT
              value: "8125"
            - name: DD_DOGSTATSD_NON_LOCAL_TRAFFIC
              value: "true"
            - name: DD_CLUSTER_AGENT_ENABLED
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                    name: datadog-cluster-agent
                    key: token
            - name: DD_APM_ENABLED
              value: "false"
            - name: DD_LOGS_ENABLED
              value: "true"
            - name: DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL
              value: "false"
            - name: DD_LOGS_CONFIG_K8S_CONTAINER_USE_FILE
              value: "true"
            - name: DD_HEALTH_PORT
              value: "5555"
            - name: DD_EXTRA_CONFIG_PROVIDERS
              value: "clusterchecks endpointschecks"
          volumeMounts:
            - name: installinfo
              subPath: install_info
              mountPath: /etc/datadog-agent/install_info
              readOnly: true
            - name: tmpdir
              mountPath: /tmp
              readOnly: false
            - name: config
              mountPath: /etc/datadog-agent
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
            - name: procdir
              mountPath: /host/proc
              mountPropagation: None
              readOnly: true
            - name: cgroups
              mountPath: /host/sys/fs/cgroup
              mountPropagation: None
              readOnly: true
            - name: pointerdir
              mountPath: /opt/datadog-agent/run
              mountPropagation: None
            - name: logpodpath
              mountPath: /var/log/pods
              mountPropagation: None
              readOnly: true
            - name: logdockercontainerpath
              mountPath: /var/lib/docker/containers
              mountPropagation: None
              readOnly: true
          livenessProbe:
            failureThreshold: 6
            httpGet:
              path: /live
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            failureThreshold: 6
            httpGet:
              path: /ready
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
        - name: trace-agent
          image: "jenkinsciinfra/datadog@sha256:a7cd977cb74f7349d3459f578d81195b181ae05721faa74782c648a20bb2acc9"
          imagePullPolicy: IfNotPresent
          command: ["trace-agent", "-config=/etc/datadog-agent/datadog.yaml"]
          resources:
            {}
          ports:
          - containerPort: 8126
            hostPort: 8126
            name: traceport
            protocol: TCP
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
            - name: DD_CLUSTER_AGENT_ENABLED
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                    name: datadog-cluster-agent
                    key: token
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_APM_ENABLED
              value: "true"
            - name: DD_APM_NON_LOCAL_TRAFFIC
              value: "true"
            - name: DD_APM_RECEIVER_PORT
              value: "8126"
          volumeMounts:
            - name: config
              mountPath: /etc/datadog-agent
            - name: tmpdir
              mountPath: /tmp
              readOnly: false
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
          livenessProbe:
            initialDelaySeconds: 15
            periodSeconds: 15
            tcpSocket:
              port: 8126
            timeoutSeconds: 5
        - name: process-agent
          image: "jenkinsciinfra/datadog@sha256:a7cd977cb74f7349d3459f578d81195b181ae05721faa74782c648a20bb2acc9"
          imagePullPolicy: IfNotPresent
          command: ["process-agent", "-config=/etc/datadog-agent/datadog.yaml"]
          resources:
            {}
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
            - name: DD_CLUSTER_AGENT_ENABLED
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                    name: datadog-cluster-agent
                    key: token
            - name: DD_PROCESS_AGENT_ENABLED
              value: "true"
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_SYSTEM_PROBE_ENABLED
              value: "false"
            - name: DD_ORCHESTRATOR_EXPLORER_ENABLED
              value: "true"
            - name: DD_ORCHESTRATOR_CLUSTER_ID
              valueFrom:
                configMapKeyRef:
                  name: datadog-cluster-id
                  key: id
          volumeMounts:
            - name: config
              mountPath: /etc/datadog-agent
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
            - name: tmpdir
              mountPath: /tmp
              readOnly: false
            - name: cgroups
              mountPath: /host/sys/fs/cgroup
              mountPropagation: None
              readOnly: true
            - name: passwd
              mountPath: /etc/passwd
              readOnly: true
            - name: procdir
              mountPath: /host/proc
              mountPropagation: None
              readOnly: true
        initContainers:
            
        - name: init-volume
          image: "jenkinsciinfra/datadog@sha256:a7cd977cb74f7349d3459f578d81195b181ae05721faa74782c648a20bb2acc9"
          imagePullPolicy: IfNotPresent
          command: ["bash", "-c"]
          args:
            - cp -r /etc/datadog-agent /opt
          volumeMounts:
            - name: config
              mountPath: /opt/datadog-agent
          resources:
            {}
        - name: init-config
          image: "jenkinsciinfra/datadog@sha256:a7cd977cb74f7349d3459f578d81195b181ae05721faa74782c648a20bb2acc9"
          imagePullPolicy: IfNotPresent
          command: ["bash", "-c"]
          args:
            - for script in $(find /etc/cont-init.d/ -type f -name '*.sh' | sort) ; do bash $script ; done
          volumeMounts:
            - name: config
              mountPath: /etc/datadog-agent
            - name: confd
              mountPath: /conf.d
              readOnly: true
            - name: procdir
              mountPath: /host/proc
              mountPropagation: None
              readOnly: true
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
          resources:
            {}
        volumes:
        - name: installinfo
          configMap:
            name: datadog-installinfo
        - name: config
          emptyDir: {}
        - hostPath:
            path: /var/run
          name: runtimesocketdir
          
        - name: tmpdir
          emptyDir: {}
        - hostPath:
            path: /proc
          name: procdir
        - hostPath:
            path: /sys/fs/cgroup
          name: cgroups
        - name: s6-run
          emptyDir: {}
        - name: confd
          configMap:
            name: datadog-confd
        - hostPath:
            path: /etc/passwd
          name: passwd
        - hostPath:
            path: "/var/lib/datadog-agent/logs"
          name: pointerdir
        - hostPath:
            path: /var/log/pods
          name: logpodpath
        - hostPath:
            path: /var/lib/docker/containers
          name: logdockercontainerpath
        tolerations:
        affinity:
          {}
        serviceAccountName: datadog
        nodeSelector:
          kubernetes.io/os: linux
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 10%
      type: RollingUpdate
datadog, datadog-cluster-agent, Deployment (apps) has changed:
  # Source: datadog/templates/cluster-agent-deployment.yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: datadog-cluster-agent
    labels:
      helm.sh/chart: "datadog-2.9.1"
      app.kubernetes.io/name: "datadog"
      app.kubernetes.io/instance: "datadog"
      app.kubernetes.io/managed-by: "Helm"
      app.kubernetes.io/version: "7"
  spec:
    replicas: 1
    strategy:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
      type: RollingUpdate
    selector:
      matchLabels:
        app: datadog-cluster-agent
    template:
      metadata:
        labels:
          app: datadog-cluster-agent
        name: datadog-cluster-agent
        annotations:
-         checksum/clusteragent_token: ec6c933122ef23e9e591b97d90c5833a390abb6e5b5f055cde463f56c160929b
+         checksum/clusteragent_token: 71b7598a0176c0ca248a1c25ea1a1a0d1a54e03b73a0813e308500c9a5f52275
          checksum/api_key: 9a57d7f6fed50c5f314da726bb0e088b1e822224f3eaacaada8e0c8f0a99af5e
          checksum/application_key: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
          checksum/install_info: 164febc2c991090ad639b305d9e2215150edbc9c91ea8a65e9f39d7740a32575
          ad.datadoghq.com/cluster-agent.check_names: '["prometheus"]'
          ad.datadoghq.com/cluster-agent.init_configs: '[{}]'
          ad.datadoghq.com/cluster-agent.instances: |
            [{
              "prometheus_url": "http://%%host%%:5000/metrics",
              "namespace": "datadog.cluster_agent",
              "metrics": [
                "go_goroutines", "go_memstats_*", "process_*",
                "api_requests",
                "datadog_requests", "external_metrics", "rate_limit_queries_*",
                "cluster_checks_*"
              ]
            }]
  
      spec:
        serviceAccountName: datadog-cluster-agent
        containers:
        - name: cluster-agent
          image: "gcr.io/datadoghq/cluster-agent:1.10.0"
          imagePullPolicy: IfNotPresent
          resources:
            {}
          ports:
          - containerPort: 5005
            name: agentport
            protocol: TCP
          env:
            - name: DD_HEALTH_PORT
              value: "5555"
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
                  optional: true
            - name: DD_CLUSTER_CHECKS_ENABLED
              value: "true"
            - name: DD_EXTRA_CONFIG_PROVIDERS
              value: "kube_endpoints kube_services"
            - name: DD_EXTRA_LISTENERS
              value: "kube_endpoints kube_services"
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_LEADER_ELECTION
              value: "true"
            - name: DD_LEADER_LEASE_DURATION
              value: "60"
            - name: DD_COLLECT_KUBERNETES_EVENTS
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                  name: datadog-cluster-agent
                  key: token
            - name: DD_KUBE_RESOURCES_NAMESPACE
              value: datadog
            - name: DD_ORCHESTRATOR_EXPLORER_ENABLED
              value: "true"
            - name: DD_ORCHESTRATOR_EXPLORER_CONTAINER_SCRUBBING_ENABLED
              value: "true"
            - name: DD_COMPLIANCE_CONFIG_ENABLED
              value:  "false"
          livenessProbe:
            failureThreshold: 6
            httpGet:
              path: /live
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            failureThreshold: 6
            httpGet:
              path: /ready
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
          volumeMounts:
            - name: installinfo
              subPath: install_info
              mountPath: /etc/datadog-agent/install_info
              readOnly: true
        volumes:
          - name: installinfo
            configMap:
              name: datadog-installinfo
        nodeSelector:
          kubernetes.io/os: linux
datadog, datadog-cluster-agent, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret

********************

	Release was not present in Helm.  Diff will show entire contents as new.

********************
kube-system, private-traefik, ServiceAccount (v1) has been added:
- 
+ # Source: traefik/templates/rbac/serviceaccount.yaml
+ kind: ServiceAccount
+ apiVersion: v1
+ metadata:
+   name: private-traefik
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: private-traefik
+   annotations:
kube-system, private-traefik, ClusterRole (rbac.authorization.k8s.io) has been added:
- 
+ # Source: traefik/templates/rbac/clusterrole.yaml
+ kind: ClusterRole
+ apiVersion: rbac.authorization.k8s.io/v1
+ metadata:
+   name: private-traefik
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: private-traefik
+ rules:
+   - apiGroups:
+       - ""
+     resources:
+       - services
+       - endpoints
+       - secrets
+     verbs:
+       - get
+       - list
+       - watch
+   - apiGroups:
+       - extensions
+       - networking.k8s.io
+     resources:
+       - ingresses
+       - ingressclasses
+     verbs:
+       - get
+       - list
+       - watch
+   - apiGroups:
+       - extensions
+       - networking.k8s.io
+     resources:
+       - ingresses/status
+     verbs:
+       - update
+   - apiGroups:
+       - traefik.containo.us
+     resources:
+       - ingressroutes
+       - ingressroutetcps
+       - ingressrouteudps
+       - middlewares
+       - tlsoptions
+       - tlsstores
+       - traefikservices
+       - serverstransports
+     verbs:
+       - get
+       - list
+       - watch
kube-system, private-traefik, ClusterRoleBinding (rbac.authorization.k8s.io) has been added:
- 
+ # Source: traefik/templates/rbac/clusterrolebinding.yaml
+ kind: ClusterRoleBinding
+ apiVersion: rbac.authorization.k8s.io/v1
+ metadata:
+   name: private-traefik
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: private-traefik
+ roleRef:
+   apiGroup: rbac.authorization.k8s.io
+   kind: ClusterRole
+   name: private-traefik
+ subjects:
+   - kind: ServiceAccount
+     name: private-traefik
+     namespace: kube-system
kube-system, private-traefik, Deployment (apps) has been added:
- 
+ # Source: traefik/templates/deployment.yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: private-traefik
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: private-traefik
+   annotations:
+ spec:
+   replicas: 1
+   selector:
+     matchLabels:
+       app.kubernetes.io/name: traefik
+       app.kubernetes.io/instance: private-traefik
+   strategy:
+     type: RollingUpdate
+     rollingUpdate:
+       maxSurge: 1
+       maxUnavailable: 1
+   template: 
+     metadata:
+       annotations:
+       labels:
+         app.kubernetes.io/name: traefik
+         helm.sh/chart: traefik-9.14.2
+         app.kubernetes.io/managed-by: Helm
+         app.kubernetes.io/instance: private-traefik
+     spec:
+       serviceAccountName: private-traefik
+       terminationGracePeriodSeconds: 60
+       hostNetwork: false
+       containers:
+       - image: "traefik:2.4.2"
+         imagePullPolicy: IfNotPresent
+         name: private-traefik
+         resources:
+         readinessProbe:
+           httpGet:
+             path: /ping
+             port: 9000
+           failureThreshold: 1
+           initialDelaySeconds: 10
+           periodSeconds: 10
+           successThreshold: 1
+           timeoutSeconds: 2
+         livenessProbe:
+           httpGet:
+             path: /ping
+             port: 9000
+           failureThreshold: 3
+           initialDelaySeconds: 10
+           periodSeconds: 10
+           successThreshold: 1
+           timeoutSeconds: 2
+         ports:
+         - name: "traefik"
+           containerPort: 9000
+           protocol: "TCP"
+         - name: "web"
+           containerPort: 8000
+           protocol: "TCP"
+         - name: "websecure"
+           containerPort: 8443
+           protocol: "TCP"
+         securityContext:
+           capabilities:
+             drop:
+             - ALL
+           readOnlyRootFilesystem: true
+           runAsGroup: 65532
+           runAsNonRoot: true
+           runAsUser: 65532
+         volumeMounts:
+           - name: data
+             mountPath: /data
+           - name: tmp
+             mountPath: /tmp
+         args:
+           - "--entryPoints.traefik.address=:9000/tcp"
+           - "--entryPoints.web.address=:8000/tcp"
+           - "--entryPoints.websecure.address=:8443/tcp"
+           - "--api.dashboard=true"
+           - "--ping=true"
+           - "--providers.kubernetescrd"
+           - "--providers.kubernetesingress"
+           - "--providers.kubernetesingress.ingressendpoint.publishedservice=kube-system/private-traefik"
+           - "--log.format=json"
+           - "--accesslog=true"
+           - "--accesslog.format=json"
+           - "--accesslog.fields.defaultmode=keep"
+           - "--accesslog.fields.headers.defaultmode=drop"
+           - "--providers.kubernetescrd.ingressclass=traefik"
+           - "--providers.kubernetesingress.ingressclass=traefik"
+       volumes:
+         - name: data
+           emptyDir: {}
+         - name: tmp
+           emptyDir: {}
+       securityContext:
+         fsGroup: 65532
kube-system, , List (v1) has been added:
- 
+ # Source: traefik/templates/service.yaml
+ apiVersion: v1
+ kind: List
+ items:
+   - apiVersion: v1
+     kind: Service
+     metadata:
+       name: private-traefik
+       labels:
+         app.kubernetes.io/name: traefik
+         helm.sh/chart: traefik-9.14.2
+         app.kubernetes.io/managed-by: Helm
+         app.kubernetes.io/instance: private-traefik
+       annotations:
+         service.beta.kubernetes.io/azure-load-balancer-internal: true
+         service.beta.kubernetes.io/azure-load-balancer-internal-subnet: data-tier
+     spec:
+       type: LoadBalancer
+       externalTrafficPolicy: Local
+       selector:
+         app.kubernetes.io/name: traefik
+         app.kubernetes.io/instance: private-traefik
+       ports:
+       - port: 80
+         name: web
+         targetPort: "web"
+         protocol: "TCP"
+       - port: 443
+         name: websecure
+         targetPort: "websecure"
+         protocol: "TCP"
kube-system, default, TLSOption (traefik.containo.us) has been added:
- 
+ # Source: traefik/templates/tlsoption.yaml
+ apiVersion: traefik.containo.us/v1alpha1
+ kind: TLSOption
+ metadata:
+   name: default
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: private-traefik
+ spec:
+   cipherSuites:
+   - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+   - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+   - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+   - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+   - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
+   - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
+   minVersion: VersionTLS12

********************

	Release was not present in Helm.  Diff will show entire contents as new.

********************
kube-system, public-traefik, ClusterRoleBinding (rbac.authorization.k8s.io) has been added:
- 
+ # Source: traefik/templates/rbac/clusterrolebinding.yaml
+ kind: ClusterRoleBinding
+ apiVersion: rbac.authorization.k8s.io/v1
+ metadata:
+   name: public-traefik
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: public-traefik
+ roleRef:
+   apiGroup: rbac.authorization.k8s.io
+   kind: ClusterRole
+   name: public-traefik
+ subjects:
+   - kind: ServiceAccount
+     name: public-traefik
+     namespace: kube-system
kube-system, public-traefik, Deployment (apps) has been added:
- 
+ # Source: traefik/templates/deployment.yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: public-traefik
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: public-traefik
+   annotations:
+ spec:
+   replicas: 1
+   selector:
+     matchLabels:
+       app.kubernetes.io/name: traefik
+       app.kubernetes.io/instance: public-traefik
+   strategy:
+     type: RollingUpdate
+     rollingUpdate:
+       maxSurge: 1
+       maxUnavailable: 1
+   template: 
+     metadata:
+       annotations:
+       labels:
+         app.kubernetes.io/name: traefik
+         helm.sh/chart: traefik-9.14.2
+         app.kubernetes.io/managed-by: Helm
+         app.kubernetes.io/instance: public-traefik
+     spec:
+       serviceAccountName: public-traefik
+       terminationGracePeriodSeconds: 60
+       hostNetwork: false
+       containers:
+       - image: "traefik:2.4.2"
+         imagePullPolicy: IfNotPresent
+         name: public-traefik
+         resources:
+         readinessProbe:
+           httpGet:
+             path: /ping
+             port: 9000
+           failureThreshold: 1
+           initialDelaySeconds: 10
+           periodSeconds: 10
+           successThreshold: 1
+           timeoutSeconds: 2
+         livenessProbe:
+           httpGet:
+             path: /ping
+             port: 9000
+           failureThreshold: 3
+           initialDelaySeconds: 10
+           periodSeconds: 10
+           successThreshold: 1
+           timeoutSeconds: 2
+         ports:
+         - name: "traefik"
+           containerPort: 9000
+           protocol: "TCP"
+         - name: "web"
+           containerPort: 8000
+           protocol: "TCP"
+         - name: "websecure"
+           containerPort: 8443
+           protocol: "TCP"
+         securityContext:
+           capabilities:
+             drop:
+             - ALL
+           readOnlyRootFilesystem: true
+           runAsGroup: 65532
+           runAsNonRoot: true
+           runAsUser: 65532
+         volumeMounts:
+           - name: data
+             mountPath: /data
+           - name: tmp
+             mountPath: /tmp
+         args:
+           - "--entryPoints.traefik.address=:9000/tcp"
+           - "--entryPoints.web.address=:8000/tcp"
+           - "--entryPoints.websecure.address=:8443/tcp"
+           - "--api.dashboard=true"
+           - "--ping=true"
+           - "--providers.kubernetescrd"
+           - "--providers.kubernetesingress"
+           - "--providers.kubernetesingress.ingressendpoint.publishedservice=kube-system/public-traefik"
+           - "--log.format=json"
+           - "--accesslog=true"
+           - "--accesslog.format=json"
+           - "--accesslog.fields.defaultmode=keep"
+           - "--accesslog.fields.headers.defaultmode=drop"
+           - "--providers.kubernetescrd.ingressclass=public-traefik"
+           - "--providers.kubernetesingress.ingressclass=public-traefik"
+       volumes:
+         - name: data
+           emptyDir: {}
+         - name: tmp
+           emptyDir: {}
+       securityContext:
+         fsGroup: 65532
kube-system, , List (v1) has been added:
- 
+ # Source: traefik/templates/service.yaml
+ apiVersion: v1
+ kind: List
+ items:
+   - apiVersion: v1
+     kind: Service
+     metadata:
+       name: public-traefik
+       labels:
+         app.kubernetes.io/name: traefik
+         helm.sh/chart: traefik-9.14.2
+         app.kubernetes.io/managed-by: Helm
+         app.kubernetes.io/instance: public-traefik
+       annotations:
+         loadBalancerIP: 20.65.25.172
+         service.beta.kubernetes.io/azure-load-balancer-internal: false
+         service.beta.kubernetes.io/azure-load-balancer-internal-subnet: app-tier
+         service.beta.kubernetes.io/azure-load-balancer-resource-group: prodpublick8s
+     spec:
+       type: LoadBalancer
+       externalTrafficPolicy: Local
+       selector:
+         app.kubernetes.io/name: traefik
+         app.kubernetes.io/instance: public-traefik
+       ports:
+       - port: 80
+         name: web
+         targetPort: "web"
+         protocol: "TCP"
+       - port: 443
+         name: websecure
+         targetPort: "websecure"
+         protocol: "TCP"
kube-system, default, TLSOption (traefik.containo.us) has been added:
- 
+ # Source: traefik/templates/tlsoption.yaml
+ apiVersion: traefik.containo.us/v1alpha1
+ kind: TLSOption
+ metadata:
+   name: default
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: public-traefik
+ spec:
+   cipherSuites:
+   - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+   - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+   - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+   - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+   - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
+   - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
+   minVersion: VersionTLS12
kube-system, public-traefik, ServiceAccount (v1) has been added:
- 
+ # Source: traefik/templates/rbac/serviceaccount.yaml
+ kind: ServiceAccount
+ apiVersion: v1
+ metadata:
+   name: public-traefik
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: public-traefik
+   annotations:
kube-system, public-traefik, ClusterRole (rbac.authorization.k8s.io) has been added:
- 
+ # Source: traefik/templates/rbac/clusterrole.yaml
+ kind: ClusterRole
+ apiVersion: rbac.authorization.k8s.io/v1
+ metadata:
+   name: public-traefik
+   labels:
+     app.kubernetes.io/name: traefik
+     helm.sh/chart: traefik-9.14.2
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/instance: public-traefik
+ rules:
+   - apiGroups:
+       - ""
+     resources:
+       - services
+       - endpoints
+       - secrets
+     verbs:
+       - get
+       - list
+       - watch
+   - apiGroups:
+       - extensions
+       - networking.k8s.io
+     resources:
+       - ingresses
+       - ingressclasses
+     verbs:
+       - get
+       - list
+       - watch
+   - apiGroups:
+       - extensions
+       - networking.k8s.io
+     resources:
+       - ingresses/status
+     verbs:
+       - update
+   - apiGroups:
+       - traefik.containo.us
+     resources:
+       - ingressroutes
+       - ingressroutetcps
+       - ingressrouteudps
+       - middlewares
+       - tlsoptions
+       - tlsstores
+       - traefikservices
+       - serverstransports
+     verbs:
+       - get
+       - list
+       - watch

grafana, grafana, StatefulSet (apps) has changed:
  # Source: grafana/templates/statefulset.yaml
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: grafana
    namespace: grafana
    labels:
      helm.sh/chart: grafana-6.4.4
      app.kubernetes.io/name: grafana
      app.kubernetes.io/instance: grafana
      app.kubernetes.io/version: "7.4.2"
      app.kubernetes.io/managed-by: Helm
  spec:
    replicas: 1
    selector:
      matchLabels:
        app.kubernetes.io/name: grafana
        app.kubernetes.io/instance: grafana
    serviceName: grafana-headless
    template:
      metadata:
        labels:
          app.kubernetes.io/name: grafana
          app.kubernetes.io/instance: grafana
        annotations:
          checksum/config: 1680303ccd03846ce9818803048ed1a7abe052869b8d40c59ad43f7134800b30
          checksum/dashboards-json-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
          checksum/sc-dashboard-provider-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
-         checksum/secret: 9a942fb673a56f7e7dba4c3ea5e6b728d72939b5657459a790f461d0ed50b599
+         checksum/secret: 5af247a6d15db4c1f72e3de87e996bfd84f8b855c0fdd2619b1d37a26b3d0096
      spec:
        
        serviceAccountName: grafana
        securityContext:
          fsGroup: 472
          runAsGroup: 472
          runAsUser: 472
        initContainers:
          - name: init-chown-data
            image: "busybox:1.31.1"
            imagePullPolicy: IfNotPresent
            securityContext:
              runAsNonRoot: false
              runAsUser: 0
            command: ["chown", "-R", "472:472", "/var/lib/grafana"]
            resources:
              {}
            volumeMounts:
              - name: storage
                mountPath: "/var/lib/grafana"
        containers:
          - name: grafana
            image: "grafana/grafana:7.4.2"
            imagePullPolicy: IfNotPresent
            volumeMounts:
              - name: config
                mountPath: "/etc/grafana/grafana.ini"
                subPath: grafana.ini
              - name: ldap
                mountPath: "/etc/grafana/ldap.toml"
                subPath: ldap.toml
              - name: storage
                mountPath: "/var/lib/grafana"
              - name: config
                mountPath: "/etc/grafana/provisioning/datasources/datasources.yaml"
                subPath: datasources.yaml
            ports:
              - name: service
                containerPort: 80
                protocol: TCP
              - name: grafana
                containerPort: 3000
                protocol: TCP
            env:
              - name: GF_SECURITY_ADMIN_USER
                valueFrom:
                  secretKeyRef:
                    name: grafana
                    key: admin-user
              - name: GF_SECURITY_ADMIN_PASSWORD
                valueFrom:
                  secretKeyRef:
                    name: grafana
                    key: admin-password
              
            livenessProbe:
              failureThreshold: 10
              httpGet:
                path: /api/health
                port: 3000
              initialDelaySeconds: 60
              timeoutSeconds: 30
            readinessProbe:
              httpGet:
                path: /api/health
                port: 3000
            resources:
              limits:
                cpu: 200m
                memory: 256Mi
              requests:
                cpu: 100m
                memory: 128Mi
        volumes:
          - name: config
            configMap:
              name: grafana
          - name: ldap
            secret:
              secretName: grafana
              items:
                - key: ldap-toml
                  path: ldap.toml
        # nothing
    volumeClaimTemplates:
    - metadata:
        name: storage
      spec:
        accessModes: [ReadWriteOnce]
        storageClassName: 
        resources:
          requests:
            storage: 50
grafana, grafana, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret

jenkins-infra, jenkins-infra, ConfigMap (v1) has changed:
  # Source: jenkins/charts/jenkins/templates/config.yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: jenkins-infra
    namespace: jenkins-infra
    labels:
      "app.kubernetes.io/name": 'jenkins'
      "app.kubernetes.io/managed-by": "Helm"
      "app.kubernetes.io/instance": "jenkins-infra"
      "app.kubernetes.io/component": "jenkins-controller"
  data:
    apply_config.sh: |-
      set -e
      echo "disable Setup Wizard"
      # Prevent Setup Wizard when JCasC is enabled
      echo $JENKINS_VERSION > /var/jenkins_home/jenkins.install.UpgradeWizard.state
      echo $JENKINS_VERSION > /var/jenkins_home/jenkins.install.InstallUtil.lastExecVersion
      echo "remove all plugins from shared volume"
      # remove all plugins from shared volume
      rm -rf /var/jenkins_home/plugins/*
      echo "download plugins"
      # Install missing plugins
      cp /var/jenkins_config/plugins.txt /var/jenkins_home;
      rm -rf /usr/share/jenkins/ref/plugins/*.lock
      version () { echo "$@" | awk -F. '{ printf("%d%03d%03d%03d\n", $1,$2,$3,$4); }'; }
      if [ -f "/usr/share/jenkins/jenkins.war" ] && [ -n "$(command -v jenkins-plugin-cli)" 2>/dev/null ] && [ $(version $(jenkins-plugin-cli --version)) -ge $(version "2.1.1") ]; then
        jenkins-plugin-cli --war "/usr/share/jenkins/jenkins.war" --plugin-file "/var/jenkins_home/plugins.txt";
      else
        /usr/local/bin/install-plugins.sh `echo $(cat /var/jenkins_home/plugins.txt)`;
      fi
      echo "copy plugins to shared volume"
      # Copy plugins to shared volume
      yes n | cp -i /usr/share/jenkins/ref/plugins/* /var/jenkins_plugins/;
      echo "finished initialization"
    plugins.txt: |-
      ansicolor
      antisamy-markup-formatter
      authentication-tokens
      basic-branch-build-strategies
      blueocean
      blueocean-autofavorite
      blueocean-commons
      blueocean-config
      blueocean-core-js
      blueocean-dashboard
      blueocean-display-url
      blueocean-events
      blueocean-git-pipeline
      blueocean-github-pipeline
      blueocean-i18n
      blueocean-jira
      blueocean-jwt
      blueocean-personalization
      blueocean-pipeline-api-impl
      blueocean-pipeline-editor
      blueocean-pipeline-scm-api
      blueocean-rest
      blueocean-rest-impl
      blueocean-web
      branch-api
      build-name-setter
      config-file-provider
      configuration-as-code
      credentials
      credentials-binding
      extended-read-permission
      git
      git-client
      github
      github-api
      github-branch-source
      github-checks
      github-label-filter
      inline-pipeline
      jira
      job-dsl
      junit
      kubernetes
-     ldap:2.3
+     ldap
      lockable-resources
      pipeline-utility-steps
      matrix-auth
      matrix-project
      pipeline-build-step
      pipeline-github
      pipeline-graph-analysis
      pipeline-input-step
      pipeline-milestone-step
      pipeline-model-api
      pipeline-model-definition
      pipeline-model-extensions
      pipeline-rest-api
      pipeline-stage-step
      pipeline-stage-tags-metadata
      pipeline-stage-view
      plain-credentials
      prometheus
      scm-api
      scm-filter-branch-pr
      script-security
      ssh-agent
      ssh-credentials
      support-core
      token-macro
      variant
      warnings-ng
      workflow-aggregator
      workflow-api
      workflow-basic-steps
      workflow-cps
      workflow-cps-global-lib
      workflow-durable-task-step
      workflow-job
      workflow-multibranch
      workflow-scm-step
      workflow-step-api
      workflow-support
      javadoc
      metrics
jenkins-infra, jenkins-infra, StatefulSet (apps) has changed:
  # Source: jenkins/charts/jenkins/templates/jenkins-controller-statefulset.yaml
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: jenkins-infra
    namespace: jenkins-infra
    labels:
      "app.kubernetes.io/name": 'jenkins'
      "helm.sh/chart": "jenkins-3.1.13"
      "app.kubernetes.io/managed-by": "Helm"
      "app.kubernetes.io/instance": "jenkins-infra"
      "app.kubernetes.io/component": "jenkins-controller"
  spec:
    serviceName: jenkins-infra
    replicas: 1
    selector:
      matchLabels:
        "app.kubernetes.io/component": "jenkins-controller"
        "app.kubernetes.io/instance": "jenkins-infra"
    template:
      metadata:
        labels:
          "app.kubernetes.io/name": 'jenkins'
          "app.kubernetes.io/managed-by": "Helm"
          "app.kubernetes.io/instance": "jenkins-infra"
          "app.kubernetes.io/component": "jenkins-controller"
        annotations:
-         checksum/config: 067630008e2c841cb80940b7408bb7a3df1db90f228ddd2db9507d7f1256dea2
+         checksum/config: b39759a9fc38a2ff13a970ae0dd0bf3c0e17f45253b207b93d211c3722c04682
      spec:
        securityContext:
      
          runAsUser: 1000
          fsGroup: 1000
          runAsNonRoot: true
        serviceAccountName: "jenkins-controller"
        initContainers:
          - name: "init"
            image: "jenkins/jenkins:2.281-jdk11"
            imagePullPolicy: "Always"
            command: [ "sh", "/var/jenkins_config/apply_config.sh" ]
            resources:
              limits:
                cpu: 2000m
                memory: 4096Mi
              requests:
                cpu: 50m
                memory: 256Mi
            volumeMounts:
              - mountPath: /var/jenkins_home
                name: jenkins-home
              - mountPath: /var/jenkins_config
                name: jenkins-config
              - mountPath: /usr/share/jenkins/ref/plugins
                name: plugins
              - mountPath: /var/jenkins_plugins
                name: plugin-dir
        containers:
          - name: jenkins
            image: "jenkins/jenkins:2.281-jdk11"
            imagePullPolicy: "Always"
            args: [ "--httpPort=8080"]
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: JAVA_OPTS
                value: >-
                   -Dcasc.reload.token=$(POD_NAME) 
              - name: JENKINS_OPTS
                value: >-
                  
              - name: JENKINS_SLAVE_AGENT_PORT
                value: "50000"
              - name: SECRETS
                value: /var/jenkins_secrets
              - name: CASC_JENKINS_CONFIG
                value: /var/jenkins_home/casc_configs
            ports:
              - containerPort: 8080
                name: http
              - containerPort: 50000
                name: agent-listener
            livenessProbe:
              failureThreshold: 5
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            startupProbe:
              failureThreshold: 12
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            resources:
              limits:
                cpu: 2000m
                memory: 4096Mi
              requests:
                cpu: 50m
                memory: 256Mi
            volumeMounts:
              - mountPath: /var/jenkins_secrets
                name: jenkins-secrets
                readOnly: true
              - mountPath: /var/jenkins_home
                name: jenkins-home
                readOnly: false
              - mountPath: /var/jenkins_config
                name: jenkins-config
                readOnly: true
              - mountPath: /usr/share/jenkins/ref/plugins/
                name: plugin-dir
                readOnly: false
              - name: sc-config-volume
                mountPath: /var/jenkins_home/casc_configs
              - name: admin-secret
                mountPath: /run/secrets/chart-admin-username
                subPath: jenkins-admin-user
                readOnly: true
              - name: admin-secret
                mountPath: /run/secrets/chart-admin-password
                subPath: jenkins-admin-password
                readOnly: true
          - name: config-reload
            image: "kiwigrid/k8s-sidecar:0.1.275"
            imagePullPolicy: IfNotPresent
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: LABEL
                value: "jenkins-infra-jenkins-config"
              - name: FOLDER
                value: "/var/jenkins_home/casc_configs"
              - name: NAMESPACE
                value: 'jenkins-infra'
              - name: REQ_URL
                value: "http://localhost:8080/reload-configuration-as-code/?casc-reload-token=$(POD_NAME)"
              - name: REQ_METHOD
                value: "POST"
              - name: REQ_RETRY_CONNECT
                value: "10"
            resources:
              {}
            volumeMounts:
              - name: sc-config-volume
                mountPath: "/var/jenkins_home/casc_configs"
              - name: jenkins-home
                mountPath: /var/jenkins_home
  
        volumes:
        - name: jenkins-secrets
          secret:
            secretName: jenkins-secrets
        - name: plugins
          emptyDir: {}
        - name: jenkins-config
          configMap:
            name: jenkins-infra
        - name: plugin-dir
          emptyDir: {}
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-infra
        - name: sc-config-volume
          emptyDir: {}
        - name: admin-secret
          secret:
            secretName: jenkins-infra
jenkins-infra, jenkins-infra, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret

release, default-release-jenkins, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret

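For reference, a minimal sketch of how a workload selects one of the two controllers rendered above: Traefik's kubernetesingress provider filters Ingress objects on their kubernetes.io/ingress.class annotation (the value passed via --providers.kubernetesingress.ingressclass), so an application opts into the public or the private entry point through that annotation. The hostname and Service name below are hypothetical, and the Ingress apiVersion depends on the cluster's Kubernetes version:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
  annotations:
    # "public-traefik" routes through the public LoadBalancer rendered above;
    # "traefik" would route through the private (internal) controller instead.
    kubernetes.io/ingress.class: public-traefik
spec:
  rules:
    - host: example.jenkins.io   # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app   # hypothetical Service in the same namespace
                port:
                  number: 80
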
@dduportal dduportal merged commit 9650a40 into jenkins-infra:master Feb 26, 2021
@dduportal
Contributor

For information: with this PR merged, the master branch is failing with the following error message:

Listing releases matching ^private-traefik$
private-traefik	kube-system	1       	2021-02-26 10:04:44.45484069 +0000 UTC	deployed	traefik-9.14.2	2.4.2      


UPDATED RELEASES:
NAME              CHART             VERSION
private-traefik   traefik/traefik    9.14.2


FAILED RELEASES:
NAME
public-traefik
in clusters/publick8s.yaml: in .helmfiles[3]: in ../helmfile.d/traefik.yaml: failed processing release public-traefik: the following cmd exited with status 1:
  /usr/local/bin/helm helm upgrade --install --reset-values public-traefik traefik/traefik --version 9.14.2 --wait --timeout 300s --atomic --create-namespace --namespace kube-system --values /tmp/values429240333 --values /tmp/values793134088 --values /tmp/values563354567 --history-max 10

  Error: rendered manifests contain a resource that already exists. Unable to continue with install: TLSOption "default" in namespace "kube-system" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "public-traefik": current value is "private-traefik"

WIP on fixing this; a follow-up PR is coming.
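
One way to avoid this kind of ownership clash (a minimal sketch, not necessarily the fix that landed afterwards, and assuming the chart's tlsOptions value is what renders the "default" TLSOption, as it does in traefik chart 9.x) is to declare the shared default TLSOption from only one of the two releases: both deploy into kube-system, and a single resource can only be owned by one Helm release.

# values for private-traefik: keeps ownership of the shared "default" TLSOption
tlsOptions:
  default:
    minVersion: VersionTLS12
    cipherSuites:
      - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
      - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
      # remaining suites identical to the shared values

# values for public-traefik: render no TLSOption of its own
tlsOptions: {}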

@dduportal
Contributor

dduportal commented Feb 26, 2021

Thanks @jetersen for this awesome work! We only had to fix a typo in #903; everything else works well 👍

@jetersen
Contributor Author

@dduportal sorry my bad 😓

@dduportal
Contributor

Oh, don't be sorry about this; we all reviewed it and decided to merge. You did the heavy lifting, and it really is nice!
