Clean install of a public helm chart results in numerous Modified / Unknown resources #189
Comments
Just an FYI: Rancher 2.5.4-rc5 generates this type of modification set. Note that nodeAffinity and podAffinity were never supplied in the original install, and they are not listed when I inspect the deployed resource.
Here is a related issue for posterity: #124
This issue can easily be reproduced by installing a YAML manifest with Fleet that contains null values. The Helm chart mentioned by the OP renders a manifest like the simplified sketch below; note the null values in the rendered output. Installing such a manifest results in the resource being stuck in the Modified state. Rancher issue ref: rancher/rancher#30696
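A minimal sketch of the kind of rendered manifest being described (the original simplified example was not preserved in the thread; the resource name and the specific null fields here are illustrative assumptions):

```yaml
# Hypothetical output of `helm template` for a chart that emits null values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example              # illustrative name, not from the actual chart
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      creationTimestamp: null   # null rendered into the manifest
      labels:
        app: example
    spec:
      affinity: null            # null fields like these never appear on the live object,
      containers:               # so the desired-vs-live comparison flags the resource
      - name: example
        image: nginx
        resources: null
```

Because the API server drops or defaults such null fields on the live object, the desired state never matches what is actually running, which appears to be why these resources stay Modified.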
Running into the same issue while trying to install Longhorn using Fleet. It should be a very simple Helm-based install, but it's stuck in the Modified state.
My fleet.yaml is:

```yaml
defaultNamespace: longhorn-system
helm:
  chart: longhorn
  repo: https://charts.longhorn.io
  version: 1.1.0
```
I think this is also happening to me for a resource that has a creationTimestamp of null. I've made sure to specify the rest of the persistentVolumeClaimTemplate object. Output of `kubectl get statefulsets.apps electrumx -o yaml` (managedFields trimmed for readability):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    field.cattle.io/publicEndpoints: '[{"addresses":["192.168.1.63"],"port":50002,"protocol":"TCP","serviceName":"default:electrumx","allNodes":false}]'
    meta.helm.sh/release-name: apps
    meta.helm.sh/release-namespace: default
    objectset.rio.cattle.io/id: default-apps
  creationTimestamp: "2021-02-04T22:21:54Z"
  generation: 1
  labels:
    app: electrumx
    app.kubernetes.io/managed-by: Helm
    objectset.rio.cattle.io/hash: 48a9d76eea6dc292a0244df36318f37481348de9
  # managedFields trimmed
  name: electrumx
  namespace: default
  resourceVersion: "69597335"
  selfLink: /apis/apps/v1/namespaces/default/statefulsets/electrumx
  uid: 60264bca-cc03-442c-ab7c-810af99b4924
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: electrumx
  serviceName: electrumx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: electrumx
    spec:
      containers:
      - env:
        - name: COIN
          value: BitcoinSegwit
        - name: DAEMON_URL
          valueFrom:
            secretKeyRef:
              key: DAEMON_URL
              name: electrumx
        image: lukechilds/electrumx
        imagePullPolicy: Always
        name: electrumx
        ports:
        - containerPort: 50002
          name: electrumx
          protocol: TCP
        resources:
          limits:
            memory: 3072M
          requests:
            memory: 3072M
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data
          name: electrumx-data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      creationTimestamp: null
      name: electrumx-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi
      storageClassName: nfs-client
      volumeMode: Filesystem
    status:
      phase: Pending
status:
  collisionCount: 0
  currentReplicas: 1
  currentRevision: electrumx-79568c448c
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updateRevision: electrumx-79568c448c
  updatedReplicas: 1
```
I'm having this exact same issue with 3 PSPs in my chart.
This blocks proper installation of simple Helm charts for opa-gatekeeper and kube-prometheus-stack.
I already tried raising this question in Slack, and got confirmation that this is not working as expected and should be filed as an issue.
All of the resources are stuck in Modified / Unknown state.
Can anyone explain why everything is stuck in this condition, and what this error in the Conditions tab means?
This is my first attempt at installing the Helm chart "bitnami/thanos" ... The setup is sketched below.
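For context, a minimal fleet.yaml of the kind being described, following the same schema as the Longhorn example above (the Bitnami repo URL is the standard one; the value shown is a placeholder, not the reporter's actual values.yaml):

```yaml
defaultNamespace: thanos
helm:
  chart: thanos
  repo: https://charts.bitnami.com/bitnami
  values:
    query:
      enabled: true   # placeholder; the actual values were supplied via values.yaml
```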
Here's a workaround that worked in 2.5.3; now some of the resources are highlighted as Modified again, and the GitRepo is stuck out of sync indefinitely.
What I have to do in order for Fleet to go all green is just crazy; it doesn't seem right. I don't override anything in my install, I'm just supplying values via values.yaml.
One thing I noticed: whenever `resources:` is left unspecified or empty, the empty object appears to be converted into `{}` and then doesn't match the `null` default value (or vice versa), so Fleet complains that `resources` is modified. The rest is odd too: the Service object expects a nodePort value even for a service of type ClusterIP, each app controller is expected to have a `type` field, and so on. Basically, the sketch below is the kind of thing I had to do for bitnami/thanos to install properly and turn green in Fleet. This seems excessive and I seriously doubt I'm doing things correctly, but it was the only way I could get everything to validate and work. Again, it seems strange, and most of our workloads just won't work well if we have to create such long lists of comparison exclusions, even more so because they tend to validate differently in future versions of Rancher (just like in this particular case, where things more or less started to work in v2.5.3 and stopped working in v2.5.4-rc*).
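The original comparison-exclusion config was not preserved in the thread; the following is a rough sketch of the kind of per-resource diff overrides being described, using Fleet's `diff.comparePatches` option in fleet.yaml (the resource names and JSON pointer paths are illustrative assumptions, not the reporter's actual list):

```yaml
defaultNamespace: thanos
helm:
  chart: thanos
  repo: https://charts.bitnami.com/bitnami
diff:
  comparePatches:
  # One entry per resource that Fleet keeps flagging as Modified.
  - apiVersion: v1
    kind: Service
    name: thanos-query              # illustrative name
    namespace: thanos
    jsonPointers:
    - /spec/ports/0/nodePort        # ignore the nodePort the API server fills in
  - apiVersion: apps/v1
    kind: Deployment
    name: thanos-query              # illustrative name
    namespace: thanos
    jsonPointers:
    - /spec/template/spec/containers/0/resources   # ignore the {} vs null mismatch
```

Each `jsonPointers` entry tells Fleet to ignore that path when comparing desired and live state, which is what keeps the listed resources out of the Modified state.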
Thank you for any explanation, and maybe a roadmap of how this is actually supposed to work. Using my sample `values:` sections, it is easy to reproduce the same behavior in any k8s cluster.