
resource.customizations getting deleted from argocd-cm configmap #5056

Closed

innlouvate opened this issue Dec 14, 2020 · 2 comments
Labels
bug Something isn't working

Comments

@innlouvate

If you are trying to resolve an environment-specific issue or have a one-off question about an edge case that does not require a feature, please consider asking a question in the Argo CD Slack channel.

Checklist:

  • I've searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
  • I've included steps to reproduce the bug.
  • I've pasted the output of argocd version (not using the CLI locally)

Describe the bug

Applying resource.customizations via the argocd-cm ConfigMap only works for a minute or two before the value is removed from the ConfigMap, leaving no health check customizations in effect. The resource.customizations key is left in the ConfigMap with an empty value.

I have seen some previous discussion about this type of issue, but it looked like it had been closed as fixed(?)

Using Argo CD v1.5.2 deployed via argocd-operator v0.0.8 on OpenShift 3.11.
Also tested against Argo CD v1.7.10 deployed via argocd-operator v0.0.15 on OpenShift 3.11, which shows the same results.

To Reproduce

Deploy Argo CD (using the operator and the associated declarative YAML files for roles and ConfigMaps) with an argocd-cm ConfigMap similar to the one below, and observe that the value is reset/removed after a few minutes. The same behaviour can also be triggered by restarting the argocd-application-controller.

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: "argocd"
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  resource.customizations: |
    apps.openshift.io/DeploymentConfig:
      health.lua: |
        hs = {}
        if obj.status ~= nil then
          if obj.status.conditions ~= nil then
            for i, condition in ipairs(obj.status.conditions) do
              if condition.type == "Available" and condition.status == "False" then
                hs.status = "Degraded"
                hs.message = condition.message
                return hs
              end
              if condition.type == "Available" and condition.status == "True" then
                hs.status = "Healthy"
                hs.message = condition.message
                return hs
              end
            end
          end
        end
        hs.status = "Progressing"
        hs.message = "Waiting for rollout"
        return hs
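
For illustration only (not captured from a cluster), a minimal sketch of the wiped state described above: the key survives in the ConfigMap, but its value has been emptied.

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  # State seen a minute or two after applying (or after restarting the
  # application controller): the key remains with an empty value.
  resource.customizations: ""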

Expected behavior

resource.customizations is not deleted from the argocd-cm ConfigMap.

Logs

Below are logs from the argocd-application-controller picking up the custom health check, then dropping it again after a manual refresh.

time="2020-12-14T10:55:57Z" level=info msg="Cluster successfully synced" server="https://kubernetes.default.svc "
--
  | time="2020-12-14T10:55:58Z" level=info msg="getRepoObjs stats" application=test-app build_options_ms=0 helm_ms=0 manifests_ms=375 plugins_ms=0 repo_ms=0 time_ms=2813 unmarshal_ms=0 version_ms=2437
  | time="2020-12-14T10:55:58Z" level=info msg="Skipping auto-sync: application status is Synced" application=test-app
  | time="2020-12-14T10:55:58Z" level=info msg="Updated health status: Healthy -> Degraded" application=test-app dest-namespace=dev-apps dest-server="https://kubernetes.default.svc " reason=ResourceUpdated type=Normal
  | time="2020-12-14T10:55:58Z" level=info msg="Update successful" application=test-app
  | time="2020-12-14T10:55:58Z" level=info msg="Reconciliation completed" application=test-app dedup_ms=0 dest-namespace=dev-apps dest-server="https://kubernetes.default.svc " diff_ms=3 fields.level=2 git_ms=2813 health_ms=0 live_ms=0 settings_ms=0 sync_ms=0 time_ms=2840
  | time="2020-12-14T10:55:58Z" level=info msg="Refreshing app status (controller refresh requested), level (1)" application=apps
  | time="2020-12-14T10:55:58Z" level=info msg="Comparing app state (cluster: https://kubernetes.default.svc , namespace: agent-argo)" application=apps
  | time="2020-12-14T10:55:58Z" level=info msg="getRepoObjs stats" application=apps build_options_ms=0 helm_ms=0 manifests_ms=8 plugins_ms=0 repo_ms=0 time_ms=8 unmarshal_ms=0 version_ms=0
  | time="2020-12-14T10:55:58Z" level=info msg="Updated health status: Healthy -> Degraded" application=apps dest-namespace=agent-argo dest-server="https://kubernetes.default.svc " reason=ResourceUpdated type=Normal
  | time="2020-12-14T10:55:58Z" level=info msg="Update successful" application=apps
  | time="2020-12-14T10:55:58Z" level=info msg="Reconciliation completed" application=apps dedup_ms=0 dest-namespace=agent-argo dest-server="https://kubernetes.default.svc " diff_ms=2 fields.level=1 git_ms=8 health_ms=0 live_ms=0 settings_ms=0 sync_ms=0 time_ms=26
  | time="2020-12-14T10:56:00Z" level=info msg="Notifying 1 settings subscribers: [0xc0008c2780]"
  | time="2020-12-14T10:56:00Z" level=info msg="invalidating live state cache"
  | time="2020-12-14T10:56:00Z" level=warning msg="invalidated cluster" server="https://kubernetes.default.svc "
  | time="2020-12-14T10:56:00Z" level=info msg="live state cache invalidated"
  | time="2020-12-14T10:56:29Z" level=info msg="Refreshing app status (normal refresh requested), level (2)" application=test-app
  | time="2020-12-14T10:56:29Z" level=info msg="Comparing app state (cluster: https://kubernetes.default.svc , namespace: dev-apps)" application=test-app
  | time="2020-12-14T10:56:29Z" level=info msg="Start syncing cluster" server="https://kubernetes.default.svc "
  | time="2020-12-14T10:56:31Z" level=info msg="Cluster successfully synced" server="https://kubernetes.default.svc "
  | time="2020-12-14T10:56:31Z" level=info msg="getRepoObjs stats" application=test-app build_options_ms=0 helm_ms=0 manifests_ms=402 plugins_ms=0 repo_ms=0 time_ms=2645 unmarshal_ms=0 version_ms=2241
  | time="2020-12-14T10:56:31Z" level=info msg="Skipping auto-sync: application status is Synced" application=test-app
  | time="2020-12-14T10:56:31Z" level=info msg="Updated health status: Degraded -> Healthy" application=test-app dest-namespace=dev-apps dest-server="https://kubernetes.default.svc " reason=ResourceUpdated type=Normal
  | time="2020-12-14T10:56:31Z" level=info msg="Update successful" application=test-app
  | time="2020-12-14T10:56:31Z" level=info msg="Reconciliation completed" application=test-app dedup_ms=0 dest-namespace=dev-apps dest-server="https://kubernetes.default.svc " diff_ms=3 fields.level=2 git_ms=2645 health_ms=0 live_ms=1 settings_ms=0 sync_ms=0 time_ms=2668
  | time="2020-12-14T10:56:31Z" level=info msg="Refreshing app status (controller refresh requested), level (1)" application=apps
  | time="2020-12-14T10:56:31Z" level=info msg="Comparing app state (cluster: https://kubernetes.default.svc , namespace: agent-argo)" application=apps
  | time="2020-12-14T10:56:31Z" level=info msg="getRepoObjs stats" application=apps build_options_ms=0 helm_ms=0 manifests_ms=8 plugins_ms=0 repo_ms=0 time_ms=8 unmarshal_ms=0 version_ms=0
  | time="2020-12-14T10:56:31Z" level=info msg="Updated health status: Degraded -> Healthy" application=apps dest-namespace=agent-argo dest-server="https://kubernetes.default.svc " reason=ResourceUpdated type=Normal
  | time="2020-12-14T10:56:31Z" level=info msg="Update successful" application=apps
  | time="2020-12-14T10:56:31Z" level=info msg="Reconciliation completed" application=apps dedup_ms=0 dest-namespace=agent-argo dest-server="https://kubernetes.default.svc " diff_ms=2 fields.level=1 git_ms=8 health_ms=0 live_ms=1 settings_ms=0 sync_ms=0 time_ms=31

@innlouvate innlouvate added the bug Something isn't working label Dec 14, 2020
@innlouvate
Author

Resolved: this was a conflict between the operator and the ported setup.
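
For anyone hitting the same conflict with the operator: the operator appears to reconcile argocd-cm from its ArgoCD custom resource, so edits made directly to the ConfigMap can get overwritten. A minimal sketch of declaring the customization on the ArgoCD CR instead, assuming the argocd-operator's resourceCustomizations field (check the CRD of your operator version for the exact field name):

apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: argocd        # assumed name of the existing ArgoCD instance
  namespace: argocd
spec:
  # The operator is expected to render this value into resource.customizations
  # in argocd-cm, so it should no longer be wiped on reconcile.
  resourceCustomizations: |
    apps.openshift.io/DeploymentConfig:
      health.lua: |
        hs = {}
        if obj.status ~= nil and obj.status.conditions ~= nil then
          for i, condition in ipairs(obj.status.conditions) do
            if condition.type == "Available" and condition.status == "False" then
              hs.status = "Degraded"
              hs.message = condition.message
              return hs
            end
            if condition.type == "Available" and condition.status == "True" then
              hs.status = "Healthy"
              hs.message = condition.message
              return hs
            end
          end
        end
        hs.status = "Progressing"
        hs.message = "Waiting for rollout"
        return hs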

@nastacio

nastacio commented May 5, 2021

I am seeing the same problem with OpenShift 4.6 + GitOps operator 1.0.0 (Argo CD 1.8.4) and OpenShift 4.7 + GitOps operator 1.1.0 (Argo CD 2.0.0).
