
Deployment is still progressing since Ingress resource is not considered as synced #1704

Closed
augabet opened this issue Jun 6, 2019 · 29 comments
Labels
bug Something isn't working

Comments

@augabet
Contributor

augabet commented Jun 6, 2019


Describe the bug
Describe the bug
We deploy an application to an on-premise Kubernetes cluster using nginx Ingress. When deploying an application through Argo, the Ingress object is reported as still syncing even though the application is available behind the Ingress.
Consequently, our deployment stays in Progressing in Argo.

We think this is related to PR https://github.com/argoproj/argo-cd/pull/1053/files#diff-d5a0105b0157a44898f0cd002d7d827dR157 and issue #997.

Our ingress controller is nginx, and each Ingress has the following status (which we consider healthy):

...
status:
  loadBalancer: {}
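For context: Argo CD's built-in Ingress health check (introduced in the PR referenced above) treats an Ingress as Progressing until the controller publishes a load balancer endpoint in its status. A status the check would consider healthy looks roughly like this (the IP is illustrative):

```yaml
status:
  loadBalancer:
    ingress:
    - ip: 10.0.0.1
```

A controller that never writes this field back, as with the empty status above, leaves the check in Progressing indefinitely.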

To Reproduce

  1. Deploy an application with an Ingress on a Kubernetes cluster using an internal nginx ingress controller (an on-premise cluster, for example)
  2. YAML of the Ingress:
  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: my-ingress
    namespace: myns
  spec:
    rules:
    - host: whatever.example.com
      http:
        paths:
        - backend:
            serviceName: my-service
            servicePort: 80
    tls:
    - hosts:
      - whatever.example.com
  3. See error

Expected behavior
The Ingress is deployed in the Kubernetes cluster, so it should be reported as synced and healthy in Argo rather than stuck syncing, and the Application deployment should not remain in Progressing.


Version

argocd: v1.0.1+5fe1447.dirty
  BuildDate: 2019-05-28T17:26:35Z
  GitCommit: 5fe1447b722716649143c63f9fc054886d5b111c
  GitTreeState: dirty
  GoVersion: go1.11.4
  Compiler: gc
  Platform: linux/amd64
argocd-server: v1.0.1+5fe1447.dirty
  BuildDate: 2019-05-28T17:27:38Z
  GitCommit: 5fe1447b722716649143c63f9fc054886d5b111c
  GitTreeState: dirty
  GoVersion: go1.11.4
  Compiler: gc
  Platform: linux/amd64
  Ksonnet Version: 0.13.1

Logs

/tmp/argocd-linux-amd64 app list
NAME               CLUSTER                         NAMESPACE             PROJECT  STATUS  HEALTH       SYNCPOLICY  CONDITIONS
white-application  https://kubernetes.default.svc  whiteapp-development  default  Synced  Progressing  <none>      <none>


/tmp/argocd-linux-amd64 app get white-application
Name:               white-application
Project:            default
Server:             https://kubernetes.default.svc
Namespace:          whiteapp-development
URL:                https://cd.devops.caas.cagip.group.gca/applications/white-application
Repo:               https://scm.saas.cagip.group.gca/cagip/devops/tools-dashboard
Target:             HEAD
Path:               deploy
Sync Policy:        <none>
Sync Status:        Synced to HEAD (2b1892f)
Health Status:      Progressing

GROUP       KIND        NAMESPACE             NAME             STATUS  HEALTH
apps        Deployment  whiteapp-development  tools-dashboard  Synced  Healthy
extensions  Ingress     whiteapp-development  tools-dashboard  Synced  Progressing
            Service     whiteapp-development  tools-dashboard  Synced  Healthy

Have you thought about contributing a fix yourself?

Yes :)


@augabet augabet added the bug Something isn't working label Jun 6, 2019
@alexec
Contributor

alexec commented Jun 6, 2019

Please see https://argoproj.github.io/argo-cd/faq/

You can use this for the Ingress (thank you @stevesea )

 resource.customizations: |
    extensions/Ingress:
        health.lua: |
          hs = {}
          hs.status = "Healthy"
          return hs

@augabet
Contributor Author

augabet commented Jun 6, 2019

This is perfect, many thanks @stevesea! :)

@augabet augabet closed this as completed Jun 6, 2019
@zetsub0u

I'm facing the same issue. Can you explain where you set this block, @alexec? Is this in the Argo ConfigMap?

@stevesea
Contributor

Yes, it's in the argocd-cm ConfigMap. Here's the relevant section of the documentation: https://argoproj.github.io/argo-cd/operator-manual/health/#way-1-define-a-custom-health-check-in-argocd-cm-configmap
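For completeness, a minimal sketch of the whole ConfigMap (assuming Argo CD is installed in the argocd namespace; adjust if yours differs):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.customizations: |
    extensions/Ingress:
      health.lua: |
        hs = {}
        hs.status = "Healthy"
        return hs
```

Argo CD picks up changes to argocd-cm automatically; refreshing the application may be needed for health to be re-evaluated.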

@zetsub0u

Huh, I don't seem to be able to get this to work. I can see the snippet I added in the argocd-cm ConfigMap, and I rebooted the argocd-server pods, but I'm still seeing my Ingress as Progressing. What am I missing? This is with 1.5.1.

@al26p

al26p commented Apr 30, 2020

Me neither; I don't know how to troubleshoot this. I added the resource customization and reloaded all pods related to Argo CD.

I'm using HAProxy's ingress controller and Kubernetes' Ingress (not some fancy custom resource).

Kubernetes v1.17.0
ArgoCD v1.5.0+bdda410
argocd-cm :

data:
  resource.customizations: |
    Ingress:
        health.lua: |
          hs = {}
          hs.status = "Healthy"
          return hs

I tried unsuccessfully with Ingress and extensions/Ingress.

@jannfis
Member

jannfis commented Apr 30, 2020

Hi. You need to make sure to apply the resource customization to the correct resource type.

I think recent versions of Kubernetes moved the Ingress resource type from the extensions API to the networking.k8s.io API, so what was

 resource.customizations: |
    extensions/Ingress:

before should now be

 resource.customizations: |
    networking.k8s.io/Ingress:
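On clusters that serve Ingress under both API groups (or when one config must cover mixed cluster versions), both keys can sit side by side in the same resource.customizations block; Argo CD matches the customization by the group/kind it actually observes:

```yaml
resource.customizations: |
  extensions/Ingress:
    health.lua: |
      hs = {}
      hs.status = "Healthy"
      return hs
  networking.k8s.io/Ingress:
    health.lua: |
      hs = {}
      hs.status = "Healthy"
      return hs
```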

@al26p

al26p commented Apr 30, 2020

Hi,
You're right! It now works :)
Thanks a lot for your quick answer!

@bbqtk

bbqtk commented May 15, 2020

I encountered the same problem; the above method solves it, but there is a new problem: I cannot delete the application on version 1.5.4.

W4lspirit added a commit to W4lspirit/kubernetes-training that referenced this issue Apr 11, 2021
@enys

enys commented Aug 24, 2021

For Traefik users: https://doc.traefik.io/traefik/providers/kubernetes-ingress/#publishedservice
It is also exposed as a boolean option in the Helm chart.

@fernferret

fernferret commented Nov 27, 2021

@enys Thanks for the great tip. I've ended up at this issue several times, but for those of us using traefik, yours is the correct answer (it actually fixes the issue rather than ignoring it). The documentation you linked is correct, but when I read it I was still a bit confused, so here are a few details for others who might stumble upon this issue.

The traefik publishedService config is applied to the kubernetesIngress provider (the "controller"). This flag tells traefik which service fronts its ingress traffic so that it can correctly update standard Kubernetes Ingress objects with the IP address used for ingress. This can be seen in the source here.

When first reading the documentation I thought I'd need to point the flag at the service each downstream Ingress object targeted, but in reality it just needs to be set to the namespace/service of traefik itself. As you noted, it's very easy via the Helm chart:

providers:
  kubernetesIngress:
    publishedService:
      enabled: true

This ends up just adding the following flag (assuming traefik is installed in the ingress-ns namespace and the traefik service is named traefik-svc) to traefik.

--providers.kubernetesingress.ingressendpoint.publishedservice=ingress-ns/traefik-svc

@enys

enys commented Nov 27, 2021

@fernferret thanks and yes. Your write up here could be added to the docs !

@benr

benr commented Dec 28, 2021

For K3s users who come across this and are confused about how to actually implement the fix above: edit /var/lib/rancher/k3s/server/manifests/traefik.yaml and add the lines provided above, so that it looks roughly like this:

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.81.0.tgz
  valuesContent: |-
    rbac:
      enabled: true
.....
    kubernetes:
      ingressEndpoint:
        useDefaultPublishedService: true
    providers:
      kubernetesIngress:
        publishedService:
          enabled: true
.....

Then restart K3S. You should then look at the config map ('kubectl -n kube-system get cm/traefik -o yaml') and you'll see these lines added:

    [kubernetes]
      [kubernetes.ingressEndpoint]
      publishedService = "kube-system/traefik"

Thanks @enys and @fernferret for the solution!

@dalei2019

For Helm users, add this to the Traefik override.yaml:

additionalArguments:
  - "--providers.kubernetesingress.ingressendpoint.publishedservice=traefik/traefik"

Then upgrade Traefik using Helm.

@dinesh0314

For Helm users, add this to the Traefik override.yaml:

additionalArguments:
  - "--providers.kubernetesingress.ingressendpoint.publishedservice=traefik/traefik"

Then upgrade Traefik using Helm.

For non-Helm installs: where do we add this in the Traefik deployment file, and how do we use it there?

@dalei2019

For Helm users, add this to the Traefik override.yaml:

additionalArguments:
  - "--providers.kubernetesingress.ingressendpoint.publishedservice=traefik/traefik"

Then upgrade Traefik using Helm.

For non-Helm installs: where do we add this in the Traefik deployment file, and how do we use it there?

It's easy to find:

# helm upgrade -f override.yaml traefik traefik-10.9.1.tgz -n traefik --dry-run  |grep -B 100 publishedservice
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
  labels:
    app.kubernetes.io/name: traefik
    helm.sh/chart: traefik-10.9.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: traefik
  annotations:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: traefik
      app.kubernetes.io/instance: traefik
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 0
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/metrics"
        prometheus.io/port: "9100"
      labels:
        app.kubernetes.io/name: traefik
        helm.sh/chart: traefik-10.9.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/instance: traefik
    spec:
      serviceAccountName: traefik
      terminationGracePeriodSeconds: 60
      hostNetwork: false
      containers:
      - image: "traefik:2.5.6"
      ...
        args:
        ...
          - "--serverstransport.insecureskipverify=true"
          - "--providers.kubernetesingress.ingressendpoint.publishedservice=traefik/traefik"

@Joseph94m

For those who might still find this thread in 2023:
In my values.yaml file, I added:

configs:
  cm:
    resource.customizations: |
      networking.k8s.io/Ingress:
        health.lua: |
          hs = {}
          hs.status = "Healthy"
          return hs

It works now!

@jdiemke

jdiemke commented May 16, 2023

Shouldn't this be the default behavior? I don't think it makes sense that everyone has to adjust ConfigMaps.

@emuchogu

emuchogu commented Jun 24, 2023


configs:
  cm:
    resource.customizations: |
      networking.k8s.io/Ingress:
        health.lua: |
          hs = {}
          hs.status = "Healthy"
          return hs

It's 2023 and I still have this issue.

Is this meant for the argocd values.yaml? If so, where exactly should this entry go in the file?

Can you provide a step-by-step guide for all future sufferers?

@sxyandapp


configs:
  cm:
    resource.customizations: |
      networking.k8s.io/Ingress:
        health.lua: |
          hs = {}
          hs.status = "Healthy"
          return hs

It's 2023 and I still have this issue.

Is this put in the argocd values.yaml? Where should I put this entry in the values.yaml file?

Can you provide a step by step guide for all future sufferers...

Modify this ConfigMap directly: argocd-cm

apiVersion: v1
data:
  resource.customizations: |
    networking.k8s.io/Ingress:
      health.lua: |
        hs = {}
        hs.status = "Healthy"
        return hs
 
kind: ConfigMap
metadata:
  name: argocd-cm

Argo CD will pick up the change automatically.

@emuchogu

emuchogu commented Jun 25, 2023


configs:
  cm:
    resource.customizations: |
      networking.k8s.io/Ingress:
        health.lua: |
          hs = {}
          hs.status = "Healthy"
          return hs

It's 2023 and I still have this issue.
Is this put in the argocd values.yaml? Where should I put this entry in the values.yaml file?
Can you provide a step by step guide for all future sufferers...

Modify this ConfigMap directly: argocd-cm

apiVersion: v1
data:
  resource.customizations: |
    networking.k8s.io/Ingress:
      health.lua: |
        hs = {}
        hs.status = "Healthy"
        return hs

kind: ConfigMap
metadata:
  name: argocd-cm

Argo CD will pick up the change automatically.

This worked great.

@adv4000

adv4000 commented Sep 5, 2023

I'm using the Helm chart to deploy ArgoCD with an nginx Ingress controller, and I used this values.yaml file to fix the issue:

# Fixing issue with Stuck Processing for Ingress resource
server:
  config:
    resource.customizations: |
      networking.k8s.io/Ingress:
        health.lua: |
          hs = {}
          hs.status = "Healthy"
          return hs        

After the update, just resync your application's Ingress resource.

@fatsolko

fatsolko commented Apr 1, 2024

I'm using the Helm chart to deploy ArgoCD with an nginx Ingress controller, and I used this values.yaml file to fix the issue:

# Fixing issue with Stuck Processing for Ingress resource
server:
  config:
    resource.customizations: |
      networking.k8s.io/Ingress:
        health.lua: |
          hs = {}
          hs.status = "Healthy"
          return hs        

After the update, just resync your application's Ingress resource.

I added this to the argocd-cm ConfigMap and reloaded all the Argo CD pods:

data:
  resource.customizations: |
    networking.k8s.io/Ingress:
      health.lua: |
        hs = {}
        hs.status = "Healthy"
        return hs  

but it doesn't help; the Ingress is still shown as Progressing

@adv4000

adv4000 commented Apr 1, 2024

@dlsniper

I'm sorry to revive this "obvious" thread, but I spent more than 2 hours trying to understand what I'm doing wrong and failed.

I'm using the latest stable version of Argo CD, v2.11.0, with the latest stable version of Argo Rollouts, v1.6.6, and trying this example: https://argoproj.github.io/argo-rollouts/getting-started/nginx/ with the NGINX Ingress Controller v1.10.1 installed via Helm.

I manually created the application in Argo CD and then bumped into the same issue as others in this thread.

I've read the documentation at https://argo-cd.readthedocs.io/en/stable/faq/#why-is-my-application-stuck-in-progressing-state

I also tried to apply the suggested fixes above but the issue persists.

Since this appears to be a recurring problem, can we please get clear documentation on how to fix this, and update the examples in the documentation accordingly?

I'd be happy to send PRs to the documentation with the fix, but I don't know how to fix this in the first place :/

(screenshot attached)

@adv4000

adv4000 commented May 15, 2024

Hi @dlsniper

See how I deployed ArgoCD using Terraform, working fine for me: https://github.com/adv4000/argocd-terraform/tree/main/terraform_argocd_eks

@dlsniper

dlsniper commented May 15, 2024

@adv4000 thank you!

When using the latest version of the Helm chart, 6.9.2, the values.yaml file for the Helm chart should look something like this:

[...]
configs:
  cm:
    resource.customizations: |
      networking.k8s.io/Ingress:
        health.lua: |
          hs = {}
          hs.status = "Healthy"
          return hs
[...]

Thanks to argoproj/argo-helm#1872 (comment) I can now report that Argo CD works as expected.

@zyfyy

zyfyy commented Jun 5, 2024

@enys Thanks for the great tip, I've ended up at this issue several times, but for those of us traefik users yours is the correct answer (as it actually fixes the issue rather than ignoring it). The documentation you linked is absolutely correct but when I read it I was still a bit confused so here are a few details for others that might stumble upon this issue.

The traefik publishedService config is applied to the kubernetesingress "controller". This flag tells traefik what IP address it is using for its ingress so that it can correctly update standard kubernetes Ingress objects with the IP address used for ingress. This can be seen in the source here

I was thinking to myself when first reading the documentation that I'd need to set that flag to the service that each downstream ingress object targeted, but the reality is that it just needs to be set to the namespace/service of traefik itself. As you noted it's very easy via the helm chart:

providers:
  kubernetesIngress:
    publishedService:
      enabled: true

This ends up just adding the following flag (assuming traefik is installed in the ingress-ns namespace and the traefik service is named traefik-svc) to traefik.

--providers.kubernetesingress.ingressendpoint.publishedservice=ingress-ns/traefik-svc

Great job! Very helpful for traefik users!

@thiagowfx

The

providers:
  kubernetesIngress:
    publishedService:
      enabled: true

solution did not work for me. Despite verifying that it is applied in the traefik deployment, k get ingress -n argocd -o yaml still has:

  status:
    loadBalancer: {}

...therefore the health check remains stuck at "Progressing".

The custom health check solution ("resource.customizations") worked.

Let me know if you have any other ideas on how to debug the publishedService solution.
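For anyone who wants something stricter than unconditionally reporting Healthy, a custom check can instead mirror the built-in logic: Healthy once the load balancer endpoint is published, Progressing otherwise. A sketch (the field paths follow the standard Ingress status; this is an illustration, not the exact built-in implementation):

```lua
hs = {}
if obj.status ~= nil and obj.status.loadBalancer ~= nil
    and obj.status.loadBalancer.ingress ~= nil
    and #obj.status.loadBalancer.ingress > 0 then
  -- controller has written an endpoint back: treat as healthy
  hs.status = "Healthy"
else
  hs.status = "Progressing"
  hs.message = "waiting for load balancer endpoint"
end
return hs
```

This only helps, of course, if the controller actually writes the endpoint back (e.g. via traefik's publishedService or ingress-nginx's --publish-service flag).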
