
validating webhook should ignore ingresses with a different ingressclass #7546

Closed
lazybetrayer opened this issue Aug 26, 2021 · 53 comments · Fixed by #8221
Labels: kind/bug, needs-priority, needs-triage

@lazybetrayer

NGINX Ingress controller version: v1.0.0

Kubernetes version (use kubectl version): v1.20.9

Environment: Bare Metal

What happened:

Before v1.0.0, there was a check that skipped validating ingresses with a different ingressclass:
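(The original comment linked to the controller source here. As a minimal sketch, assuming illustrative names rather than the controller's actual helpers, the check amounts to:)

// A minimal sketch of the class check the webhook lost in v1.0.0. Names are
// illustrative, not the controller's actual code; the eventual fix (#8221)
// resolves the IngressClass object and compares its spec.controller field
// against the controller's --controller-class flag.
package webhook

import (
    networking "k8s.io/api/networking/v1"
)

// shouldValidate reports whether this controller instance owns the ingress
// and therefore should validate it in the admission webhook.
func shouldValidate(ing *networking.Ingress, myClass string) bool {
    if ing.Spec.IngressClassName != nil {
        return *ing.Spec.IngressClassName == myClass
    }
    // fall back to the deprecated annotation used by older ingresses
    return ing.Annotations["kubernetes.io/ingress.class"] == myClass
}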

In v1.0.0, this code was removed. With multiple ingress controllers, both will validate the same ingress.
If we create two ingresses with the same host and path but different ingressclasses, the second one is rejected.

What you expected to happen:

The webhook should skip validation of ingresses with a different ingressclass.

/kind bug

lazybetrayer added the kind/bug label Aug 26, 2021
k8s-ci-robot added the needs-triage label Aug 26, 2021
@k8s-ci-robot (Contributor)

@lazybetrayer: This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@longwuyuan (Contributor) commented Aug 26, 2021 via email

@longwuyuan (Contributor)

/remove-kind bug

k8s-ci-robot added the needs-kind label and removed the kind/bug label Aug 26, 2021
@nick-o commented Aug 26, 2021

One use case, as described here, would be different ingress controllers for internal vs. external traffic. We do this, for instance, to enforce mTLS auth for external traffic while letting internal traffic talk to the ingress without auth, serving the exact same content at the exact same path. I'm sure other use cases can be thought of.

This has now stopped working for us and forced me to pin the chart version to 3.33.0 (the last working one for us; I haven't tested anything after that), and I could not reinstate the previous behaviour following any documentation I could find (I tried this, but it's quite hard to understand in the first place).

Any advice on how to keep using multiple ingress controllers for the same host+path combination would be highly appreciated. I would also argue that this should remain a bug, as previously working behaviour has been broken.

Thanks,
Nico

/kind bug

k8s-ci-robot added the kind/bug label and removed the needs-kind label Aug 26, 2021
@longwuyuan (Contributor) commented Aug 27, 2021 via email

@lazybetrayer (Author)

> Thanks. To me one aspect is still unclear: I was asking about the ingress.spec host value, which is an FQDN, like api1.mydomain.com with path /. Can you elaborate on why the internal and external ingress will both configure the same api1.mydomain.com and /? Thanks, Long

In my company, the same domain can be mapped to different IPs, one for internal access and one for external access.
To support this, we deploy two ingress controllers.
Sometimes we create two ingress resources with identical rules but different ingress classes, since they are used for different IPs.

@longwuyuan (Contributor) commented Aug 27, 2021 via email

@longwuyuan (Contributor) commented Aug 27, 2021 via email

@longwuyuan (Contributor)

You can see issue #7538, closed as recently as yesterday, where a user is successfully using 2 controllers in one cluster.

I will change this back to kind support for now. If our triaging results in data that proves a bug of some sort, then we can set the bug label again.

/remove-kind bug
/kind support

k8s-ci-robot added the kind/support label and removed the kind/bug label Aug 27, 2021
@lazybetrayer (Author)

> Ok thanks. So just FYI, 2 controllers in one cluster is a long-supported and functioning feature, so this is not a bug. Both controllers need to be configured with a different ingressClass, and the ingress objects need to be configured with an ingressClassName. Thanks, Long

Let me clarify: we run 2 controllers in one cluster successfully. Both controllers are configured with different ingressClasses, and the ingress objects are configured with the correct ingressClassName. There are no problems with most ingress resources.

ingress classes:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: external
spec:
  controller: k8s.io/ingress-nginx-external
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: internal
spec:
  controller: k8s.io/ingress-nginx-internal

The controllers are running with --controller-class=k8s.io/ingress-nginx-internal and --controller-class=k8s.io/ingress-nginx-external respectively.
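(For anyone reproducing this with Helm, a sketch of equivalent values for the internal controller; value names assume the ingress-nginx chart's ingressClassResource block, so verify against your chart version:)

# values-internal.yaml -- sketch, assuming the ingress-nginx Helm chart
controller:
  ingressClassResource:
    name: internal
    controllerValue: k8s.io/ingress-nginx-internal  # passed as --controller-class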

If we create the resources below, test2 is rejected by the validating webhook: Error from server (BadRequest): error when creating "ing.yml": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "www.example.com" and path "/" is already defined in ingress default/test1. This used to work before v1.0.0.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test1
spec:
  ingressClassName: external
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          service:
            name: test
            port:
              number: 1111
        path: /
        pathType: Prefix
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test2
spec:
  ingressClassName: internal
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          service:
            name: test
            port:
              number: 1111
        path: /
        pathType: Prefix

@longwuyuan (Contributor)

Thank you. This manifest coupled with the initial message clarifies the problem.

We will have to check why that validation code was changed.

Can you also please confirm that both manifests in the previous message were used to create ingress resources in the same namespace. If yes, it seems you are asking for a feature. Can I summarise this as:

  • 2 Controllers in one cluster
  • 2 ingressclass objects in the same cluster
  • Each controller configured with different ingressclass
  • 2 ingress objects viz ingress-a and ingress-b
  • both ingresses in the same namespace = default
  • both ingresses have the same value for ingress.spec.rules.host = api1.mydomain.com
  • both ingresses have the same value for ingress.spec.rules.http.paths.path = /
  • the 2 ingresses have different values for ingress.spec.ingressClassName

Also, one more request: can you please post the below data to this issue:

- helm ls -A
- helm -n <ns> get values <releasename>   # for each helm release installed on the cluster
- kubectl get all -A -o wide | grep -i ingress
- kubectl describe ingressclasses
- kubectl -n <ingcontrollernamespace> describe po <ingcontrollerpodname>
- kubectl -n <appnamespace> get ing <ingressname> -o yaml # for both ingresses

@longwuyuan (Contributor)

/assign

@LEDfan commented Sep 21, 2021

Hi

We are also affected by this issue. I wanted to share our use case, so that it's clear why this is useful. We are using ingress-nginx as the "main" ingress to the k8s cluster. Some applications need a more advanced ingress controller than what nginx offers, and for this we use Skipper. In this case nginx forwards traffic to the Skipper service. Skipper must also be configured using the k8s ingress resources. Therefore you end up with two ingress resources with the same host and path (and possibly in the same namespace) but a different ingress class.

Thanks for taking care of this issue!

@isavl commented Sep 23, 2021

I am also affected by this issue. One way to avoid the error when creating/updating an ingress is to disable the validating webhook and remove it from the cluster.
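(A sketch of that workaround, assuming the ingress-nginx Helm chart's value names; note this disables configuration validation entirely, so broken ingresses will reach the controller:)

# turn the admission webhook off for a chart-managed controller
helm upgrade <release> ingress-nginx/ingress-nginx --reuse-values \
  --set controller.admissionWebhooks.enabled=false

# the webhook configuration name depends on the release, so list before deleting
kubectl get validatingwebhookconfigurations
kubectl delete validatingwebhookconfiguration <name-from-listing>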

@mkreidenweis-schulmngr

We've got another use case where it's important that Ingress objects are only validated by the controller for the matching ingressClass.
We want our ingress controller to cache some assets, so we have a basic cache configuration in the ingress controller config:

controller:
  config:
    http-snippet: |
      proxy_cache_path /tmp/nginx-cache levels=1:2 keys_zone=static-cache:2m max_size=100m inactive=1d use_temp_path=off;

Then in the Ingress objects themselves we refer to this cache:

  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_cache           static-cache;
      ...

Our second ingress controller is not configured for caching but still tries to validate the Ingress objects. The Ingress is rejected by a controller that will never actually handle it, because it has a different ingressClassName:

client.go:250: [debug] error updating the resource "test-static-files":
	 cannot patch "test-static-files" with kind Ingress: admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: 
-------------------------------------------------------------------------------
Error: exit status 1
2021/10/06 15:59:54 [warn] 295#295: the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg4198530908:149
nginx: [warn] the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg4198530908:149
2021/10/06 15:59:54 [warn] 295#295: the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg4198530908:150
nginx: [warn] the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg4198530908:150
2021/10/06 15:59:54 [warn] 295#295: the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /tmp/nginx-cfg4198530908:151
nginx: [warn] the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /tmp/nginx-cfg4198530908:151
2021/10/06 15:59:54 [emerg] 295#295: "proxy_cache" zone "static-cache" is unknown in /tmp/nginx-cfg4198530908:2165
nginx: [emerg] "proxy_cache" zone "static-cache" is unknown in /tmp/nginx-cfg4198530908:2165
nginx: configuration file /tmp/nginx-cfg4198530908 test failed

-------------------------------------------------------------------------------

(Ignore the [warn] lines; that is a separate ingress-nginx issue, where the config template wasn't updated when nginx was upgraded. The [emerg] is what fails the validation here.)

So it definitely looks like a bug in ingress-nginx to me.

@rblaine95

Having the same issue in our dev and production clusters.

We use a split-horizon DNS where we want all ingresses (hosts and paths) accessible over a VPN but only some ingresses accessible to the public.

To achieve this, we've got the same setup as @lazybetrayer: two ingress controllers (nginx-internal, nginx-external) with their respective ingress classes, and DNS set up so that if you're on the VPN you resolve the internal load balancer, otherwise the external/public load balancer.

The simplest way to reproduce the issue is:

$ helm create nginx
$ helm install nginx ./nginx -n default
$ kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-internal
  namespace: default
spec:
  ingressClassName: nginx-internal
  rules:
  - host: nginx.example.com
    http:
      paths:
      - backend:
          service:
            name: nginx
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-external
  namespace: default
spec:
  ingressClassName: nginx-external
  rules:
  - host: nginx.example.com
    http:
      paths:
      - backend:
          service:
            name: nginx
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
EOF

Which then returns the error:

ingress.networking.k8s.io/nginx-internal unchanged
Error from server (BadRequest): error when creating "STDIN": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "nginx.example.com" and path "/" is already defined in ingress default/nginx-internal
$ kubectl get ingressclasses -o yaml
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
  kind: IngressClass
  metadata:
    annotations:
      meta.helm.sh/release-name: nginx-external-ingress-controller
      meta.helm.sh/release-namespace: kube-system
    creationTimestamp: "2021-10-01T09:11:05Z"
    generation: 1
    labels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: nginx-external-ingress-controller
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/version: 1.0.3
      helm.sh/chart: ingress-nginx-4.0.5
    name: nginx-external
    resourceVersion: "226814148"
    selfLink: /apis/networking.k8s.io/v1/ingressclasses/nginx-external
    uid: d3afa791-8fa7-473f-a2ae-c398397e6f4a
  spec:
    controller: k8s.io/nginx-external
- apiVersion: networking.k8s.io/v1
  kind: IngressClass
  metadata:
    annotations:
      ingressclass.kubernetes.io/is-default-class: "true"
      meta.helm.sh/release-name: nginx-internal-ingress-controller
      meta.helm.sh/release-namespace: kube-system
    creationTimestamp: "2021-10-01T09:11:06Z"
    generation: 1
    labels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: nginx-internal-ingress-controller
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/version: 1.0.3
      helm.sh/chart: ingress-nginx-4.0.5
    name: nginx-internal
    resourceVersion: "226814176"
    selfLink: /apis/networking.k8s.io/v1/ingressclasses/nginx-internal
    uid: 449d39ee-2990-4d55-abbc-9b750795b919
  spec:
    controller: k8s.io/nginx-internal
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

@mkreidenweis-schulmngr

@rikatz Could you maybe have a look at this issue, also considering the discussion about your changes here: https://github.com/kubernetes/ingress-nginx/pull/7341/files/f5c4dc299c2622dcc0e8ff038dac9cc4b0f4fcbb#diff-4198ec010671801881244a8052177f31bcbc682c99fbd7391bceb136025c0568

Can we have this issue classified as a bug again, please? :-)

@mkreidenweis-schulmngr

/kind bug

@pdefreitas

@rikatz This was merged but it wasn't released as part of 4.0.17. Any chance we can have it released?

@tao12345666333 (Member)

We can make a release

@rikatz (Contributor) commented Feb 26, 2022

@tao12345666333 do you wanna start the release process?

@tao12345666333 (Member)

Yes, I'll handle it

vijay-veeranki added commits to ministryofjustice/cloud-platform-terraform-ingress-controller and ministryofjustice/cloud-platform-infrastructure that referenced this issue Mar 11, 2022:

 This version got fix for this:
 kubernetes/ingress-nginx#7546

 Condition to enable external-dns annotation for svc
@parkwart commented Mar 11, 2022

Tested with chart version 4.0.18; the controller with the validating webhook still processed the one it shouldn't have 😢

My setup:

Two controllers with ingress classes configured: nginx and nginx-private

nginx-controller - admission webhook enabled
nginx-private-controller - admission webhook disabled

I created an ingress for class nginx-private; the logs showed the nginx controller rejecting it and the nginx-private controller accepting it. However, the nginx controller's log also stated that the admission webhook would accept this ingress object. The ingress was then stuck with no LB assigned.

Workaround: I disabled the admission webhooks on both controllers; it seems to work now.

@jsalatiel

It works for me. I will double check if I have both admission webhooks enabled just in case.

@rblaine95

Can confirm that our setup (#7546 (comment)), after upgrading to 1.1.2 (chart v4.0.18) and enabling admission webhooks, is working exactly as expected.

Unless we use the deprecated kubernetes.io/ingress.class annotation.
But that annotation is deprecated, so I expect that.
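(For anyone comparing the two forms, the deprecated annotation versus the current field, with an illustrative class name:)

# deprecated: class selected via annotation
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx-internal

# current: class selected via the spec field
spec:
  ingressClassName: nginx-internal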

@parkwart commented Mar 11, 2022

The kubernetes.io/ingress.class annotation is enabled on the controllers for legacy reasons. As soon as all ingresses are migrated, I'm going to remove the deprecated annotation feature and check again, thanks!


Update 01-06-22:

We had both ingress controllers running in the same namespace; only the controller that was started first was able to bind the load balancer to the ingress object.

We did some testing, and after separating the controllers into their own namespaces (nginx-private, nginx-public), everything works like a charm.
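(A sketch of that layout with Helm; release, namespace, and class names are illustrative, and the value names assume the ingress-nginx chart:)

helm install nginx-private ingress-nginx/ingress-nginx \
  --namespace nginx-private --create-namespace \
  --set controller.ingressClassResource.name=nginx-private \
  --set controller.ingressClassResource.controllerValue=k8s.io/nginx-private

helm install nginx-public ingress-nginx/ingress-nginx \
  --namespace nginx-public --create-namespace \
  --set controller.ingressClassResource.name=nginx-public \
  --set controller.ingressClassResource.controllerValue=k8s.io/nginx-public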


Update 08-06-22:
If the controllers are running in the same namespace, you must also set "electionID" in the Helm chart values to a unique value per controller; then it also works. Otherwise, the controller that starts second will just be a follower of the first one.
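(A sketch in Helm values, assuming the chart's controller.electionID setting; each controller release sharing the namespace needs its own value:)

# per-release values when both controllers share a namespace (sketch)
controller:
  electionID: ingress-controller-leader-nginx-private  # unique per controller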

@jseparovic

> We've got another use case where it's important that Ingress objects are only validated by the controller for the matching ingressClass.

@mkreidenweis-schulmngr Do you know if there is a workaround for this? I'm seeing this issue just now, where I have created a second controller to bypass our CDN so we can use mTLS directly. I'm relying on HTTP headers set by nginx in an http-snippet, but I'm getting the same error you described:

Error from server (BadRequest): error when creating "api-mtls-ingress.yaml": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request:
-------------------------------------------------------------------------------
...
nginx: [emerg] unknown "ssl_client_s_dn_cn" variable

My ConfigMap is set up in the same namespace as the second controller:

kind: ConfigMap
apiVersion: v1
metadata:
    name: ingress-nginx-controller-9443
    namespace: ingress-nginx-9443
    labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
data:
    http-snippet: |
        map  $ssl_client_s_dn  $ssl_client_s_dn_cn {
            default "";
            ~CN=(?<CN>[^/,\"]+) $CN;
        }
        
        map  $ssl_client_s_dn  $ssl_client_s_dn_ou {
          default "";
          ~OU=(?<OU>[^/,\"]+) $OU;
        }
        
        map  $ssl_client_s_dn  $ssl_client_s_dn_dc {
          default "";
          ~DC=(?<DC>[^/,\"]+) $DC;
        }
        
        map  $ssl_client_s_dn  $ssl_client_s_dn_o {
          default "";
          ~O=(?<O>[^/,\"]+) $O;
        }
        
        map  $ssl_client_s_dn  $ssl_client_s_dn_c {
          default "";
          ~C=(?<C>[^/,\"]+) $C;
        }
        
        map  $ssl_client_s_dn  $ssl_client_s_dn_uuid {
          default "";
          ~UUID=(?<UUID>[^/,\"]+) $UUID;
        }    

It seems this issue persists. Am I missing some configuration to get around it?

@longwuyuan (Contributor)

@jseparovic is it possible for you to provide step-by-step instructions that someone can use on their minikube or kind cluster to reproduce this problem?

@jseparovic

@longwuyuan Yep, no worries. I'll update this comment once I've put something together.

@jseparovic

@longwuyuan I cannot seem to reproduce this on another cluster, and the original cluster has since been reconfigured to not require the ConfigMap on the secondary controller.
Cheers

@longwuyuan (Contributor)

> @longwuyuan I cannot seem to reproduce this on another cluster, and the original cluster has since been reconfigured to not require the ConfigMap on the secondary controller. Cheers

thanks for updating

@MarkKharitonov commented Jul 12, 2023

I am observing this issue or something else masquerading as it:

~$ k get ing
NAME               CLASS            HOSTS                                                                 ADDRESS        PORTS   AGE
toolbox-external   nginx-external   chip-eastus2-e.np.dayforcehcm.com                                     20.***   80      61m
toolbox-internal   nginx-internal   chip-eastus2-i.np.dayforcehcm.com,chip-eastus2-e.np.dayforcehcm.com   10.***   80      61m

~$ helm get manifest toolbox | k apply -f-
...
Error from server (BadRequest): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"networking.k8s.io/v1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{\"nginx.ingress.kubernetes.io/rewrite-target\":\"/external/$2\"},\"labels\":{\"app\":\"toolbox\"},\"name\":\"toolbox-external\",\"namespace\":\"chip\"},\"spec\":{\"ingressClassName\":\"nginx-external\",\"rules\":[{\"host\":\"chip-eastus2-e.np.dayforcehcm.com\",\"http\":{\"paths\":[{\"backend\":{\"service\":{\"name\":\"toolbox\",\"port\":{\"number\":80}}},\"path\":\"/toolbox(/|$)(.*)\",\"pathType\":\"Prefix\"}]}}]}}\n"},"creationTimestamp":null,"generation":null,"resourceVersion":null,"uid":null},"status":null}
to:
Resource: "networking.k8s.io/v1, Resource=ingresses", GroupVersionKind: "networking.k8s.io/v1, Kind=Ingress"
Name: "toolbox-external", Namespace: "chip"
for: "STDIN": error when patching "STDIN": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "chip-eastus2-e.np.dayforcehcm.com" and path "/toolbox(/|$)(.*)" is already defined in ingress chip/toolbox-internal

~$ k delete ing toolbox-internal
ingress.networking.k8s.io "toolbox-internal" deleted

~$ helm get manifest toolbox | k apply -f-
service/toolbox unchanged
deployment.apps/toolbox unchanged
deployment.apps/toolbox-secret-sync-csi unchanged
deployment.apps/toolbox-secret-sync-test unchanged
deployment.apps/toolbox-secret-thru-env-test unchanged
ingress.networking.k8s.io/toolbox-external configured
ingress.networking.k8s.io/toolbox-internal created
azurekeyvaultsecret.spv.no/dummy-secret-sync unchanged
azurekeyvaultsecret.spv.no/dummy-secret-thru-env unchanged
secretproviderclass.secrets-store.csi.x-k8s.io/toolbox unchanged

~$

Deleting the internal ingress allows the external ingress to be modified; the internal ingress is then created without any problem. This code runs in a deployment pipeline.

Here are the ingress definitions:

~$ helm get manifest toolbox | grep -A31 -B3 ': Ingress'
---
# Source: chip-toolbox/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    app: toolbox
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /external/$2
  name: toolbox-external
  namespace: chip
spec:
  ingressClassName: nginx-external
  rules:
    - host: chip-eastus2-e.np.dayforcehcm.com
      http:
        paths:
          - path: /toolbox(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: toolbox
                port:
                  number: 80
---
# Source: chip-toolbox/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    app: toolbox
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /internal/$2
  name: toolbox-internal
  namespace: chip
spec:
  ingressClassName: nginx-internal
  rules:
    - host: chip-eastus2-i.np.dayforcehcm.com
      http:
        paths:
          - path: /toolbox(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: toolbox
                port:
                  number: 80
    - host: chip-eastus2-e.np.dayforcehcm.com
      http:
        paths:
          - path: /toolbox(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: toolbox
                port:
                  number: 80
---

~$

The nginx version:

~$ helm ls -n internal-nginx-ingress
NAME                    NAMESPACE               REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
internal-nginx-ingress  internal-nginx-ingress  19              2023-03-30 15:05:19.692571633 +0000 UTC deployed        ingress-nginx-4.5.2     1.6.4

~$ helm ls -n external-nginx-ingress
NAME                    NAMESPACE               REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
external-nginx-ingress  external-nginx-ingress  3               2023-06-26 17:06:41.916257728 +0000 UTC deployed        ingress-nginx-4.5.2     1.6.4

~$

Please, let me know what else I can provide to help you to help me.
Thank you.

@afirth (Member) commented Jul 12, 2023

@MarkKharitonov did you follow these steps already? If so, and they resolve the issue, it would be great if you could PR some docs or at least open a docs issue:

  • controllers for class X and Y running in separate namespaces
  • electionID set to unique value on each class controller set

@MarkKharitonov

@afirth - Are these steps relevant if the ingresses live in different namespaces, as in my case? It was my understanding from reading that comment that having the ingresses in different namespaces eliminates all the issues naturally.

Have I missed anything?

@MarkKharitonov

This is a false alarm; I have found the root cause, and it is actually educational.

We copied the nginx images to our internal image repo in Azure and never refreshed them, while we did refresh the deployed Helm chart.

This is a lesson for us to improve this rather broken process.
