Description
Hello,
I am testing multiple ingress controllers on one cluster.
My wish:
- one default ingress controller that manages the default ingress class (or ingresses without an ingressClassName defined)
- at least one other controller that manages a specific ingress class
I have read https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/ carefully.
I deploy the default ingress controller using the Helm chart (via a HelmChart resource):
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: ingress-nginx
  namespace: kube-system
spec:
  chart: ingress-nginx
  targetNamespace: kube-system
  repo: https://kubernetes.github.io/ingress-nginx/
  version: 4.12.3
  valuesContent: |
    controller:
      replicaCount: 1
      ingressClassResource:
        enabled: true
        default: true
      service:
        type: LoadBalancer
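If I understand the chart defaults correctly, this first release should create an IngressClass roughly like this (name and controller value are the chart defaults, the annotation comes from default: true):

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx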
And another controller with its own class:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: ingress-nginx-domain-a
  namespace: kube-system
spec:
  chart: ingress-nginx
  targetNamespace: kube-system
  repo: https://kubernetes.github.io/ingress-nginx/
  version: 4.12.3
  valuesContent: |
    controller:
      replicaCount: 1
      ingressClassResource:
        name: domain-a
        enabled: true
        default: false
        controllerValue: k8s.io/domain-a
      ingressClass: domain-a
      service:
        type: LoadBalancer
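Based on the controllerValue above, the second release should create an IngressClass roughly like this:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: domain-a
spec:
  controller: k8s.io/domain-a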
Each controller gets its own IP, delivered by an external load balancer.
Then I create two ingresses (a minimal example is shown below):
- one with ingressClassName: nginx
- the second with ingressClassName: domain-a
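For reference, the first test ingress looks roughly like this (the backend service name and port are assumptions from my test app; the second ingress only differs in name, host, and ingressClassName):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
  namespace: default
spec:
  ingressClassName: nginx
  rules:
    - host: whoami.test.lan
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80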
What you expected to happen:
Each ingress should obtain the IP address of its ingress controller (which is then published to our global DNS using external-dns).
What happened:
The ingresses have the correct IP for the first few seconds, then the address quickly flip-flops with the node IPs:
kubectl get ingress -n default
NAME       CLASS      HOSTS               ADDRESS                     PORTS   AGE
whoami     nginx      whoami.test.lan     192.168.1.192               80      16h
whoami-a   domain-a   whoami-a.test.lan   192.168.1.194               80      16h

A few seconds later:

NAME       CLASS      HOSTS               ADDRESS                     PORTS   AGE
whoami     nginx      whoami.test.lan     192.168.1.19,192.168.1.20   80      16h
whoami-a   domain-a   whoami-a.test.lan   192.168.1.194               80      16h
192.168.1.192 and 192.168.1.194 are the IPs of the ingress controllers, so those are correct values.
192.168.1.19 and 192.168.1.20 are my node IPs.
Only the ingress with class nginx flip-flops.
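For reference, the controller LoadBalancer IPs can be double-checked like this (the label selector is an assumption based on the chart's standard labels); it should list the two services with EXTERNAL-IP 192.168.1.192 and 192.168.1.194:

kubectl get svc -n kube-system -l app.kubernetes.io/name=ingress-nginx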
What do you think went wrong?:
Each time the wrong value is set, I see this log on the domain-a controller:
W0606 07:47:14.456797 8 controller.go:336] ignoring ingress whoami in default based on annotation : no object matching key "nginx" in local store
I0606 07:47:14.456900 8 main.go:107] "successfully validated configuration, accepting" ingress="default/whoami"
Then, after a few seconds, the default controller fixes the IP address to the correct one, and this repeats again and again...
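For reference, this is what I would expect the class-to-controller mapping to look like, assuming the charts created the IngressClasses as configured above:

kubectl get ingressclass
NAME       CONTROLLER             PARAMETERS   AGE
domain-a   k8s.io/domain-a        <none>       16h
nginx      k8s.io/ingress-nginx   <none>       16h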
NGINX Ingress controller version: v1.12.3 (Helm chart 4.12.3)
Kubernetes version (use kubectl version): v1.29.15+rke2r1
Environment:
- Cloud provider or hardware configuration: RKE2 using Rancher
- OS (e.g. from /etc/os-release): Rocky 9
- How was the ingress-nginx-controller installed: Helm chart (not the RKE2 version)
Maybe I have a wrong config... Thanks for your help!