
Cannot expose nginx-ingress with LoadBalancer Service on several Exoscale clusters #8374

Closed
lucj opened this issue Mar 22, 2022 · 3 comments · Fixed by #8365
Labels
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.
needs-priority
triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments

@lucj
Contributor

lucj commented Mar 22, 2022

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

This issue is not about the nginx ingress controller itself failing, but about a limitation when it is deployed on several clusters within the same Exoscale account: the service.beta.kubernetes.io/exoscale-loadbalancer-name: nginx-ingress-controller annotation cannot be used more than once, even across different clusters.
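For reference, here is a minimal excerpt of the Service as shipped in the Exoscale provider manifest, reconstructed from the describe output further below (only the name annotation matters for this issue):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # Hardcoded in the manifest: every cluster in the same Exoscale
    # organization therefore requests an NLB with the same name.
    service.beta.kubernetes.io/exoscale-loadbalancer-name: nginx-ingress-controller
spec:
  type: LoadBalancer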

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-02-16T12:30:48Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:19:12Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: Exoscale
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools: SKS managed cluster installed with exo cli
  • Basic cluster related info:
    • kubectl version: client 1.23.4 / server 1.23.3
    • kubectl get nodes -o wide
NAME               STATUS   ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP       OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
pool-7cdfe-ctsdb   Ready    <none>   143m   v1.23.3   <none>        194.182.168.49    Ubuntu 20.04.3 LTS   5.4.0-91-generic   containerd://1.5.5
pool-7cdfe-kbtdr   Ready    <none>   143m   v1.23.3   <none>        194.182.168.153   Ubuntu 20.04.3 LTS   5.4.0-91-generic   containerd://1.5.5
pool-7cdfe-vvtqw   Ready    <none>   143m   v1.23.3   <none>        194.182.171.110   Ubuntu 20.04.3 LTS   5.4.0-91-generic   containerd://1.5.5
  • How was the ingress-nginx-controller installed:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml
  • Current State of the controller:
    • kubectl describe ingressclasses
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.1.2
              helm.sh/chart=ingress-nginx-4.0.18
Annotations:  <none>
Controller:   k8s.io/ingress-nginx
Events:       <none>
    • kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.1.2
                          helm.sh/chart=ingress-nginx-4.0.18
Annotations:              service.beta.kubernetes.io/exoscale-loadbalancer-description: NGINX Ingress Controller load balancer
                          service.beta.kubernetes.io/exoscale-loadbalancer-name: nginx-ingress-controller
                          service.beta.kubernetes.io/exoscale-loadbalancer-service-healthcheck-interval: 10s
                          service.beta.kubernetes.io/exoscale-loadbalancer-service-healthcheck-mode: http
                          service.beta.kubernetes.io/exoscale-loadbalancer-service-healthcheck-retries: 1
                          service.beta.kubernetes.io/exoscale-loadbalancer-service-healthcheck-timeout: 3s
                          service.beta.kubernetes.io/exoscale-loadbalancer-service-healthcheck-uri: /
                          service.beta.kubernetes.io/exoscale-loadbalancer-service-instancepool-id: 7cdfe07a-3fc9-452f-acc2-e5a4808d7bfc
                          service.beta.kubernetes.io/exoscale-loadbalancer-service-strategy: source-hash
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.13.166
IPs:                      10.97.13.166
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  30950/TCP
Endpoints:                192.168.186.136:80,192.168.81.201:80,192.168.99.201:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  31836/TCP
Endpoints:                192.168.186.136:443,192.168.81.201:443,192.168.99.201:443
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     31795
Events:
  Type     Reason                  Age                From                Message
  ----     ------                  ----               ----                -------
  Normal   EnsuringLoadBalancer    23s (x5 over 88s)  service-controller  Ensuring load balancer
  Warning  SyncLoadBalancerFailed  22s (x5 over 88s)  service-controller  Error syncing load balancer: failed to ensure load balancer: Post "https://api-de-fra-1.exoscale.com/v2/load-balancer": invalid request: Load Balancer name is already in use

What happened:

The same nginx ingress LoadBalancer name (specified via the annotation) cannot be used for multiple clusters in the same Exoscale organization, so the load balancer is not created.

What you expected to happen:

The load balancers to be created normally even when the ingress controller is installed on several clusters in the same Exoscale account.

How to reproduce it:

Create two SKS clusters in the Exoscale console and install the nginx ingress controller in both of them, as sketched below. The second load balancer will not be created.
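A minimal reproduction sketch, assuming two kubectl contexts named sks-1 and sks-2 (hypothetical names) pointing at the two clusters:

kubectl --context sks-1 apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml
kubectl --context sks-2 apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml

Both applies succeed, but the second cluster's cloud-controller-manager then fails to provision the NLB with "Load Balancer name is already in use" (see the SyncLoadBalancerFailed event above).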

A way to work around the problem is to remove the service.beta.kubernetes.io/exoscale-loadbalancer-name: nginx-ingress-controller annotation, so that the load balancer name defaults to the Service UID, which unblocks the creation of the load balancer.
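For example, the annotation can be removed in place on an already-deployed Service (assuming the default name and namespace from the manifest; the trailing dash tells kubectl annotate to delete the key):

kubectl -n ingress-nginx annotate service ingress-nginx-controller \
  service.beta.kubernetes.io/exoscale-loadbalancer-name-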

@lucj lucj added the kind/bug Categorizes issue or PR as related to a bug. label Mar 22, 2022
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Mar 22, 2022
@lucj
Contributor Author

lucj commented Mar 22, 2022

Pull request fixing the issue: #8365

@longwuyuan
Contributor

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Mar 23, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 21, 2022