
Create option to reuse an existing ALB instead of creating a new ALB per Ingress #298

Closed
julianvmodesto opened this issue Jan 10, 2018 · 57 comments

Comments

@julianvmodesto commented Jan 10, 2018

I read in this comment #85 (comment) that host-based routing was released for AWS ALBs shortly after ALB Ingress Controller was released.

It would be pretty cool to have an option to reuse an ALB for an Ingress via annotation. I'd be interested in contributing towards this, but I'm not sure what's needed to make this feasible.

@pperzyna commented Mar 12, 2018

@bigkraig Any update?

@mwelch-ptc commented Jun 20, 2018

Wait... I guess I missed this in reading the documentation. Are you saying that every Ingress created deploys its own ALB? So for our 60 or so ingresses we'd end up with 60 ALBs? What about different host names within the same ingress? Does that at least reuse the same ALB?

@patrickf55places commented Jun 20, 2018

@mwelch-ptc That is correct. There is a 1-to-1 mapping of Ingress resources to ALBs, even if host names are the same.

@kurtdavis commented Jun 20, 2018

Seems to be fairly costly. We have looked at other solutions due to this issue.

@bigkraig (Member) commented Jun 20, 2018

What are everyone's thoughts on how to prioritize the rules if a single ALB spans ingress resources and potentially even namespaces? I can see where in larger clusters multiple teams may accidentally take the same path.

@ghost commented Jun 21, 2018

What are everyone's thoughts on how to prioritize the rules if a single ALB spans ingress resources and potentially even namespaces? I can see where in larger clusters multiple teams may accidentally take the same path.

This is a general Kubernetes ingress issue, not specific to this ingress controller. I think the discussion of this should be had in a more general forum instead of an issue against this controller.

@whithajess commented Jul 16, 2018

I'm tempted to say that this is not a general Kubernetes ingress issue.

Existing load balancers supported by Kubernetes are Layer 4, and are paired with ingress controllers that do the Layer 7 work (this means they can use one load balancer and then deal with Layer 7 once traffic gets into the cluster).

ALB is Layer 7 and deals with it before traffic gets to Kubernetes, so we cannot assume they are going to change for this use case.

As this becomes more standard I think this could change. GCE suggests "If you are exposing an HTTP(S) service hosted on Kubernetes Engine, HTTP(S) load balancing is the recommended method for load balancing," and I would imagine as EKS takes off it will suggest the same.

@spacez320 commented Jul 29, 2018

We can already generally do this by having a single ingress resource, although it forces whatever deployment scheme you're using for Kubernetes to adjust to that. It's also worth pointing out that the Kubernetes Ingress documentation literally states:

An Ingress allows you to keep the number of loadbalancers down to a minimum.

I think it would be really nice to have the ability to do this in a clean way.

@bigkraig (Member) commented Aug 3, 2018

@spacez320 I read that as saying you can have an ingress with multiple services behind it, so a single load balancer for many services as opposed to a load balancer per service.

There is still the issue that the IngressBackend type does not have a way to reference a service in another namespace. I think until the ingress resource spec is changed, there isn't a correct way of implementing something like this.
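
For reference, here is a minimal sketch of the constraint being described (names are illustrative): in the extensions/v1beta1 spec, an IngressBackend carries only a service name and port, with no namespace field, so a backend Service is always resolved in the Ingress's own namespace.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress   # illustrative
  namespace: team-a
spec:
  backend:                # IngressBackend: only a name and a port;
    serviceName: my-svc   # there is no namespace field, so my-svc
    servicePort: 80       # must live in team-a alongside the Ingress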

@bigkraig closed this Aug 3, 2018
@patrickf55places commented Aug 3, 2018

@bigkraig I don't think this issue should be closed. The issue isn't about having a single Ingress resource that can span multiple namespaces. It is about having multiple Ingress resources (possibly across different namespaces, but not necessarily) that all use the same AWS application load balancer.

@bigkraig reopened this Aug 3, 2018
@bigkraig (Member) commented Aug 3, 2018

@patrickf55places got it; within a namespace this is possible with the spec, but I am still unsure how we would organize the routes or resolve conflicts.

@spacez320 commented Aug 4, 2018

@bigkraig Well, I think it's both, and I think that's what @patrickf55places meant by saying "possibly across different namespaces, but not necessarily". We should be able to define an Ingress anywhere and share an Amazon load balancer, I think.

I understand if there are limitations in the spec, though. Should someone go out and try to raise this issue with the wider community? Is that possibly already happening?

@natefox commented Aug 15, 2018

What about using something similar to how nginx ingress handles it?
https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/mergeable-ingress-types

Multiple minions can be applied per master as long as they do not have conflicting paths. If a conflicting path is present then the path defined on the oldest minion will be used.
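
A minimal sketch of that master/minion layout, using the annotation names from the linked nginxinc examples (hosts and service names are illustrative):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: site-master
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/mergeable-ingress-type: master   # the master owns the host
spec:
  rules:
    - host: example.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: site-minion-app
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/mergeable-ingress-type: minion   # minions contribute paths
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /app
            backend:
              serviceName: app-svc
              servicePort: 80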

@joegoggins commented Aug 30, 2018

I was glad to find this GitHub issue and also bummed that it seems like it will be a long time before this gets implemented. It smells like there is a lot of complexity associated with the change and potentially not the resources to dig into it. I'm assuming it will be many months, and thus our engineering team is going to switch our technical approach to a different load balancing ingress strategy with AWS costs that scale economically in line with our needs. If that assessment feels wrong, please let me know.

@jakubkulhan commented Sep 26, 2018

I've created another ingress controller that combines multiple ingress resources into a new one => https://github.com/jakubkulhan/ingress-merge

Partial ingresses are annotated with kubernetes.io/ingress.class: merge; the merge ingress controller processes them and outputs a new ingress annotated with kubernetes.io/ingress.class: alb, and then the ALB ingress controller takes over and creates a single AWS load balancer.
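
A minimal sketch of such a partial ingress, based on the description above (names and host are illustrative; ingress-merge's own configuration options are omitted):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service-a
  annotations:
    # ingress-merge picks up this class and emits a single combined
    # ingress annotated with kubernetes.io/ingress.class: alb
    kubernetes.io/ingress.class: merge
spec:
  rules:
    - host: a.example.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: service-a
              servicePort: 80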

@marcosdiez commented Jan 25, 2019

Hey, I might have solved your problem in this PR: #830. Testing and feedback are welcome :)

@kainlite commented Jan 28, 2019

I'm currently using ingress-merge, and while it works, I'm having issues with the health checks: the services I'm exposing do different things by default, and we don't have a standard health check URL for all microservices. Do you have a solution for this? I think the limitation comes from aws-alb-ingress-controller rather than ingress-merge, but if there is a way to have different health checks, that would be awesome. Thanks everyone for your effort.

@kainlite commented Jan 28, 2019

@fygge on slack gave me the answer:

You can put the health check annotation on the service instead of on the ingress resource. Thereby having one health check per target group / service.

Tested and works ok.
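
For illustration, a sketch of that approach, assuming the v1.x controller behavior that health check annotations on a Service take precedence for that Service's target group (names and path are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: orders
  annotations:
    # per-service health check, overriding any Ingress-level default
    alb.ingress.kubernetes.io/healthcheck-path: /orders/healthz
spec:
  type: NodePort            # ALB target groups point at the NodePort
  selector:
    app: orders
  ports:
    - port: 80
      targetPort: 8080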

@mlsmaycon commented Feb 17, 2019

What about using something similar to how nginx ingress handles it?
https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/mergeable-ingress-types

Multiple minions can be applied per master as long as they do not have conflicting paths. If a conflicting path is present then the path defined on the oldest minion will be used.

ALB does that with listener rule priorities, where a new rule gets a lower priority than existing rules (excluding the default rule). The problem comes if you set a priority number that conflicts with an existing rule.

Maybe a new kind of ingress controller is called for here: something that, given the Ingress object, controls only target groups and listener rules and attaches them to an ALB created at controller configuration time (or on the first object request). This is what this issue is asking of the current controller, but for larger organizations this could bring complexity with path or host-header rules, causing problems with overlapping Ingress objects.

@styk-tv commented Feb 20, 2020

I will go a bit off topic here: ALB to ELB (Classic). I have been using ELB-Classic for the past few years, running across very many applications: a single ingress, wildcard certificates terminated at the ELB, and multi-node ingress. I have tried HAProxy, Traefik and Nginx (default) with success. If you really push for the (differently priced) ALB then it's really your issue; if you just want to solve the problem and move on, going back to ELB-Classic may not be a bad idea. It works! Unless you have some extreme examples of why you need multiple cloud load balancers. There are always some issues with a small number of badly written applications (reverse proxying, session stickiness, OAuth redirects), but for the most part you can still get them to work on a single LB.

In the end it's a cost analysis issue. If you want to reduce 5 load balancers to 1 and your time is expensive, then don't do it. If you have 1000 load balancers and you can reduce them to 3, it might be worthwhile. You create the ELB-Classic outside of Ingress-type automation; then you can still map all your services through it: just don't specify the load balancer type, and manually point the ELB-Classic at all the ingress NodePorts.

You only need to do it once. Then, once set up, all your ingress definitions will just match on the "host" header and send traffic to the appropriate service. And the best part: once you have the ELB-Classic in place with wildcard SSL termination (for those cool services or development clusters), every plain ingress you create (with no TLS or load balancer definitions) just works. No work required.

And you can take advantage of the default backend and take it further into the app, where you can handle an unlimited number of subdomains in a single app, all through valid SSL and a single LB. I have been doing this since 2004, and now with Kubernetes it's just fun.
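
A sketch of such a "plain" ingress under this setup (illustrative names): no TLS or load balancer configuration appears here, since the manually created ELB-Classic in front terminates SSL and forwards to the ingress controller's NodePorts.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: my-app.example.com   # matched via the Host header
      http:
        paths:
          - backend:
              serviceName: my-app
              servicePort: 80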

Don't ask AMZ for advice. We love them, but they will just come in, do a session with all your developers, and convince you that it's perfectly OK to use as many ALBs as you want. They will not tell you it's required, but they will plaster all possible documentation with examples where a single "loadbalancer" YAML section looks so innocent to you and makes their bottom line very, very happy.

@ratulbasak commented Apr 15, 2020

Any update on this issue?

@mayconritzmann commented Apr 17, 2020

I have a similar problem.

For each ingress created in the EKS cluster, an ALB goes up in my environment.

Anyone else with this problem?

@brunojcm commented Apr 17, 2020

@ritzmann94 this is not a "problem", it's a "missing feature", I'd say, and that's what this issue is all about. Everyone here is in the same boat.

I personally have been using an NLB with ingress-nginx; all my ingress objects get merged into a single nginx and share the same NLB. I know it's not officially supported, but I got sick of waiting for ALB support.
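
A sketch of that setup (assumed details; names are illustrative): the ingress-nginx controller's Service requests an NLB through the in-tree cloud provider annotation, and every Ingress object handled by nginx then shares that one NLB.

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer        # provisions one NLB for the whole cluster
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https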

@mayconritzmann commented Apr 17, 2020

Hello @brunojcm, when you said: "I know that it is now officially supported, but I was tired of waiting for ALB's support".

I didn't understand it very well; at least in the official documentation we don't have any reference to using just one ALB.

@brunojcm commented Apr 17, 2020

Hello @brunojcm, when you said: "I know that it is now officially supported, but I was tired of waiting for ALB's support".

I didn't understand it very well; at least in the official documentation we don't have any reference to using just one ALB.

Sorry, I got auto-corrected; I meant "not", not "now". I was talking about NLB support, the one I ended up using, not ALB. ALB is supported, just not sharing the same ALB across multiple ingresses. NLB is still in alpha/beta (sorry if I'm not up to date here), but it works with ingress-nginx and supports multiple ingresses sharing the same NLB.

@mayconritzmann commented Apr 17, 2020

Understood. I need to deploy AWS WAF in my environment, and that is not supported with an NLB and ingress-nginx.

Let's wait for this new feature to come.

@fejta-bot commented Jul 16, 2020

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@Tzrlk commented Jul 17, 2020

/remove-lifecycle stale

@msolimans commented Jul 29, 2020

Any updates?

@vprus commented Jul 30, 2020

For the avoidance of doubt, this issue is actually fixed in a 1.2 alpha release, specifically docker.io/amazon/aws-alb-ingress-controller:v1.2.0-alpha.1, and I have a dozen ingresses sharing a single ALB. But that alpha release was made a year ago. It would be very nice to get some clarity on whether it's coming in any official form.

@kirkdave commented Aug 5, 2020

It would be great to get this feature either merged into the latest code or to revitalise v1.2.

I would love to use this feature, but I also want to be able to use IAM Roles for Service Accounts in EKS, and having tested the tag v1.0.0-alpha.1, that support hasn't been merged in (it works great on v1.1.8).

@dmanchikalapudi commented Aug 13, 2020

For the avoidance of doubt, this issue is actually fixed in a 1.2 alpha release, specifically docker.io/amazon/aws-alb-ingress-controller:v1.2.0-alpha.1, and I have a dozen ingresses sharing a single ALB. But that alpha release was made a year ago. It would be very nice to get some clarity on whether it's coming in any official form.

How did you get that to work? Can you share the ingress definitions for a couple of applications? Are you getting it to work by specifying the same ingress name but defining a different rule in each? Also, are you using a package/deployment manager like Helm 3? I believe it has validations against deploying an existing resource (the request to create the ingress does not even get to k8s).

@vprus commented Aug 13, 2020

Here's a complete Helm template used to define the ingress for one particular application.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/group.name: analytics
  labels:
    app: {{ .Release.Name }}
spec:
  rules:
    - host: {{ .Release.Name }}.somecompany.a
      http:
        paths:
          - path: /*
            backend:
              serviceName: {{ .Release.Name }}-jobmanager-external
              servicePort: 8081

We use Helm 3, but the exact same definition worked in Helm 2 too. The key part is the 'group.name' annotation above. Two Helm releases that use this template result in two ingress objects with different names, and then in two target groups used by a single ALB. Other applications define their ingresses in a similar way and also share the same ALB. The deployment of alb-ingress-controller basically uses default options, except for

image: docker.io/amazon/aws-alb-ingress-controller:v1.2.0-alpha.1

@dmanchikalapudi commented Aug 14, 2020

(quoting @vprus's Helm template and notes above)

Thanks! I will give it a try and confirm whether it works for our use cases.

Btw, #914 states that this is not production ready and should not be used. Any idea when it will be available as an official release?

@ffjia commented Aug 26, 2020

IAM Role for Service Accounts

@kirkdave did you make the ALB ingress IRSA work in EKS?

@kirkdave commented Aug 27, 2020

@ffjia It works with IRSA if you either build the Docker image from the branch or, as I did, use the image that @M00nF1sh created in #914 - m00nf1sh/aws-alb-ingress-controller:v1.2.0-alpha.2

@rajeshwrn commented Sep 5, 2020

I came across the same kind of requirement.

I have to deploy 20+ services in Kubernetes on an AWS Fargate profile. Since Fargate does not support NLB as of now, the only option is ALB. But each deployment created a new ALB, which needs 20+ public IPs and also adds more ALB cost.

I achieved a solution with two ingress controllers: ALB ingress and nginx ingress.

Nginx is the target for the ALB on port 80, and application services running in the cluster on different ports and in different namespaces communicate with nginx.

I have documented my solution; I think it will help with your requirement.

https://github.com/rajeshwrn/alb-nginx-controller
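
For reference, a sketch of the fan-out pattern that repository describes (annotation values and names are illustrative): a single "alb"-class Ingress sends all traffic to the nginx ingress controller's Service, and nginx then routes to the individual services.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: alb-to-nginx
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # ip mode for Fargate pods
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: nginx-ingress-controller   # nginx's Service
              servicePort: 80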

@MXClyde commented Oct 6, 2020

Any timelines on when this functionality will be part of a stable release?

@astrived commented Oct 20, 2020

@MXClyde we are doing the final phase of testing for the new aws-load-balancer-controller, and the RC image is here: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/releases/tag/v2.0.0-rc5. You can find details here: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/tree/v2_ga. Stay tuned for what's new, coming soon!

@haalcala commented Jan 31, 2021

Can you guys improve your documentation system, like how MongoDB or Elasticsearch does it, in a way that lets me select which version of the library/system documentation I want to view? I mean, can you not overwrite the published URL with the latest one? I have to maintain some old versions, and it gets confusing trying to remember what I did before (with 1.1.4, for example) while looking at the version 2 docs.
