
Document how IPs map to ingresses for different controllers #276

Closed
neelance opened this issue Feb 14, 2017 · 7 comments


@neelance

I had the same confusion initially. Container Engine's "L7" ingress controller isn't implemented using the familiar model that others have described here.
(source: comment on thread mentioned below)

It seems like GLBC follows a different model than other ingress controllers regarding how Ingress resources map to external IP addresses. This causes some confusion when using ingresses for the first time, especially since GLBC is provided by default on GKE.

Please see the comments on the following issue, especially my summary of the situation: kubernetes/kubernetes#17088 (comment)

@bprashanth
Contributor

Each ingress satisfied through GCE gets a distinct VIP/load balancer. The reason for this is that we can't allocate a single static IP to multiple forwarding rules pointing at different URL maps, and there's a relatively low limit on the number of hostnames/paths you can put in a single GCE URL map. This also simplifies conflict resolution (e.g. two ingresses asking for the same hostname, or the default backend specification).

Each ingress satisfied through nginx gets the same IP as the node the nginx pod is running on. The IP/URL-map limitations aren't as strict with nginx, but at the same time users don't get as much isolation (a high-QPS hostname can completely bog down the nginx instance).
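As a sketch of the difference, consider two separate Ingress resources (the hostnames, service names, and class annotation values here are illustrative, not from this thread):

```yaml
# Two independent Ingress resources. With the GCE controller, each one
# gets its own forwarding rule and therefore its own external VIP.
# With the nginx controller, both are served by the same nginx pod(s)
# and share the IP of the node(s) running nginx.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - host: foo.example.com
    http:
      paths:
      - backend:
          serviceName: foo-svc
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bar-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - host: bar.example.com
    http:
      paths:
      - backend:
          serviceName: bar-svc
          servicePort: 80
```

Swapping the annotation to the nginx controller's class would leave the resources unchanged but collapse both hostnames onto a single shared IP, which is exactly the behavioral difference described above.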

We should clarify this in the docs if it isn't already clear. Right now it is really up to the controllers; e.g. someone could write an haproxy controller that actually spins up an haproxy pod per ingress, and that would match what GCE does, or have a single haproxy pod watch ingresses and do what the nginx controller does. Down the line we will probably have a claims model that explicitly defines how controllers join/don't join ingresses.

If you have the time, I suggest sticking a formatted version of that in https://github.com/kubernetes/ingress/blob/master/docs/admin.md. If you don't, I'll do it whenever I get to it.

@bprashanth bprashanth changed the title confusion about how GLBC maps ingresses to external IP addresses Document how IPs map to ingresses for different controllers Feb 14, 2017
@neelance
Author

Thanks for your quick and detailed answer.

Seems like the behavior of an ingress controller is not defined that strictly yet. IMHO that's a bit unfortunate, since Kubernetes generally does a really nice job of abstracting away the underlying implementations, so one expects the same with ingresses. Now it turns out that an Ingress really is just a chunk of data that can be processed in different ways by the ingress controller.

I think it would be best to add some more documentation to the https://kubernetes.io/docs/user-guide/ingress/#name-based-virtual-hosting section. I'm not super familiar with the topic, so it might be best if you come up with some good documentation.

I'm looking forward to that claims model you mentioned; that may make things much clearer.

@aledbf
Member

aledbf commented Feb 14, 2017

@neelance please check kubernetes/kubernetes#30151

@bprashanth
Contributor

Seems like the behavior of an ingress controller is not defined that strictly yet. IMHO that's a bit unfortunate, since Kubernetes generally does a really nice job of abstracting away the underlying implementations, so one expects the same with ingresses. Now it turns out that an Ingress really is just a chunk of data that can be processed in different ways by the ingress controller.

This space is unique in that there is a lot of fragmentation between backends (in comparison, there are really only two container runtimes, each with fewer tunables than a webserver, and many storage providers but MUCH fewer tunables for each), and we don't want to maintain all of them in tree (maintaining Service.Type=LoadBalancer in tree turned out to be a net drag on the project, and there's no bare-metal counterpart for the same reason: no one wrote a Type=LB backend for cloud=bare-metal). So we have a plugin model for L7. Cloud providers deploy a default configuration that programs cloud LBs, but users are free to either swap it out for their own, or run controllers in a pipeline.

The doc you listed calls out the need for ingress controllers (https://kubernetes.io/docs/user-guide/ingress/#ingress-controllers). I think it would be neat to create an "uber" nginx controller that deployed an nginx per Ingress, but not enough people have asked for it :)

@neelance
Author

@aledbf Thanks for the link.

@bprashanth Yes, it calls out the need for ingress controllers, but as a beginner your likely first choice is the GLBC provided by GKE, and you just expect it to do roughly the same as the other ingress controllers. IMHO the "Name based virtual hosting" section could at least use a hint along the lines of: "Depending on the ingress controller, this may also be achieved by using multiple Ingress resources, even in different namespaces."
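For context, the single-resource form of name-based virtual hosting that the docs describe looks roughly like this (hostnames and service names are illustrative); the suggested hint would note that some controllers also accept the same rules split across separate Ingress resources:

```yaml
# One Ingress with two host rules. Depending on the controller, the
# same routing may also be expressed as two separate Ingress resources,
# possibly in different namespaces.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: virtual-hosts
spec:
  rules:
  - host: foo.example.com
    http:
      paths:
      - backend:
          serviceName: foo-svc
          servicePort: 80
  - host: bar.example.com
    http:
      paths:
      - backend:
          serviceName: bar-svc
          servicePort: 80
```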

@bprashanth
Contributor

Yeah, I think that'd be a useful clarification. Please open a docs PR if you have time: https://kubernetes.io/editdocs/#docs/index.md

@neelance
Author

With which text? The exact sentence I suggested above?

@aledbf aledbf closed this as completed Oct 7, 2017