
Ingress claims #30151

Closed
bprashanth opened this issue Aug 5, 2016 · 18 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/network Categorizes an issue or PR as relevant to SIG Network.

Comments

@bprashanth
Contributor

bprashanth commented Aug 5, 2016

The initial ingress proposal hinted at this (https://github.com/kubernetes/kubernetes/pull/12827/files#diff-41f2bde570ebc813183b7cd0a96a7e04R106) and I'm starting to see a need for it as Ingress evolves.

We need a way to claim a DNS name and bucket it to a QoS tier so you can say "I want a bronze ingress to loadbalance foo.com" and not have to worry about:

  • anyone else stealing foo.com
  • what provides the ip behind foo.com
  • provisioning the DNS record with that ip and keeping it in sync in case the ip changes

Here's what I'm proposing at a high level:

  • API
    • Ingress claim object that contains: a list of hostnames, a QoS tier
    • A pointer from Ingress to a single claim
  • Provisioning
    • The claim is created by the user
    • The claim is provisioned by a controller, just like the ingress pointing to the claim.
    • The claim-provisioner might insert cluster local records ("internal" ingress), public records, or both.
  • Validation
    • The apiserver validates that claims don't clash.
    • If an Ingress has routing rules for a host that isn't in the claim it points to, those rules should no-op; this is enforced at the ingress-controller layer, not at apiserver validation time.
  • Claims can exist without ingresses and vice versa.
    • An ingress created without a claim does not exclusively own the hostnames in its rules map, meaning if I create an ingress with a foo.com rule and someone else creates a claim on foo.com, my ingress stops working unless I point it at the claim (somewhat consistent with burstable vs guaranteed QoS -- apps following the strict spec get guarantees).
    • A claim created without an Ingress is just a static-ip bound to DNS.
  • Multiple ingresses can point at a single claim, but an Ingress can only reference one claim.
    • Merge path rules in the ingress controller, which implements some policy like longest-prefix matching. If the same path points to two different Services, we sort the competing Ingresses by creation timestamp and take the first one (consistent with overlapping selectors).
  • TLS is completely left to the ingress. This means a claim with a list of DNS names can be handled via plain HTTP, a wildcard/SAN cert, or SNI.
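The API shape proposed above might be sketched as follows. This is purely illustrative: no IngressClaim kind or claimName field exists in the Kubernetes API, and every name here is an assumption drawn from the bullets above.

```yaml
# Hypothetical sketch only -- IngressClaim and the claimName pointer
# are proposed, not real API objects; all fields are illustrative.
apiVersion: extensions/v1beta1
kind: IngressClaim
metadata:
  name: foo-claim
spec:
  hostnames:            # the list of hostnames this claim owns
  - foo.com
  - www.foo.com
  qosTier: bronze       # bucket the claim into a QoS tier
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-ingress
spec:
  claimName: foo-claim  # pointer from the Ingress to a single claim
  rules:
  - host: foo.com       # must be covered by the claim, or this rule no-ops
    http:
      paths:
      - path: /
        backend:
          serviceName: foo-svc
          servicePort: 80
```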

@kubernetes/sig-network @smarterclayton

@bprashanth
Contributor Author

Ports are an open question. Many cloud providers restrict requestable ports, but if we allow L4 ingress we need an exclusive lock on an arbitrary port.

@ddysher
Contributor

ddysher commented Oct 13, 2016

@bprashanth as mentioned in the other thread, we'll take ingress claim (and possibly more on ingress). @mqliang will update the write-up; any objections?

@thockin
Member

thockin commented Oct 13, 2016

Is the ingress claim acting as an IP address placeholder? I'm not sure I understand.

In the storage analog, there is a global resource (actual storage) against which a claim denotes ownership. What is the global resource? When we spoke of ingress claims before, I was thinking about IPs, but I am not sure what you're thinking here.

You mention validation, but we don't really do cross-object validation today. If the object name were the sole thing to cross-check, you would get that for free, but if you need more than that it is harder.


@mqliang
Contributor

mqliang commented Oct 13, 2016

Is the ingress claim acting as an IP address placeholder? I'm not sure I
understand...

IIUC, it should be an Ingress Service (backed by several Ingress Pods) acting as a VIP address placeholder.

What is the global resource

I think it should be the Ingress Service. That is: the cluster admin deploys all kinds of Ingress Pods (nginx, haproxy, or a cloud LB) and creates several Ingress Services with VIPs. Then when a user creates an Ingress resource, they can claim "I want an Ingress Service to loadbalance foo.com for me" by creating an IngressClaim.

@thockin
Member

thockin commented Oct 13, 2016

I do not understand. What you are describing is classes (analogous to StorageClass), not claims.


@mqliang
Contributor

mqliang commented Oct 13, 2016

For example:

  • The cluster admin creates:
    • a nginx-class IngressService A (backed by several nginx pods for HA), with a VIP of 172.10.10.1
    • a haproxy-class IngressService B (backed by several haproxy pods for HA), with a VIP of 172.10.10.2
  • A user claims "I want a nginx-class IngressService to loadbalance foo.com for me" by creating an IngressClaim.
  • The ingress-claim-controller binds IngressService A to the IngressClaim.
  • Before binding, the IngressService will _NOT_ loadbalance any requests.
  • After binding, IngressService A will _ONLY_ loadbalance for foo.com.
  • The user gets the IP 172.10.10.1 from the IngressClaim status, then configures DNS so foo.com resolves to 172.10.10.1.
  • Another user then claims "I want a gce-lb-class IngressService to loadbalance bar.com for me".
  • Since no gce-lb-class IngressService exists, the ingress-claim-controller needs to dynamically provision one.
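The example above might be rendered roughly like this. Everything here is a hypothetical sketch: IngressService and IngressClaim are proposed types, and the spec/status fields are invented to match the steps described.

```yaml
# Hypothetical sketch of mqliang's example -- IngressService and
# IngressClaim do not exist in the Kubernetes API.
apiVersion: extensions/v1beta1
kind: IngressService
metadata:
  name: ingress-service-a
spec:
  class: nginx            # backed by several nginx pods for HA
  vip: 172.10.10.1
---
apiVersion: extensions/v1beta1
kind: IngressClaim
metadata:
  name: foo-claim
spec:
  class: nginx            # "I want a nginx-class IngressService"
  hostnames:
  - foo.com
status:
  boundService: ingress-service-a  # set by the ingress-claim-controller
  ip: 172.10.10.1                  # user points foo.com DNS at this IP
```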

@ddysher
Contributor

ddysher commented Oct 13, 2016

As briefly outlined in this comment, I'm also not sure the ingress claim should hold the VIP. I tend to agree with @mqliang, but we could be wrong, as there seems to be a lot of discussion around ingress.

If we follow the lessons from StorageClass, this really should be two things: one that actually holds the VIP (IngressService), and one that holds a 'flavor' or 'profile' (IngressClass)?

@thockin
Member

thockin commented Oct 14, 2016

That's not a claim, that's a class. If you want to draw analogies...

Pod uses PVClaim
PVClaim references a StorageClass by name
StorageClass represents the implementation of a PV
PVClaim represents the right to use a particular PV instance
PV represents a concrete piece of storage
When the PVClaim is deleted, the cleanup policy dictates what happens to the PV

in real terms:

I (the user) hold a lease (a claim) on a car
The lease says the car is to be a Lamborghini Aventador (the class)
When I bought the lease, I was given a particular car (the instance)
As long as my lease is paid up, I can drive that particular car
When my lease expires, the car will be sold for scrap (the cleanup policy)

in ingress terms, it should be something like:

L7Config uses an IngressClaim
IngressClaim references an IngressClass
IngressClass dictates the actual load-balancer tech (nginx, haproxy, gclb, ...)
IngressClaim represents the right to use a particular Ingress
Ingress represents a concrete HTTP endpoint
When the IngressClaim is deleted, the cleanup policy dictates what happens to the Ingress

I believe OpenShift (which predated ingress) says L7Config = Route and Ingress = Router, and there's only one Router, so no claims.

Now, we already used the word "Ingress" to mean both the URL map and the claim at the same time, so this sort of falls apart. But you can imagine evolving to a model where the Ingress represents a single load-balancer (be that a single IP or an ELB-like thing), and something else represents a desire to use an Ingress. We would bind that request to a particular ingress, based on the requested class. Where this fundamentally differs from storage is that an IngressClaim->Ingress binding is non-exclusive (one ingress with one IP can host many sites).

TL;DR: This bug is talking about classes, not claims. I think both are interesting.
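The chain described in this comment could be sketched as below. This is purely illustrative: none of these kinds or fields exist, and L7Config, the className reference, and the reclaimPolicy/boundIngress fields are all invented to mirror the analogy.

```yaml
# Hypothetical rendering of the chain L7Config -> IngressClaim ->
# IngressClass -> a concrete Ingress. All kinds and fields invented.
kind: IngressClass
metadata:
  name: gclb
provisioner: gce-l7      # dictates the actual load-balancer tech
---
kind: IngressClaim
metadata:
  name: site-claim
spec:
  className: gclb        # references an IngressClass by name
  reclaimPolicy: Delete  # cleanup policy when the claim is deleted
status:
  boundIngress: lb-1234  # the right to use a particular Ingress instance
---
kind: L7Config
metadata:
  name: site-urlmap
spec:
  claimName: site-claim  # the URL map, split out from the claim
  rules:
  - host: foo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: foo-svc
          servicePort: 80
```

Note the binding here would be non-exclusive: many L7Configs and claims could ultimately share one bound Ingress/IP.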


@smarterclayton
Contributor

The claimed resource I most care about is public DNS name. So user A and user B can't both be exposed under the same ingress controller as www.google.com unless one of them can prove they own www.google.com. Although it's only weakly a claim, so other tools can be used to solve it.

We have moved much further down having multiple routers; at this point we are seeing lots of multiple-router deployments, where the same route is exposed to different edges with different names or different networks (i.e. router 1 exposes to the intranet, and router 2 exposes a subset of those routes via labels to the public internet). I think the use cases for ingress don't necessarily resolve to only one ingress controller exposing an ingress, unless we can make a really strong case that there aren't multiple routers and types of networks possible in most infrastructures.

@mqliang
Contributor

mqliang commented Oct 14, 2016

in ingress terms, it should be something like:

L7Config uses an IngressClaim
IngressClaim references an IngressClass
IngressClass dictates the actual load-balancer tech (nginx, haproxy, gclb, ...)

Couldn't agree more. This is exactly what I want to express.

In a bare-metal environment, we want the load balancer (nginx, haproxy) to be HA, so I introduced the "Ingress Service" term: the IngressClaim is bound to an IngressService (backed by the actual load balancers), instead of to a load balancer directly.

I believe OpenShift (which predated ingress) says L7Config = Route and
Ingress = Router, and there's only one Router, so no claims.

In the single-tenant case, it's reasonable to have "only one router, no claims", since one cluster usually has only one DNS name. But in the multi-tenant case, one load balancer with one IP can host many sites, so we need the IngressClaim mechanism to:

  • allow administrators to create a taxonomy of load balancers, and allow users to claim those existing load balancers. And since the IngressClaim->Loadbalancer binding is non-exclusive, we may need a scheduler to make metrics-based binding decisions.
  • allow users to dynamically provision load balancers.

@mqliang
Contributor

mqliang commented Oct 14, 2016

@smarterclayton

I think the use cases for ingress don't necessarily resolve to only one ingress controller exposing an ingress

Yes, one load balancer with one IP can host many sites, and again, that's why I proposed an "ingress scheduler" to make metrics-based binding decisions in #34013

@thockin
Member

thockin commented Oct 14, 2016


The claimed resource I most care about is public DNS name. So user A and user B can't both be exposed under the same ingress controller as www.google.com unless one of them can prove they own www.google.com. Although it's only weakly a claim, so other tools can be used to solve it.

This is solvable within Kubernetes, perhaps, but we'd need a clearer picture of all the below claims and classes and what they mean. This has languished because of lack of ownership bandwidth.

We have moved much further down having multiple routers; at this point we are seeing lots of multiple-router deployments, where the same route is exposed to different edges with different names or different networks (i.e. router 1 exposes to the intranet, and router 2 exposes a subset of those routes via labels to the public internet). I think the use cases for ingress don't necessarily resolve to only one ingress controller exposing an ingress, unless we can make a really strong case that there aren't multiple routers and types of networks possible in most infrastructures.

This is largely in alignment with what we are seeing, and part of why I wrote this up. The distinction between the URL map and the actual ingress point to the cluster might be important. Ingress today covers both.

Additionally, I think it is interesting, especially in an env like Google Cloud, which has a lot of degrees of freedom in the API, to offer a single-IP L7 Ingress option and an IP-per-urlmap option. To achieve that, we NEED something like classes.

@marun
Contributor

marun commented Oct 18, 2016

Why should an ingress claim tie together public DNS names and a QoS tier? Wouldn't it make more sense to have separate claims for each?

@marun
Contributor

marun commented Oct 18, 2016

I think reserved DNS names would make sense in all cases. I'm less clear on why QoS would be something that ingress controllers should be required to support.

@knobunc
Contributor

knobunc commented Oct 18, 2016

Over on OpenShift we have customers who want to claim a path too. So they have a hosting site with a standard DNS name, but each project wants to claim the first directory in the URL. For our existing router we might accommodate this by adding a "CLAIM_PATH_DEPTH" argument to the router pod; it would then assign routes to namespaces based on the given depth (defaulting to 0).

I'm not sure how to handle that with claims. Perhaps we put in a domain-level claim that says "allow sub-claims of this directory depth below me"?
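The domain-level sub-claim idea might look something like this. This is a speculative sketch: IngressClaim itself is only a proposal, and the subClaimPathDepth and parentClaim fields are invented here to illustrate the "allow sub-claims below me" semantics.

```yaml
# Hypothetical domain-level claim that permits sub-claims one path
# segment deep; nothing here is a real Kubernetes API object.
kind: IngressClaim
metadata:
  name: hosting-site
spec:
  hostnames:
  - apps.example.com
  subClaimPathDepth: 1   # allow sub-claims of this directory depth below me
---
kind: IngressClaim
metadata:
  name: project-a
spec:
  parentClaim: hosting-site
  paths:
  - /project-a           # the first directory under apps.example.com
```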

@fejta-bot

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 18, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 17, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/close
