L7 Loadbalancing #12827

Closed

Conversation

9 participants
@bprashanth
Member

bprashanth commented Aug 17, 2015

It's difficult to justify the proposal without hinting at a grand unified theory of loadbalancing. Reference implementation to make what's in scope for 1.1 clear: #12825

@thockin @smarterclayton @pweil- @bgrant0607

@k8s-bot


k8s-bot commented Aug 17, 2015

GCE e2e build/test passed for commit a2ccaa8.

@smarterclayton

Contributor

smarterclayton commented Aug 17, 2015

docs/proposals/loadbalancing.md
+
+### What does the apiserver validate vs the loadbalancer controller?
+
+Not all loadbalancer backends will support all IngressPath configuration. There will be some confusion around validation errors. The apiserver will validate API constituents and nothing more (i.e. that an IngressPath points to a valid service, that the tls modes match up to the provided secrets, etc.).

@smarterclayton

smarterclayton Aug 17, 2015

Contributor

It's sufficient if we have a way for load balancer controllers to write back their status in a future revision

docs/proposals/loadbalancing.md
+
+### How does public DNS work?
+
+Most of this is TBD. With the current model, if someone specifies a hostname the loadbalancer controllers assume they know what they're doing. If there are 2 IngressPaths with the same hostname and url endpoints, one of them will win. This is just like the overlapping labels rc problem.

@smarterclayton

smarterclayton Aug 18, 2015

Contributor

This is ultimately a policy and security decision that admins make (an admin may want to split a workload between two namespaces at a balancer, and thus allow hosts to conflict). A different controller may choose an ordering to impose, where first come wins. There are too many aspects to have one solution. Replaceable controllers are more appropriate.

docs/proposals/loadbalancing.md
+ name: foobarsvc
+ port:
+ name: foobarport
+ port: 80

@smarterclayton

smarterclayton Aug 18, 2015

Contributor

Putting port here may be too prescriptive. An F5 may impose limits. The controller may be able to default/ignore/use these as guidelines.

@smarterclayton

smarterclayton Aug 18, 2015

Contributor

I think these should be requested ports and the actual ports should be in the status.

@bprashanth

bprashanth Aug 18, 2015

Member

Confused by your comment, these are service ports. The current resource has no way to request a specific loadbalancer port, one will be allocated and show up in the status. In the claims model the claim can have an ip:port. I was hoping to punt on port requests for now.

@smarterclayton

smarterclayton Aug 18, 2015

Contributor

It was not at all clear what port, targetPort pointed to. Comments would be good. I assumed targetPort was the port on the endpoints.

Is nodePort necessary? A load balancer that points to the endpoints directly doesn't need nodePort? I assumed most loadbalancers would bypass the service and just look at endpoints.

@bprashanth

bprashanth Aug 18, 2015

Member

Ironically, nodePort is the only thing we need on gce and aws. Both are very instance centric, we can't plug endpoint ips into the loadbalancer. The ports here are ServicePort objects (https://github.com/kubernetes/kubernetes/pull/12825/files#diff-9ff06ad723720f9428a65da3710cf436R1140) so the targetPort is the port on the endpoint.
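
To make the port semantics in this thread concrete, here is an annotated sketch of a single pathMap entry. The names and values follow the proposal's own example; the comments reflect my reading of the discussion above rather than the proposal text.

```yaml
"/foobar/*":
  service:
    kind: Service
    name: foobarsvc
  port:                 # this block is a ServicePort
    name: foobarport
    port: 80            # port exposed by the Service
    targetPort: 8082    # port on the backend pod/endpoint
    nodePort: 3233      # node-level port; instance-centric clouds (GCE/AWS)
                        # point their loadbalancers here rather than at
                        # endpoint ips
```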

@smarterclayton

smarterclayton Aug 18, 2015

Contributor

Would it then not be clearer to nest this under the service reference
somehow? The two together certainly were confusing in a casual view.
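
For illustration only, one possible shape for the nesting suggested here (not part of the proposal): the port block moves under the service reference, so the two always read together.

```yaml
"/foobar/*":
  service:
    kind: Service
    name: foobarsvc
    port:
      name: foobarport
      port: 80
```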


@bprashanth

bprashanth Aug 18, 2015

Member

fair enough, will give it some thought

@bprashanth

bprashanth Aug 24, 2015

Member

PTAL new resource. It's more succinct; though controllers might need to look up the service, I think this should be ok, since they're probably already watching it.

docs/proposals/loadbalancing.md
+
+In the context of 1.1 a cluster will only have one l7 loadbalancer controller that claims all IngressPaths, so we don't need claims. In the larger scheme of things we might still not need claims; however, they solve the impedance mismatch problem (why do I need a loadbalancer to expose my service?) because a claim is how you get an ip. If you want to expose your service, you need a public ip for it. No matter where you're running you can get one by creating a claim. If you want something to use an ip you already have, create a claim for it.
+
+The downside is the user needs to create another resource. We can make this easier by defaulting claim-less ingress paths to a new claim.

@smarterclayton

smarterclayton Aug 18, 2015

Contributor

An ingress is close enough to a claim for now that practical claim and explicit scenarios can coexist.

@smarterclayton

smarterclayton Aug 18, 2015

Contributor

Unlike pods, I think ingress points can and will change who consumes them over time.

@bprashanth

bprashanth Aug 18, 2015

Member

that's why I want claims in the long run (to achieve that separation). I might've misunderstood your second comment if you're arguing against explicit claims.

@smarterclayton

smarterclayton Aug 18, 2015

Contributor

I'm arguing against claims in the short / medium term. An Ingress is a request, and claims can be implicit by each controller, or implemented via annotations if necessary.

@bprashanth

bprashanth Aug 18, 2015

Member

Agreed, I'm not planning on claims before October. Being explicit about which ip one wants to join has some advantages, but I don't think we need to reach a decision right now.

@smarterclayton

smarterclayton Aug 18, 2015

Contributor

It hasn't come up for us yet except when ops wants to fine tune a
particular case, and in all of those scenarios they can configure the
routers as they need.


@bprashanth

bprashanth Aug 18, 2015

Member

Without some form of explicit joining a loadbalancer controller will have to commit to either joining everything by default, or nothing at all. Both sides have their demerits. I see claims as solving 2 problems, each of which we can solve individually:

  1. which class of loadbalancer to join
  2. which specific loadbalancer to join

1 is more important, because multiple classes are certainly going to exist on cloud clusters. Especially if people are porting over existing configs and just want to try out this cloud thing. However 1 is also solvable by shoving the class into the ingress path.

2 is important because cloud loadbalancers have some artificial constraints (eg: gce won't allow multiple certs per loadbalancer ip). Having the user tell the controller what they want to join is a lot easier than having the controller second guess the loadbalancer featureset the cloud provider supports (which usually varies with time of day).

As a bonus, claims decouple getting a public ip from setting up a loadbalancer. I can have a class=cheap-and-dirty that exposes getting-started services on a non-loadbalanced ip via a claim. On gce all that would do is use a node ip and set up the right firewall rules.
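
A rough sketch of how a claim might separate "get me an ip of this class" from the ingress path that consumes it. The resource kind and field names are assumptions for illustration only; claims are explicitly not planned for 1.1.

```yaml
apiVersion: v1
kind: IngressClaim        # hypothetical resource
metadata:
  name: website-ip
spec:
  class: gce              # 1. which class of loadbalancer to join
---
apiVersion: v1
kind: IngressPath
metadata:
  name: l7ingress
spec:
  claim: website-ip       # 2. which specific ip/loadbalancer to join
  pathMap:
    "/foobar/*":
      service:
        kind: Service
        name: foobarsvc
```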

docs/proposals/loadbalancing.md
+
+Each IngressPath specifies a single secret and a tls mode. It's up to the loadbalancer controller to handle the tls mode. For example, the gce controller will only support TLS termination for 1.1 because each IngressPath gets a new loadbalancer, and GCE doesn't allow multiple certs per IP. With the current design a user can create the wrong type of secret for the wrong mode. They wouldn't know till they try creating the Ingress path. Validating a simple key:value secret is also hard.
+
+Some alternatives to the proposed tls mode handling:

@smarterclayton

smarterclayton Aug 18, 2015

Contributor

Even if the user provides a secret the admin may choose not to expose it, or have a policy that controls that. I think we have to allow for the possibility that everything in spec is a request, and either represent the outcome in status or provide an alternate way to communicate that. Ingress as a request with concrete status allows variation among installations but consistency among clients. A particular implementation can simply not report data that is not relevant/possible
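
A sketch of what "everything in spec is a request" could look like for tls, with the controller reporting the outcome in status; the status fields here are assumptions, not part of the proposal.

```yaml
spec:
  host: www.example.com
  tlsMode: Termination        # requested by the user
  secret: foosecret
status:
  tlsMode: Termination        # what the controller actually configured;
                              # omitted if not relevant/possible
  loadBalancer:
    ingress:
    - ip: 178.91.123.132
```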

+* Tls modes other than termination
+* Persistent sessions
+* Loadbalancing algorithms (cloud provider support at this level is limited)
+* Loadbalancer health checks: are they even necessary given that pods have probes?

@smarterclayton

smarterclayton Aug 18, 2015

Contributor

To give a concrete example of where OpenShift might go here:

  • provide a conversion between routes and ingress paths, store additional metadata on annotations for now
  • continue to want to support all the existing modes (pass through, reencrypt, etc)
  • each router instance is effectively a load balancer controller, so we would add flags to the routers to describe the scope of selected input ingress (label, namespace, or field selectors)
  • implement first come - first served ingress paths for conflicts with flags for rules about what to do for partial conflicts
  • port the internal representation of the config based router to IngressPaths and Endpoints

I don't see any obstacles to the above based on the proposal.

@bprashanth

bprashanth Aug 18, 2015

Member

Getting the OpenShift F5 router (which is currently a pending pr iirc) checked in as a loadbalancer controller to kube would be nice. Mostly because there's no way I'm going to get my hands on an F5 to write it on my own.

@ramr

ramr Aug 26, 2015

Member

👍 And hopefully that should land soon.

docs/proposals/loadbalancing.md
+ host: www.example.com
+ tlsMode: Termination
+ secret: foosecret
+ pathMap:

@bprashanth

bprashanth Aug 18, 2015

Member

@brendandburns suggested including subdomain in a single resource (something I'd planned to achieve via joining). His reason makes sense, from a UX perspective having a single place to manage the ingress points of an entire website is better.

I experimented a little with:
flat list

  pathMap:
  "foo.domain.tld/foo/*"
    service:
    port:
  "foo.domain.tld/bar/*"
    service:
    port:
  "bar.domain.tld/foobar/*"
    service:
    port:

vs nested path matchers

  pathMap:
  "foo.domain.tld"
    "/foo/*":
      service:
      port:
    "/bar/*":
      service:
      port:
  "bar.domain.tld"
    "/foobar/*":
      service:
      port:

I find the latter easier to think about. So I'm going to convert this from the map of path->backend that it currently is, to a nested map. In the simple case this nested map will only have one subdomain (*). The host in the spec will be used to validate that there's only a single domain per ingress point.
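
Filled in with the service/port shape used elsewhere in the proposal, the nested form might look like the following; the service/port names under foo.domain.tld are illustrative.

```yaml
pathMap:
  "foo.domain.tld":
    "/foo/*":
      service:
        kind: Service
        name: foosvc
      port:
        name: fooport
        port: 80
    "/bar/*":
      service:
        kind: Service
        name: barsvc
      port:
        name: barport
        port: 80
  "bar.domain.tld":
    "/foobar/*":
      service:
        kind: Service
        name: foobarsvc
      port:
        name: foobarport
        port: 80
```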

@smarterclayton

smarterclayton Aug 25, 2015

Contributor

Aren't maps verboten? Also, what if I don't want to allow users to pick a
domain, but give them one? Do they just fill out random values? Array
fixes both of those problems.


@bprashanth

bprashanth Aug 25, 2015

Member

Aren't maps verboten? Also, what if I don't want to allow users to pick a
domain, but give them one? Do they just fill out random values? Array
fixes both of those problems.

The way I currently have it, it's a map of lists, where the map is for the subdomain (https://github.com/kubernetes/kubernetes/pull/12827/files#diff-41f2bde570ebc813183b7cd0a96a7e04R137). You're not allowed to specify multiple domains per ingresspoint resource (which actually makes having the domain at the end there pretty redundant). If you want to request a host you'd leave that Spec.Host field blank, and one will be assigned and used with your subdomains.

It wouldn't be too hard to do this as a list of lists if you have a use case. I used a map because you can't have multiple path regex lists for the same subdomain.

@smarterclayton

smarterclayton Aug 25, 2015

Contributor

Ultimately we said that all public APIs shouldn't use maps: api-conventions.md (https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api-conventions.md#lists-of-named-subobjects-preferred-over-maps).
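
Per that convention, the same data could be expressed as lists of named subobjects instead of maps; the field names below (host, paths, path) are assumptions for illustration only.

```yaml
pathMap:
- host: foo.domain.tld
  paths:
  - path: "/foo/*"
    service:
      kind: Service
      name: foosvc
  - path: "/bar/*"
    service:
      kind: Service
      name: barsvc
- host: bar.domain.tld
  paths:
  - path: "/foobar/*"
    service:
      kind: Service
      name: foobarsvc
```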


@bprashanth bprashanth referenced this pull request in kubernetes/contrib Aug 19, 2015

Closed

F5 LoadBalancer #12

@k8s-bot


k8s-bot commented Aug 24, 2015

GCE e2e build/test passed for commit f5ee32c.

+ secret: foosecret
+ pathMap:
+ "foo.example.com":
+ - url: "/foo/*"

@ramr

ramr Aug 26, 2015

Member

imho, this is not really a URL (there's no scheme), so path may be more appropriate here.
Also, does the trailing /* need to be explicitly defined? Is it a string or a regexp? Why not match based on the path components - making /foo equivalent to /foo/* - and let a regexp handle the remaining use-cases (esoterically designed uri paths)?

@bprashanth

bprashanth Aug 31, 2015

Member

The API server will run through some basic validation for the path regex, but otherwise it's up to the loadbalancer controller to deal with it. If it wants to make /foo == /foo/* it can.

Renaming url to path sgtm

@a-robinson

a-robinson Sep 13, 2015

Member

This behavior is part of the API though, so it should at least be defined.

@bprashanth

bprashanth Sep 14, 2015

Member

Meaning the resource says: you specify a path regex and the loadbalancer handles it, right? The problem with validation here is that if you establish gce as the baseline, you're going to end up validating against a ton of features gce doesn't support but other loadbalancers do. Either way, I consider validation an implementation detail.

+ www.example.com -> |terminate ssl| 178.91.123.132 -> / foo    s1
+                                                      / bar    s2
+                                                      / foobar s3
+ ```

@ramr

ramr Aug 26, 2015

Member

Hmm, if I understood this correctly, the 1 loadbalancer and 1 ip address might be somewhat of a point of contention from an HA and scale standpoint. For a smallish environment this would work, but it breaks down for a multi-tenant environment (unless you shard to make it work) or for an application at higher scale/load, where one instance (and network interface - bare metal or virtual) might not be sufficient from a bandwidth/throughput perspective.
Maybe call it an LB logically, but the LB itself could be comprised of multiple instances (a bank of proxies/workers) - btw, that does imply multiple IPs, as it could also be a single instance with multiple interfaces.

@ramr

ramr Aug 26, 2015

Member

Still reading this doc in raw form - I see that you mentioned this is for 1.1 here ... so ignore if it's already addressed.

@bprashanth

bprashanth Aug 31, 2015

Member

I'm trying to do the absolute minimum to have something useful for 1.1. In the long run I'm planning on doing something like your logical lb with claims and joining.

+
+__Terminology__:
+* **Ingress points**: A resource representing a collection of inbound connections from the external network that would be satisfied by a load balancer. Similar to GCE's `UrlMaps` or OpenShift's `Routes`.
+* **Claim**: A resource that represents a claim on an IP address.

@ramr

ramr Aug 26, 2015

Member

Could claims also be used for ingress point names? Ala more than 1 request "claiming-to-own" www.example.org - it could still be that usain bolt wins (first one in), but the others may get rejected/allotted another [aka a generated] name based on whatever policies are in place.

@bprashanth

bprashanth Aug 31, 2015

Member

Are the automatically allocated hostnames cluster private? Since public hostnames cost money we will probably need a way to claim just an ip (will they just fill in some garbage hostname in your model, to prevent a costly public dns allocation?). Also claims are meant to indicate which class of loadbalancer you want to use in a multi-lb environment, so you can still have example.com/test backed by nginx (class: gold) and example.com/prod backed by haproxy (class: silver).

For the 1.1 model, 2 ingresspoints can have example.com as host and different paths and they'll always get 2 ips. If they have overlapping paths, the loadbalancer controller will apply some policy to elect a winner (eg: longest prefix).

@ramr

ramr Sep 1, 2015

Member

Ah, k wasn't thinking specific to gce. Hmm, so were you thinking of some sort of DNS registration/record updates? Wasn't thinking of walking down that road [at least not for now] ...

One use-case could be hostnames for a privately managed cluster - with the host names restricted within a specific subdomain e.g. *.example.test and throw in a wildcard dns record into the mix. Optionally [sort of orthogonal here], those hostnames could also be automatically generated using that example.test suffix - ala: foo-bar.example.test.

@bprashanth

bprashanth Sep 1, 2015

Member

For my own education, can you explain why one would want to couple hostname allocation and loadbalancing ingress? say I:
create a claim: {name: foo, host: *example.test}
create an ingresspoint: {claim: foo, host: left blank - populated to 123.example.test, pathmap: /foo -> svc1}

Now everything in my private cluster can access 123.example.test/foo to hit svc1? do you just have your cluster dns and your loadbalancer watching ingresspoints? If i want an ip for ingress from the outside internet, I just create a claim without a hostname and ignore the one generated?

I'm having trouble reconciling the concept of: this is how you get an ip to enter your cluster, and: this is how you get a cluster private hostname. Are they the same?

@ramr

ramr Sep 4, 2015

Member

So a bit simpler than that - the use case really would be for a hosting provider that:
o Owns a wildcard DNS record for *.example.test
o Allows users to create their apps/services/pods on a k8s cluster.

Those user services (broader term, not k8s services) are now made available internally and externally via the generated host names. The internal/external names could be different, but I don't see much point in that if the name is unique and the domain is anyway managed by the hosting provider. And then it's just a matter of connecting / routing external traffic to the internally managed kubernetes cluster.

@bgrant0607

bgrant0607 Sep 17, 2015

Member

Re. internal vs. public DNS names: Sounds similar to cluster IPs and public IPs / external load balancers.

+This is an example ingress path. It encapsulates:
+* L7 proxying information (i.e map these urls to these backends)
+* Security configuration (i.e tlsMode, secret)
+* Metadata needed for more advanced loadbalancing (e.g session persistence)

@ramr

ramr Aug 26, 2015

Member

yeah, would be good to have session affinity support.

+
+// IngressPointStatus describes the current state of an ingressPoint.
+type IngressPointStatus struct {
+ Address string

@ramr

ramr Aug 26, 2015

Member

Addresses as an array of strings maybe (handles the multiple IPs case)?
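
For illustration, a status carrying the suggested plural field might look like the following; the field name and the second address are assumptions, not part of the proposal.

```yaml
status:
  addresses:
  - 178.91.123.132
  - 178.91.123.133
```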

+
+In the context of 1.1 a cluster will only have one l7 loadbalancer controller that claims all IngressPoints, so we don't need claims. In the larger scheme of things we might still not need claims; however, they solve the impedance mismatch problem (why do I need a loadbalancer to expose my service?) because a claim is how you get an ip. If you want to expose your service, you need a public ip for it. No matter where you're running you can get one by creating a claim. If you want something to use an ip you already have, create a claim for it.
+
+The downside is the user needs to create another resource. We can make this easier by defaulting claim-less ingress paths to a new claim.

@ramr

ramr Aug 26, 2015

Member

Wouldn't this mean an empty ingress point map - the map key is the claim value, right? We wouldn't be able to specify what service (backend) to talk to in that case. Or did you mean Url (really just the path is the claim) and the host has to be always specified?

@bprashanth

bprashanth Aug 31, 2015

Member

The claim is the name of the claim resource; it isn't in the current IngressPoint resource because I'm not planning to do it for 1.1. I want to be able to allocate different loadbalancers for different paths of the same hostname, so it needs some fleshing out after the initial design.

+
+For 1.1, there will only be one loadbalancer controller that claims all ingress paths. Whether it creates a new loadbalancer or reuses an existing one is up to the controller. For example, the GCE controller will create 1 loadbalancer per ingress path; if it has `type: network` this will be an l4, and if it has `type: application` it will be an l7 with rules to allow proxying to different backends based on the given pathmap.
+
+However we need to provide a way for users to specify which class of loadbalancers to join, and which particular loadbalancer (in the case of multiple loadbalancers per class). Note that we can't just say each class of loadbalancers has a single loadbalancer in it, so that picking a class boils down to picking a specific loadbalancer, because of cloud provider limitations. Nor can we say picking a specific loadbalancer picks the class, because a loadbalancer might have limited capacity (i.e. don't force a user to pick 172.12, which might be maxed out, when they just want class=gce).

@ramr

ramr Aug 26, 2015

Member

An alternative may be to allow users to provide some hints as to capacity requirements [ala bandwidth, concurrent-sessions, SLA-level as an example] with the ingress path, and the system could best-fit the class/LB to use - could do this via annotations as well potentially. Note, this is not going to solve all cases, as it could be a decision made at a point-in-time when, say, the balancer had capacity, and the reality at some future time might be that the cup runneth over!! But that could well be addressed by increasing capacity.

@bprashanth

bprashanth Aug 31, 2015

Member

Yeah, we could schedule to the appropriate loadbalancer based on resource requests. good point.
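
A sketch of how such hints could ride along as annotations on the ingress path; the annotation keys here are made up for illustration and are not part of the proposal.

```yaml
apiVersion: v1
kind: IngressPath
metadata:
  name: l7ingress
  annotations:
    # hypothetical capacity hints a controller could use to best-fit a
    # class/loadbalancer
    ingress.alpha.kubernetes.io/expected-bandwidth: "100M"
    ingress.alpha.kubernetes.io/concurrent-sessions: "10000"
spec:
  host: www.example.com
```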

+* Remove tlsMode and create a new type of secret(s).
+* Don't use a secret, embed the TLSConfig in the IngressPoint.
+
+Regardless of how we implement it, we should probably treat tlsmode as a request and not a directive. Even if the user provides a secret the admin may choose to not expose it, or have a policy that controls that.

@ramr

ramr Aug 26, 2015

Member

👍

+
+### How does public DNS work?
+
+Most of this is TBD. With the current model, if someone specifies a hostname the loadbalancer controllers assume they know what they're doing. If there are 2 IngressPoints with the same hostname and url endpoints, one of them will win. This is just like the overlapping labels rc problem. Since this is ultimately a policy and security decision that admins make, having replaceable/plugin controllers with each one choosing a policy might suffice.

@ramr

ramr Aug 26, 2015

Member

That would work - or it could be a claim-like model maybe? Having it managed via policies/plugins is good as I can think of certain environments (ala multi-tenant hosting), where one might not want the same hostname to be claimed across different ingress points.

@smarterclayton

smarterclayton Sep 2, 2015

Contributor

We should formalize two models. One, the admin chooses which items to pull in directly (via labels, namespace selection, or manual criteria). Two, the user creates the route and someone attempts to bind it someplace where it can be unique.

I'm starting to move the OpenShift router into the direction of each router functioning like a kubelet - updating the route with information about the DNS name the router chose to expose for the service. A user who wants a specific DNS name may have to live with not getting it, or getting a second one. Either way, the router will report that alongside the route.

@bprashanth

bprashanth Sep 8, 2015

Member

I'm starting to move the OpenShift router into the direction of each router functioning like a kubelet - updating the route with information about the DNS name the router chose to expose for the service. A user who wants a specific DNS name may have to live with not getting it, or getting a second one. Either way, the router will report that alongside the route.

Correct me if this doesn't translate to: If you specify a name the routelet assumes you own it, otherwise it assumes you don't really care and makes a best effort.

@bprashanth

bprashanth Sep 8, 2015

Member

Actually it doesn't, in your model I can ask for a dns name and get another one. That's a little confusing, why not just leave the dns name untouched or fail? (sort of like we're doing with loadbalancer ip #13005, or like service external ip)

This comment has been minimized.

@smarterclayton

smarterclayton Sep 8, 2015

Contributor

Yes, and it also handles duplicates (oldest route wins, and overlapping
paths in the same namespace are ok, but not ok across namespaces). I think
it's generally useful for a routelet that has a wildcard DNS name assigned
to it to always assign a generated name, and also optionally assign a host
name.

On Tue, Sep 8, 2015 at 11:42 AM, Prashanth B notifications@github.com
wrote:

In docs/proposals/loadbalancing.md
#12827 (comment)
:

+Each IngressPoint specifies a single secret and a tls mode. Its upto the loadbalancer controller to handle the tls mode. For example, the gce controller will only support TLS termination for 1.1 because each IngressPoint gets a new loadbalancer, and GCE doesn't allow multiple certs per IP. With the current design a user can create the wrong type of secret for the wrong mode. They wouldn't know till they try creating the Ingress path. Validating a simple key:value secret is also hard.
+
+Some alternatives to the proposed tls mode handling:
+* Remove tlsMode and create a new type of secret(s).
+* Don't use a secret, embed the TLSConfig in the IngressPoint.
+
+Regardless of how we implement it, we should probably treat tlsmode as a request and not a directive. Even if the user provides a secret the admin may choose to not expose it, or have a policy that controls that.
+
+### What triggers the creation of a new loadbalancer controller, should this be exposed to users?
+
+In the current proposal, the kube-controller-manager is started up with a --loadbalancer-controllers flag that dictates which controllers to startup. This is obviously sub-optimal. A better solution would be --loadbalancer-controllers=gce-lbc-pod,haproxy-lbc-pod..., then this just boils down to creating some rcs. Taking this a step further, we could have a resource to allow dynamically adding more loadbalancer controllers to the cluster.
+
+### How does public DNS work?
+
+Most of this is TDB. With the current model, if someone specifies a hostname the loadbalancer controllers assume they know what they're doing. If there are 2 IngressPoints with the same hostname and url endpoints, one of them will win. This is just like the overlapping labels rc problem. Since this is ultimately a policy and security decision that admins make, having replaceable/plugin controllers with each one choosing a policy might suffice.

I'm starting to move the OpenShift router into the direction of each
router functioning like a kubelet - updating the route with information
about the DNS name the router chose to expose for the service. A user who
wants a specific DNS name may have to live with not getting it, or getting
a second one. Either way, the router will report that alongside the route.

Correct me if this doesn't translate to: If you specify a name the
routelet assumes you own it, otherwise it assumes you don't really care and
makes a best effort.



@smarterclayton

smarterclayton Sep 8, 2015

Contributor

> Actually it doesn't, in your model I can ask for a dns name and get another one

spec.host in our model is a request to take www.example.com. If you can't get it, your status would be a condition / reason combo "Accepted: False", "Reason: DuplicateHostName". The router is free to give you one or more DNS names that are never duplicate, that you won't know up front. I don't want that logic to be in the apiserver, I want it to be in the routelet, but yes we would indicate the host couldn't be assigned.
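
A minimal sketch of what that write-back could look like in status, with hypothetical condition and field names (nothing here is settled by this thread):

```yaml
# Hypothetical status written back by the routelet / loadbalancer controller
# when the requested host is already claimed by another route.
status:
  conditions:
  - type: Accepted
    status: "False"
    reason: DuplicateHostName
  # The router is still free to hand out a generated, non-conflicting name:
  ingress:
  - host: route-7f3a.apps.example.com
```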


@k8s-merge-robot


Collaborator

k8s-merge-robot commented Aug 27, 2015

Labelling this PR as size/L

+kind: IngressPoint
+metadata:
+ name: l7ingress
+ type: l7


@thockin

thockin Sep 8, 2015

Member

type is not part of standard metadata


@bprashanth

bprashanth Sep 8, 2015

Member

Point noted. Current implementation doesn't include type because an ingress point is always type=l7. When we have l4 and l7 ingress points we'll have to record type, probably in the spec.


+ name: l4ingress
+ type: network
+spec:
+ host: www.example.com


@thockin

thockin Sep 8, 2015

Member

What does host, tlsMode, secret mean for L4? I don't quite understand - we don't do any of that today.


@bprashanth

bprashanth Sep 8, 2015

Member

On GCE they would just default to passthrough and no secret, because it doesn't really support anything else.

I can have haproxy choose different backends based on hostname via SNI switching at L4 (http://blog.haproxy.com/2012/04/13/enhanced-ssl-load-balancing-with-server-name-indication-sni-tls-extension/), just to show that it isn't totally pointless to include tlsMode, hostname, and secret there. This is obviously an advanced use case.
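
Purely for illustration, an L4 entry leaning on SNI might look roughly like the sketch below; `tlsMode: Passthrough` and the exact field layout are assumptions, not part of the current proposal:

```yaml
# Hypothetical L4 ingress entry: the loadbalancer never terminates TLS,
# it just picks a backend from the SNI hostname in the ClientHello.
apiVersion: v1
kind: IngressPoint
metadata:
  name: l4-sni-ingress
spec:
  type: network
  host: www.example.com   # matched against SNI, not an HTTP Host header
  tlsMode: Passthrough    # no secret needed; certs live in the backend pods
```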


@thockin

thockin Sep 9, 2015

Member

Interesting. I just want to be wary of features that don't have wide support.


@bprashanth

bprashanth Sep 9, 2015

Member

I think what's important at L4 that we don't have today is associating multiple services with a single IP, on different ports. So if we said the claim is just an ip:port (if you don't specify either you get something random; if you specify both it might get rejected), we could have the service reference the claim directly and move type into the claim. The service can still exist if the claim is rejected.

This lets us defer the L4 ingress point till someone actually asks for it.
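
A rough sketch of that claim idea, with a made-up `NetworkClaim` kind and field names invented purely for illustration:

```yaml
# Hypothetical ip:port claim. Omitting ip or port means "allocate one for me";
# specifying both may be rejected if the pair is already taken. A Service
# would reference the claim, and can exist even if the claim is rejected.
apiVersion: v1
kind: NetworkClaim
metadata:
  name: frontend-claim
spec:
  ip: 104.197.3.4   # optional
  port: 443         # optional
  type: LoadBalancer
status:
  phase: Bound      # or Rejected
```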


@thockin

thockin Sep 9, 2015

Member

Agree we should not focus on L4 right now, but we should have a plausible
plan to converge.

What's next for this PR? Do you want to revamp it, or should we shift to
the prototype PR for review?

On Tue, Sep 8, 2015 at 5:29 PM, Prashanth B notifications@github.com
wrote:

In docs/proposals/loadbalancing.md
#12827 (comment)
:

+- Latency
+- Throughput of loadbalancer is the bottleneck
+- Cloud provider limitations
+- Want SSL passthrough or SNI switch for multi-certs
+- Need client IP and cannot set loadbalancer proxy as default gateway
+
+Point being, we can't reliably infer all these use case with Service.Type=LoadBalancer or the absence of a UrlMap in the IngressPoint (because they might want L7 for session persistence). Instead, anything serving ingress traffic needs an ingress path (which is why it isn't called urlMap). An L4 ingress path might look like:
+
+```yaml
+apiVersion: v1
+kind: IngressPoint
+metadata:

I think what's important at L4 that we don't have today is associating
multiple services to a single ip, and different ports. So if we said, the
claim is just an ip:port, if you don't specify either you'll get something
random, if you specify both it might get rejected, we can have the service
reference the claim directly and move type into the claim. The service
can still exist if the claim is rejected.

Lets us defer the l4 ingress point till someone actually asks for it.



@bprashanth

bprashanth Sep 9, 2015

Member

Let's go to the prototype. In the meanwhile I'll split just the resource out, add validation, tests, etc., and upload a non-WIP. I'm planning to do this in 3 chunks that roughly translate to the commits in the prototype.


@smarterclayton

smarterclayton Sep 9, 2015

Contributor

L4 via SNI is the most requested ingress feature we get from enterprises looking to deploy sets of apps. Until L4 is part of ingress we can't really adapt our routes to the model. So whether or not we implement it in the prototype, we should be sure to include it in the discussion.


+```
+
+type: network with a pathmap is a validation error.
+This adds a slight burden on users, since what was previously a one line change to expose a service now requires a new resource.


@thockin

thockin Sep 8, 2015

Member

For compat reasons and practical reasons we have to continue to support the existing API. Do you expect this is a special case or that type=LoadBalancer will imply an IngressPoint of a particular flavor?


@bprashanth

bprashanth Sep 8, 2015

Member

For now, I'm thinking of l7 and l4 independently. L4 will be through service type, l7 through ingress point (because you need a pathmap for l7 anyway).
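
In other words, under that split L4 keeps riding on the existing Service field while L7 goes through the new resource; a sketch of the pair (the IngressPoint fields are illustrative, the Service half is today's API):

```yaml
# L4: today's mechanism, unchanged.
apiVersion: v1
kind: Service
metadata:
  name: foosvc
spec:
  type: LoadBalancer
  selector:
    app: foo
  ports:
  - port: 80
---
# L7: the proposed resource, which needs a pathmap anyway.
apiVersion: v1
kind: IngressPoint
metadata:
  name: foo-l7
spec:
  host: www.example.com
  paths:
  - path: /foo
    service:
      name: foosvc
      namespace: default
      port: 80
```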


@smarterclayton

smarterclayton Sep 8, 2015

Contributor

Or you at least need a port map for L7 from a single or set of hosts. Either way it's a map.

On Tue, Sep 8, 2015 at 2:48 PM, Prashanth B notifications@github.com
wrote:

In docs/proposals/loadbalancing.md
#12827 (comment)
:

+kind: IngressPoint
+metadata:

+ name: l4ingress
+ type: network
+spec:
+ host: www.example.com
+ tlsMode: Termination
+ secret: foosecret
+status:
+ loadBalancer:
+  ingress:
+  - ip: 104.1..
+```

+type: network with a pathmap is a validation error.
+This adds a slight burden on users, since what was previously a one line change to expose a service now requires a new resource.

For now, I'm thinking of l7 and l4 independently. L4 will be through
service type, l7 through ingress point (because you need a pathmap for l7
anyway).



@thockin


Member

thockin commented Sep 8, 2015

I am explicitly NOT doing an API review at this point. There are a lot of API
conventions we have in place and I don't want to litter this with a hundred
nit-picky comments on field names and such. I am doing this as a design review
ONLY.

earlier you said multiple hostnames per balancer were out of scope, but you are
showing multiple hosts in your yaml. Maybe show v1.1 yaml and "hypothetical
eventual" yaml. I don't want to rework the doc endlessly, I just want it to a)
reflect plans and b) not mislead.

Doc scrub: you seem to use 'IngressPoint' and 'ingress path' interchangeably.
Please don't :)

Overall I think this works. I think there's a lot of detail to fine-tune and we need to solidify the compat story and help people transition. We also need to make sure we are DEAD CLEAR on which components and actors are responsible for which actions and that we have a plausible alignment with storage.

@smarterclayton


Contributor

smarterclayton commented Sep 8, 2015

"alignment with storage" because?


@thockin

Member

thockin commented Sep 9, 2015

"alignment with storage" because?

Becuase I see a lot of common pattern between storage and networking
here, and I want to make that conscious and consistent where possible.

Generically, we seem to be trending towards something like:

                         +----------+
          +--create----> | producer |
          |              +----------+
     +----+----+               ^
     | manager |---bind------> :
     +----+----+               v
          |              +----------+
          +---watch----> |  claim   | <--+
                         +----------+    |
                               ^         |
                               |         +---- user-created
                               |         |
                         +----------+    |
                         | consumer | <--+
                         +----------+

Storage in particular is:

                       +----------+
        +--create----> |    PV    | <------ auto-provisioned
        |              +----------+         or admin-created
   +----+----+               ^
   | PVCtrlr |---bind------> :
   +----+----+               v
        |              +----------+
        +---watch----> | PVClaim  | <--+
                       +----------+    |
                             ^         |
                             |         +---- user-created
                             |         |
                       +----------+    |
                       |   pod    | <--+
                       +----------+

But LB is:

                       +----------+
        +--create----> |    LB    | <------- abstract
        |              +----------+
   +----+----+               ^
   | LBCtrlr |               :
   +----+----+               :
        |              +-----+----+
        +---watch----> | ingress  | <------- user-created
                       +----------+

If Claims appear in LB space, you start to see the pattern.

There are a number of open questions about the overall model that we
should discuss wrt commonality, but perhaps not in this PR.

                       +----------+
        +--create----> |    LB    | <------ auto-provisioned
        |              +----------+
   +----+----+               ^
   | LBCtrlr |---bind------> :
   +----+----+               v
        |              +----------+
        +---watch----> | IngClaim | <--+
                       +----------+    |
                             ^         |
                             |         +---- user-created
                             |         |
                       +----------+    |
                       | ingress  | <--+
                       +----------+
@smarterclayton


Contributor

smarterclayton commented Sep 9, 2015

On Sep 8, 2015, at 8:27 PM, Tim Hockin notifications@github.com wrote:

"alignment with storage" because?

Because I see a lot of common pattern between storage and networking
here, and I want to make that conscious and consistent where possible.

Generically, we seem to be trending towards something like:

                         +----------+
          +--create----> | producer |
          |              +----------+
     +----+----+               ^
     | manager |---bind------> :
     +----+----+               v
          |              +----------+
          +---watch----> |  claim   | <--+
                         +----------+    |
                               ^         |
                               |         +---- user-created
                               |         |
                         +----------+    |
                         | consumer | <--+
                         +----------+

Storage in particular is:

                       +----------+
        +--create----> |    PV    | <------ auto-provisioned
        |              +----------+         or admin-created
   +----+----+               ^
   | PVCtrlr |---bind------> :
   +----+----+               v
        |              +----------+
        +---watch----> | PVClaim  | <--+
                       +----------+    |
                             ^         |
                             |         +---- user-created
                             |         |
                       +----------+    |
                       |   pod    | <--+
                       +----------+

But LB is:

                       +----------+
        +--create----> |    LB    | <------- abstract
        |              +----------+
   +----+----+               ^
   | LBCtrlr |               :
   +----+----+               :
        |              +-----+----+
        +---watch----> | ingress  | <------- user-created
                       +----------+

If Claims appear in LB space, you start to see the pattern.

We are not as interested in exclusive claims here - instead, I think we
expect there to be multiple LB spaces internally with different rule sets
and exposure dimensions. The direction we're accelerating in is more
flexibility in the LB backends, more of a request model for most ingress,
and more specification of what each ingress means. Instead of being very
literal "I want an LB for this service", it's "I want to be exposed using
these characteristics, and oh by the way my admin told me to set these
annotations and define these labels".

There are a number of open questions about the overall model that we
should discuss wrt commonality, but perhaps not in this PR.

                       +----------+
        +--create----> |    LB    | <------ auto-provisioned
        |              +----------+
   +----+----+               ^
   | LBCtrlr |---bind------> :
   +----+----+               v
        |              +----------+
        +---watch----> | IngClaim | <--+
                       +----------+    |
                             ^         |
                             |         +---- user-created
                             |         |
                       +----------+    |
                       | ingress  | <--+
                       +----------+



+type ServiceRef struct {
+ Name string
+ Namespace string
+ Port int64


@smarterclayton

smarterclayton Sep 9, 2015

Contributor

Can you add a comment indicating which port this is - is this the named port on the service (so that you have to look up the service to know what the endpoint port is) or is it the endpoint port in the endpoints list (the output)?


@a-robinson

a-robinson Sep 13, 2015

Member

Right, shouldn't this be an IntOrString to support either numeric or named ports?

Also, is Port required if the service only has one port?


@smarterclayton

smarterclayton Sep 13, 2015

Contributor

I think we should be explicit here - load balancers may want to only read
ingress and endpoints (and skip the service). Also load balancers may have
limitations on types of ports (doesn't support http with connection
upgrade) and so we should try to encourage an explicit desire from users.

The numbers of ports is potentially multiple, but for now requiring
separate ingress rules is not burdensome (especially when load balancers
can distinguish between which ingress they wish to serve).

On Sep 13, 2015, at 12:15 AM, Alex Robinson notifications@github.com
wrote:

In docs/proposals/loadbalancing.md
#12827 (comment):

+// IngressPointSpec describes the ingressPoint the user wishes to exist.
+type IngressPointSpec struct {
+ Host string
+ IngressPoint map[string][]Path
+}
+
+// IngressPointStatus describes the current state of an ingressPoint.
+type IngressPointStatus struct {
+ Address string
+}
+
+// ServiceRef is a reference to a single service:port.
+type ServiceRef struct {
+ Name string
+ Namespace string
+ Port int64

Right, shouldn't this be an IntOrString to support either numeric or named
ports?

Also, is Port required if the service only has one port?



@bprashanth

bprashanth Sep 14, 2015

Member

> Can you add a comment indicating which port this is - is this the named port on the service (so that you have to look up the service to know what the endpoint port is) or is it the endpoint port in the endpoints list (the output)?

It's the service port.

> Right, shouldn't this be an IntOrString to support either numeric or named ports?
> Also, is Port required if the service only has one port?

No and yes. It's https://github.com/kubernetes/kubernetes/blob/master/pkg/api/types.go#L1234.

> I think we should be explicit here - load balancers may want to only read ingress and endpoints (and skip the service). Also load balancers may have limitations on types of ports (doesn't support http with connection upgrade) and so we should try to encourage an explicit desire from users.

I think it's way more intuitive to create a service and plug that port into the ingress point. One shouldn't need to look up the endpoints; the context of this entire resource is service centric. Feel free to argue, these are just my thoughts.
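
To make that concrete, a path entry would name the service and the service port, roughly along these lines (the layout is illustrative; see the quoted Go types above):

```yaml
# The port is the port on the Service, not the container port; the
# loadbalancer controller resolves the actual endpoints through that service.
- path: /foo
  service:
    name: foosvc
    namespace: default
    port: 80
```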


@smarterclayton

smarterclayton Sep 14, 2015

Contributor

I'm somewhat biased on load balancers because of history. Customers / user
generally expected endpoints to be the target of the load balancer. There
are some weird cases - for instance, if with load balancing do I have to
add the port to my service in order to talk to an endpoint? Consider a pod
with two ports - one TLS and one HTTP. The TLS port is for internal
consumers, the HTTP ports are consumed by the load balancer because TLS is
imposed at the load balancer. Do I need to have HTTP port on the service?
Seems nice, but not truly required.

Is there anything magic about the service that we really need to connect
to? Why not just have load balancers coordinate with services via
endpoints only (and keep services just as the thing that generate
endpoints). That's a looser coupling than ingress -> service. It
discourages us from depending on the service (just the endpoints) directly,
instead encouraging us to stay separate. However, it might also encourage
duplication of the same data.

On Mon, Sep 14, 2015 at 1:38 PM, Prashanth B notifications@github.com
wrote:

In docs/proposals/loadbalancing.md
#12827 (comment)
:

+// IngressPointSpec describes the ingressPoint the user wishes to exist.
+type IngressPointSpec struct {
+ Host string
+ IngressPoint map[string][]Path
+}
+
+// IngressPointStatus describes the current state of an ingressPoint.
+type IngressPointStatus struct {
+ Address string
+}
+
+// ServiceRef is a reference to a single service:port.
+type ServiceRef struct {
+ Name string
+ Namespace string
+ Port int64

Can you add a commend indicating which port this is - is this the named
port on the service (so that you have to lookup the service to know what
the endpoint port is) or is it the endpoint port in the endpoints list (the
output)?

It's the service port.

Right, shouldn't this be an IntOrString to support either numeric or named
ports?
Also, is Port required if the service only has one port?

no and yes. It's
https://github.com/kubernetes/kubernetes/blob/master/pkg/api/types.go#L1234.

I think we should be explicit here - load balancers may want to only read
ingress and endpoints (and skip the service). Also load balancers may have
limitations on types of ports (doesn't support http with connection
upgrade) and so we should try to encourage an explicit desire from users.

I think it's way more intuitive to create a service and plug that port
into the ingress point. One shouldn't need to lookup the endpoints, the
context of this entire resource is service centric. Feel free to argue,
these are just my thoughts.



@bprashanth

bprashanth Sep 14, 2015

Member

> Is there anything magic about the service that we really need to connect to?

Unfortunately now there is: the nodeport. Won't work on clouds without that (or creating a separate static ip per service, which is costly because we're charging per ip).

> Customers / user generally expected endpoints to be the target of the load balancer.

I feel like we're already teaching people to think in terms of sets with labels, and we already have a concept of sets of endpoints, the service, so why revert that?

> There are some weird cases - for instance, if with load balancing do I have to add the port to my service in order to talk to an endpoint? Consider a pod with two ports - one TLS and one HTTP. The TLS port is for internal consumers, the HTTP ports are consumed by the load balancer because TLS is imposed at the load balancer. Do I need to have HTTP port on the service? Seems nice, but not truly required.

What is the common case here? I feel like we should optimize for that to just work, and document the side cases. I had thought people would first create a service, play around with it, then expose it to the world through an ingresspoint. If something goes wrong you can always delete the ingresspoint, and you still have an internal service to debug.


@smarterclayton

smarterclayton Sep 15, 2015

Contributor

On Sep 14, 2015, at 3:07 PM, Prashanth B notifications@github.com wrote:

In docs/proposals/loadbalancing.md
#12827 (comment):

+// IngressPointSpec describes the ingressPoint the user wishes to exist.
+type IngressPointSpec struct {
+ Host string
+ IngressPoint map[string][]Path
+}
+
+// IngressPointStatus describes the current state of an ingressPoint.
+type IngressPointStatus struct {
+ Address string
+}
+
+// ServiceRef is a reference to a single service:port.
+type ServiceRef struct {
+ Name string
+ Namespace string
+ Port int64

Is there anything magic about the service that we really need to connect to?

Unfortunately now there is, the nodeport. Won't work on clouds without that
(or creating a seperate static ip per service, which is costly because
we're charging per ip).

Customers / user generally expected endpoints to be the target of the load
balancer.

I feel like we're already teaching people to think in terms of sets with
labels, and we already have a concept of sets of endpoints, the service,
why revert that?

There
are some weird cases - for instance, if with load balancing do I have to
add the port to my service in order to talk to an endpoint? Consider a pod
with two ports - one TLS and one HTTP. The TLS port is for internal
consumers, the HTTP ports are consumed by the load balancer because TLS is
imposed at the load balancer. Do I need to have HTTP port on the service?
Seems nice, but not truly required.

What is the common case here? I feel like we should optimize for that to
just work, and document the side cases. I had thought people would first
create a service, play around with it, then expose it to the world through
an ingresspoint. Something goes wrong you can always delete the
ingresspoint, and you still have an internal service to debug.

I'm talking more from the ingress ports perspective. Services are nice,
but not required. And most serious load balancers will go direct to
endpoints even if you are using node ports. In fact, the load balancer
itself is the one making the decision, because it may be cluster internal
instead of external.

I'm ok with service as the indirection, I just don't think it's the happy
path once an LB is in real use. Should LBs support dynamic reallocation of
ports when the service changes? Is it up to the LB? I think ingress can
assume endpoints always, and service for now, but I could see having
endpoints decoupled from services at some point.



@thockin

thockin Sep 16, 2015

Member

I do think that Service is the construct here, not Pods.


@thockin

thockin Sep 16, 2015

Member

And I do think named ports are a thing we should keep supporting.


@smarterclayton

smarterclayton Sep 16, 2015

Contributor

Agree on named ports - it's also a convenient shorthand for conventions
around protocols which an L7 load balancer approves of (http, https, tls,
wss, etc).

On Wed, Sep 16, 2015 at 12:17 AM, Tim Hockin notifications@github.com
wrote:

In docs/proposals/loadbalancing.md
#12827 (comment)
:

+// IngressPointSpec describes the ingressPoint the user wishes to exist.
+type IngressPointSpec struct {
+ Host string
+ IngressPoint map[string][]Path
+}
+
+// IngressPointStatus describes the current state of an ingressPoint.
+type IngressPointStatus struct {
+ Address string
+}
+
+// ServiceRef is a reference to a single service:port.
+type ServiceRef struct {
+ Name string
+ Namespace string
+ Port int64

And I do think named ports are a thingwe should keep supporting.



@eparis eparis referenced this pull request in kubernetes/contrib Sep 12, 2015

Closed

githubmunger: label PRs based on certain files touched #79

+ name: barsvc
+ namespace: default
+ port: 80
+ "bar.example.com"


@a-robinson

a-robinson Sep 13, 2015

Member

Is this supposed to be here?


@bprashanth

bprashanth Sep 14, 2015

Member

yes? it's:

domain.tld:
   subdomain1
        path: svc
   subdomain2
        path: svc
   ...

Are you asking about the "example.com" part? Yep, that's redundant; it was brought up above and I'm just moving it into hostname and validating that a single ingresspoint has one hostname.tld.
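
Concretely, the shape under discussion is roughly a host-to-paths map like the sketch below (service names are placeholders, and per a later comment the map is being turned into a list):

```yaml
# One ingresspoint, one hostname.tld; subdomains fan out to services.
host: example.com
paths:
  foo.example.com:
  - path: /
    service:
      name: foosvc
      namespace: default
      port: 80
  bar.example.com:
  - path: /
    service:
      name: barsvc
      namespace: default
      port: 80
```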


+
+```go
+// IngressPoint represents a point for ingress traffic.
+type IngressPoint struct {


@a-robinson

a-robinson Sep 13, 2015

Member

Just a naming comment, but shouldn't these be named IngressPoints/IngressPointsSpec/IngressPointsStatus, given that the spec includes multiple IngressPoints?


@bprashanth

bprashanth Sep 14, 2015

Member

Hmm, they're actually the same ip+host, just different subdomains. I am running out of names that don't have more specific technical connotations though, so I need to get creative. Ideas welcome.


@smarterclayton


smarterclayton Sep 13, 2015

Maybe I missed this upthread, but I thought we have no external maps in our API? I vaguely recall asking this question but can't find it, so apologies for being redundant if so.

@bprashanth


Owner

bprashanth replied Sep 14, 2015

I'm changing it to a list: kubernetes#12825 (comment)
Development is a little scattered atm because I have this broader-scope proposal so we don't get myopic, and a couple of external contributors helping with the first-cut PR, so thanks for being persistent about important things.

@smarterclayton

smarterclayton Sep 15, 2015

@bgrant0607

Member

bgrant0607 commented Sep 17, 2015

Ref #561

+
+## Motivation
+
+The proposed api changes are largely motivated by the confusion surrounding kubernetes ingress and loadbalancing witnessed on [user forums](#appendix). Users tend to reason about traffic that originates outside the cluster at the application layer. The current kubernetes api lacks L7 support alltogether, and has a very opionated networking model. This leads to an impedance mismatch high enough to drive them to port brokering.


@bgrant0607

bgrant0607 Sep 17, 2015

Member

typo: opinionated


@thockin


Member

thockin commented Jan 18, 2016

Status on this?

@bprashanth


Member

bprashanth commented Jan 18, 2016

I think it will be easier to design this piecemeal via bugs like #19497 and eventually check in an explanatory design doc. I'm leaving this open so people who search for L7 will find it and look up all referenced issues. Let me know if anyone prefers otherwise.

@thockin


Member

thockin commented Jan 19, 2016

I think it's noisy to leave open :)

