IngressRoute does not honor kubernetes.io/ingress.class #720

Closed
pimvanpelt opened this issue Oct 2, 2018 · 13 comments

@pimvanpelt

What steps did you take and what happened:
Running two Contour pools, one called contour-prod and the other contour-staging, each setting the appropriate --ingress-class-name flag.

Two problems that signal that the annotation is not honored:

Problem 1

Setting annotations.kubernetes.io/ingress.class: contour-staging on an IngressRoute applies it to all running Contour instances, not just the one selected.
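For reference, a minimal sketch of the annotated IngressRoute used here (the backend service name and port are assumptions; the FQDN and namespace match the outputs below):

apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: nginx-staging
  namespace: nginx
  annotations:
    kubernetes.io/ingress.class: contour-staging
spec:
  virtualhost:
    fqdn: k8s.ipng.nl
  routes:
    - match: /
      services:
        - name: nginx   # assumed backend service
          port: 80      # assumed port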

$ kubectl -n contour get svc
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
contour-prod      LoadBalancer   10.109.26.45   194.126.235.48   80:30113/TCP,443:32570/TCP   3h42m
contour-staging   LoadBalancer   10.108.80.15   192.168.8.128    80:31054/TCP,443:30366/TCP   3h45m

$ kubectl get ingressroute --all-namespaces
NAMESPACE   NAME            FQDN          TLS SECRET   FIRST ROUTE   STATUS   STATUS DESCRIPTION
nginx       nginx-staging   k8s.ipng.nl                /             valid    valid IngressRoute

This response is expected; the IngressRoute is annotated for contour-staging:

$ curl -I -H "Host: k8s.ipng.nl" 192.168.8.128
HTTP/1.1 200 OK
server: envoy
date: Tue, 02 Oct 2018 22:23:09 GMT
content-type: text/html
content-length: 612
last-modified: Tue, 03 Jul 2018 13:27:08 GMT
etag: "5b3b79ac-264"
accept-ranges: bytes
x-envoy-upstream-service-time: 5

This response is NOT expected; the IngressRoute is not annotated for contour-prod, yet it is served there:

$ curl -I -H "Host: k8s.ipng.nl" 194.126.235.48
HTTP/1.1 200 OK
server: envoy
date: Tue, 02 Oct 2018 22:24:15 GMT
content-type: text/html
content-length: 612
last-modified: Tue, 03 Jul 2018 13:27:08 GMT
etag: "5b3b79ac-264"
accept-ranges: bytes
x-envoy-upstream-service-time: 4

Problem 2

Creating two IngressRoutes, one annotated contour-staging and the other contour-prod, yields a conflict because both Contour instances picked up both routes.

$ kubectl get ingressroute --all-namespaces
NAMESPACE   NAME            FQDN          TLS SECRET   FIRST ROUTE   STATUS    STATUS DESCRIPTION
nginx       nginx-prod      k8s.ipng.nl                /             invalid   fqdn "k8s.ipng.nl" is used in multiple IngressRoutes: nginx/nginx-prod, nginx/nginx-staging
nginx       nginx-staging   k8s.ipng.nl                /             invalid   fqdn "k8s.ipng.nl" is used in multiple IngressRoutes: nginx/nginx-prod, nginx/nginx-staging

What did you expect to happen:

In problem 1, I expect the curl against the contour-prod service to fail.
In problem 2, I expect that:

  • the curl against the contour-prod service yields a response from nginx-prod, and the curl against the contour-staging service yields a response from nginx-staging;
  • the creation of the two IngressRoutes (one for nginx-prod -> contour-prod and one for nginx-staging -> contour-staging) succeeds without a conflict.

Anything else you would like to add:

This problem is specific to the IngressRoute CRD. Normal Ingress objects do honor the annotation as documented, and I can successfully tie nginx-prod -> contour-prod and nginx-staging -> contour-staging as expected.
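For contrast, a minimal sketch of the equivalent Ingress that is filtered correctly (backend service name and port are assumptions):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-staging
  namespace: nginx
  annotations:
    kubernetes.io/ingress.class: contour-staging
spec:
  rules:
    - host: k8s.ipng.nl
      http:
        paths:
          - path: /
            backend:
              serviceName: nginx   # assumed backend service
              servicePort: 80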

Environment:

  • Contour version:
    docker-pullable://gcr.io/heptio-images/contour@sha256:8ef6d4f97ad678fab75fde095ee924b83881ade5acef37c80e485a02f12e9a37

  • Kubernetes version: (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T16:55:41Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

  • Kubernetes installer & version:
    N/A.

  • Cloud provider or hardware configuration:
Bare metal; MetalLB load balancer.

  • OS (e.g. from /etc/os-release):
    Ubuntu Bionic LTS.

@amoskyler

I can confirm this issue. I was able to replicate it.

@davecheney (Contributor)

davecheney commented Oct 3, 2018 via email

@amoskyler

Ah, the distinction between IngressRoute and Ingress annotations did not occur to me; makes sense.

My use case:
I use ingress class to isolate traffic for different edge load balancers: an internal VPC load balancer and an external load balancer for public internet traffic. Each ingress is responsible for the routing of its corresponding ELB.
Without the ability to distinguish which controller should handle specific routes, my public-facing ELB/ingress will serve traffic to internal services. There may be a better way to handle this, but I haven't been able to come up with anything viable other than running two clusters.

I currently solve for this using the Ingress resource with the ingress-class annotation.

Ideally, routes assigned an ingress class would be included only in the DAG of that specific controller and ignored by all others.

@pimvanpelt (Author)

My usecase, which works with Ingress but not with IngressRoute:

  • Developers have full-stack replicas (environments) of their application: dev, staging, prod. There can be several of these, each in its own namespace. While the application undergoes development, folks tinker with their dev environment.
  • When we are ready to cut a release, we put it on the staging environment and run QA/load/chaos/fuzz/integration tests, etc. This is intrusive and pushes the application into load/failure modes. Once the tests pass, we mark the staging configs as ready.
  • We then roll out the release to the production environment.

It is useful for our dev, staging, and canary/prod environments to each have their own ingress, so we have three classes:

  1. contour-dev; which exposes the dev stack
  2. contour-staging; which is the release candidate
  3. contour-prod; which balances N%:(100-N)% across canary:prod
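A sketch of how these three pools might map to the --ingress-class-name flag discussed above (invocation details assumed; remaining flags elided):

$ contour serve --ingress-class-name=contour-dev ...
$ contour serve --ingress-class-name=contour-staging ...
$ contour serve --ingress-class-name=contour-prod ...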

Specifically in the dev environment, new mappings/routes/virtualhosts are created, and the stack breaks while folks work out a golden configuration that is promoted to staging.

The staging environment in particular, which runs load tests and fuzz tests, must not interfere with production traffic.

Operator access to the prod environment is restricted to release engineering / SRE.

Using multiple ingress controllers, we can select IP addresses which are internal for dev/staging, and external for prod.

We accomplish this with three ingress controllers today. In moving to Contour/Envoy, I'd like to carry this model forward, but I'm open to other ways to achieve traffic isolation between stacks and to expose subsets of our services per ingress controller.

@stevesloka (Member)

@pimvanpelt so you're using the same cluster for all environments?

One way you could solve this without needing three controllers running is to utilize delegation. Configure Contour to only allow root IngressRoutes in a specific namespace (this piece is optional, but safer, since it restricts who can create root IngressRoutes). Have your vhosts (root IngressRoutes) configured there for each environment and delegate them to the matching namespace per environment.

For example:

  • dev.com --> Namespace dev
  • stage.com --> Namespace stage
  • prod.com --> Namespace prod

By doing this, any IngressRoute created within each environment can only utilize paths off of the Vhost that is delegated to them.

The only downside, which may be a deal breaker for you, is that implementing it this way mixes the traffic for all the environments.
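A minimal sketch of this delegation model (the namespace names come from the example above; the root namespace, route names, and backend service are assumptions):

# Root IngressRoute for prod.com, living in the restricted roots namespace
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: prod-root
  namespace: ingress-roots   # assumed value of --ingressroute-root-namespaces
spec:
  virtualhost:
    fqdn: prod.com
  routes:
    - match: /
      delegate:
        name: prod-app
        namespace: prod
---
# Delegate IngressRoute owned by the prod team; note: no virtualhost block
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: prod-app
  namespace: prod
spec:
  routes:
    - match: /
      services:
        - name: app   # assumed backend service
          port: 80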

@pimvanpelt (Author)

pimvanpelt commented Oct 4, 2018

@stevesloka Yes, we are using the same cluster.

Traffic isolation is a mild concern, one we don't have today because we're using three Ingress pools. I haven't gotten enough experience with IngressRoute yet, so I'll play around with your proposal. Envoy is not the performance bottleneck for us, so my concern about traffic isolation is a nice-to-have.

It seems that network restriction is not addressed by your suggestion: my one Envoy pool will have a public IP address, exposed to folks who have no business on dev/staging, unless Contour can configure Envoy to accept traffic to a vhost only from a CIDR range?

@amoskyler

amoskyler commented Oct 4, 2018

(Quoting @stevesloka's delegation suggestion from above.)

I would still like to push for full ingress-class separation. I do not do namespace-based traffic separation: it is common for some services to serve internal traffic and others to serve external traffic, even within the same namespace. This is accomplished by running two Services with distinct upstream load balancers and distinct ingress controllers, all within the same environment.

The lack of this feature blocks my ability to roll IngressRoutes out to my infrastructure. Are there any specific reasons to avoid supporting this feature? Perhaps there is another way to implement this paradigm using existing features?

@davecheney (Contributor)

Here are a few things that I think are possible for 0.7 which don't paint us into a corner in the future.

  1. Contour already supports the kubernetes.io/ingress.class annotation, but this is intended for running Contour in conjunction with other ingress controllers, so you can trial contour on your current cluster without betting the farm.
  2. contour serve supports a --ingress-class-name=INGRESS-CLASS-NAME flag which is intended to be used to allow contour to impersonate another ingress controller. I added this mainly to work around kube-lego's hard coded list of known ingress controllers.
  3. Using an annotation on IngressRoute resources is probably the way to go, this does feel like an annotation sort of meta data type configuration parameter.
  4. Whatever we choose, we need to be mindful of the --ingressroute-root-namespaces=INGRESSROUTE-ROOT-NAMESPACES flag; I want to avoid confusing (and thus hard to document and harder to internalise) interactions between those flags.

Here is what I propose

a. We add a new annotation contour.heptio.com/ingress.class which is recognised on both ingress and ingressroute objects. #739 does this.
b. We repurpose --ingress-class-name=INGRESS-CLASS-NAME to also change the name that the contour.heptio.com/ingress.class annotation matches on. #739 does this.
c. At some point in the future the --ingress-class-name=INGRESS-CLASS-NAME flag should move to a config map. This does not need to happen for 0.7.
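A sketch of how (a) and (b) would pair up in practice (deployment wiring assumed; the annotation key is the one #739 introduces):

$ contour serve --ingress-class-name=contour-prod ...

apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: nginx-prod
  namespace: nginx
  annotations:
    contour.heptio.com/ingress.class: contour-prod   # matched against --ingress-class-name
spec:
  virtualhost:
    fqdn: k8s.ipng.nl
  routes:
    - match: /
      services:
        - name: nginx   # assumed backend service
          port: 80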

@stevesloka (Member)

Could we have a label selector on namespaces, so each instance of contour would only select resources in namespaces that match a label selector?

@davecheney (Contributor)

That's a good idea, but I'm concerned with the interaction between namespaces and labels. If we say "this contour looks for label quux across all namespaces in the cluster" then that's basically an ingress.class annotation in a cheap suit. If we let people nominate a namespace:label combination, then where do we stick that data, on the namespace object?

@stevesloka (Member)

Yup, it would work like Services do: labels on a Namespace determine the ingress class, and Contour just looks at the labels when a resource is discovered. New namespaces would be added to the ingress class automatically. This keeps the IngressRoute objects generic and not environment-specific.

Just thinking out loud, looking for a way to not require all the annotations.
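A purely hypothetical sketch of what such a namespace label could look like (the label key is invented for illustration; this selector approach was not implemented):

apiVersion: v1
kind: Namespace
metadata:
  name: prod
  labels:
    contour.heptio.com/ingress-class: contour-prod   # hypothetical label key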

@mauilion

mauilion commented Oct 11, 2018

Another use case related to #410

When we have "default TLS" wired in, it would be helpful to consider its usage in this case as well.

I know of a few folks that use a wildcard pattern and delegate the host side of the fqdn to teams. For example:

Attract *.app.com to the ingress controller, presumably with a wildcard DNS record. Set --ingress-class-name=app-com on the ingress controller and provide a default cert that has *.app.com as a SAN.

Then inform the team that, when creating a new Ingress resource, they can follow that FQDN pattern.
Make a new Ingress resource like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: app-com
  name: web
spec:
  tls:
  - hosts:
    - team1.app.com
  rules:
    - host: team1.app.com
      http:
        paths:
          - backend:
              serviceName: web
              servicePort: 80
            path: /

This enables TLS for the Ingress resource, and since no secret is defined, the controller uses the "default" cert and secures the vhost with *.app.com.

I may want to service other domains this way as well.

I like @stevesloka's idea about label matching on namespaces, but this pattern would break that, as in most cases I would want to "call out" which ingress controller is wired up to satisfy this request.

I can't wait to see what schemes we can cook up to solve this with ingressroute/contour :)
Y'all are awesome!

@davecheney (Contributor)

Closed in #739
