
Documentation on ingress #436

Closed
dsyer opened this issue May 3, 2019 · 31 comments
Labels
kind/documentation Improvements or additions to documentation

Comments


dsyer commented May 3, 2019

I saw some reference to ingress in the introductory blog (https://rancher.com/blog/2019/2019-02-26-introducing-k3s-the-lightweight-kubernetes-distribution-built-for-the-edge/), but nothing in the README. Maybe there's another source of documentation that I somehow missed (README is good though).

I can create a LoadBalancer service in my k3s cluster, but it never gets an external IP. I don't know if that's a bug or something I did wrong.

The answer is probably DNS or something. But all I did to get the cluster working was docker-compose up, which is awesome, so it's a shame if there are hidden hoops to jump through to make ingress work.


dsyer commented May 3, 2019

Possibly relevant?

$ kubectl get all --namespace=kube-system 
...
NAME                      READY     UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   0/1       1            0           118m
deployment.apps/traefik   0/1       1            0           113m


dsyer commented May 3, 2019

Update: I got it working after I found this in the README https://github.com/rancher/k3s#service-load-balancer. It wasn't clear, however, that port 80 was actually already in use (by traefik itself?). So even if I wasn't explicitly using it myself, it was always doomed to fail in a service spec.

I guess I was using the wrong search term. Could there maybe be an example LoadBalancer YAML there or something? Here's an example that worked for me

kind: Service
apiVersion: v1
metadata:
  name: doubler
  labels:
    app: doubler
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 32323
    targetPort: 8080
  selector:
    app: doubler

where "doubler" is the name of the deployment I want to serve requests for.


lentzi90 commented May 6, 2019

Normally you would use an Ingress for HTTP traffic to be able to expose multiple services through a single IP. In this case the ingress controller is the single entry point to Kubernetes, and that is why traefik is using port 80.

If you have multiple nodes in the cluster you would still be able to create a LoadBalancer service, since it could use the IP of some other node. Another option is to use an alternative load balancer implementation such as MetalLB.


dsyer commented May 6, 2019

Why is Traefik using port 80 though? What does it do with it? I feel like I’m missing something (and the docs are really thin here).


lentzi90 commented May 6, 2019

Sorry for not explaining better the first time.
Traefik is the (default) ingress controller for k3s. In order to do its job it scans for Ingress objects in the cluster. Once it finds one, it creates a mapping from the specified hostname/path in the Ingress to the backend Service and makes sure that any incoming request going to that hostname/path ends up at the proper place.

Now, Ingresses are made for HTTP and HTTPS traffic, which means ports 80 and 443 respectively.
This is why traefik exposes itself on ports 80 and 443 with a Service of type LoadBalancer. It expects to make some services available on those ports. Before any services are created, though, it serves the default backend for any incoming requests (basically a 404 page). This backend is used as a fall-through if the hostname/path doesn't match anything else.
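A quick way to see that default backend in action (a sketch against a running k3s cluster; the IP placeholder must be replaced with the EXTERNAL-IP that kubectl reports for the traefik service):

```shell
# Find the IP the traefik LoadBalancer service was given
kubectl get svc traefik --namespace=kube-system

# With no matching Ingress rules, any request falls through to the
# default backend, which answers with a 404
curl -i http://<traefik-external-ip>/
```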

Let's look at an example to make it a bit more concrete:

We have two services in Kubernetes: service-a and service-b. Now we want to expose them to the internet as service-a.example.com and service-b.example.com. You could change the type of the services to LoadBalancer and edit your DNS settings to match, but this is a waste of IP addresses, not to mention cumbersome if you have plenty of services. So we will use Ingresses instead.

With two ingresses mapping service-a.example.com -> service-a and service-b.example.com -> service-b we can reuse the same IP for both services. Leave the services as type ClusterIP. The IP address in this case would be the IP of traefik since it will do the routing. We can also use a wildcard DNS entry to resolve *.example.com to that IP and be done with it. You can now add as many services as you like without touching the DNS settings.
A request going to service-a.example.com would be sent to traefik (since this is how the DNS would be configured). Traefik would look at it and see that the hostname matches a rule provided by an Ingress in kubernetes and this rule would tell it to send the request on to service-a.

This is not the only feature of Ingresses. They are also helpful for encrypting traffic with TLS and for managing various settings in a unified way instead of separately for each service.

To make things confusing, not all ingress controllers work in the same way. For example, if you use GKE, you will notice that each Ingress gets its own IP. But let's keep this to k3s shall we 😄

I hope this clears things up 😄


dsyer commented May 8, 2019

Thanks, that's almost got me to the point of actually understanding something - and it would be really useful to add more to the k3s docs. I thought if I just curl <traefik-external-ip> I should get a 404 (doesn't work yet)? I'm also still missing how the service-a.example.com -> service-a mapping gets done. RTFM on Traefik? Maybe a tiny little YAML example in the README would help?


lentzi90 commented May 8, 2019

Ah, sorry, I keep assuming familiarity with kubernetes. I'm not sure how much of the kubernetes documentation should be included in this repo but I'll leave that decision for someone else.

Anyway, the beauty of Ingresses is that you don't need to know much about the controller as long as it is there and functioning. It doesn't matter if you use nginx, traefik or haproxy, you just specify the configuration as an Ingress object in kubernetes.

For the example with service-a and service-b it would look something like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: service-a.example.com
    http:
      paths:
      - backend:
          serviceName: service-a
          servicePort: 80
  - host: service-b.example.com
    http:
      paths:
      - backend:
          serviceName: service-b
          servicePort: 80

The above was shamelessly stolen and adapted from here.


dsyer commented May 8, 2019

I had something like that, except I stole it from the traefik docs. Still not working (not even a 404, just "Connection refused"). So I'm missing something, or my cluster is subtly hosed. The traefik and coredns deployments are still showing "Ready: 0/1", but I don't see any hints in the kubectl describe, so I don't know if that is broken or not. It has always been that way.

UPDATE: I deleted the cluster and the persistent volume and started from scratch. This time traefik and coredns came up ready out of the box. Not sure why it didn't the first time. All working now. Thanks for your help.


rcarmo commented May 21, 2019

I'm having inconsistent results - I have a similar YAML ingress spec that works in one cluster but not in another, so anything like an end-to-end example would be great to suss out any kinks.


rcarmo commented Oct 10, 2019

ok, so this sort of works:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-red
  labels:
    app: node-red
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-red
  template:
    metadata:
      labels:
        app: node-red
    spec:
      containers:
      - name: node-red
        image: insightful/node-red:slim # my WIP custom build of Node-RED
        imagePullPolicy: Always
        env:
        - name: PUID
          value: "1000"
        - name: PGID
          value: "1000"
        - name: ADMIN_AUTH
          value: "admin:REDACTED"
        ports:
        - containerPort: 1880
        volumeMounts:
        - mountPath: /data
          name: data-volume
      volumes:
      - name: data-volume
        hostPath:
          path: /srv/state/node-red # /srv is a shared mount
          type: Directory
---
apiVersion: v1
kind: Service
metadata:
  name: node-red
spec:
  selector:
    app: node-red
  ports:
  - protocol: TCP
    name: web
    port: 80
    targetPort: 1880
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: node-red-ingress
spec:
  rules:
  - host: hostname.foobar
    http:
      paths:
      - backend:
          serviceName: node-red
          servicePort: 80

There is a fair amount of weirdness, though:

  • Port 443 is open too, with a self-signed certificate
  • I can't for the life of me figure out how to get ACME to work (even with the examples on the traefik docs)
  • The traefik dashboard port also seems to be open but 404s

Anyone got a better config?


rcarmo commented Nov 23, 2019

Circling back on this since I've finally gotten it to work on 1.0.0 on my Raspberry Pi cluster (arm32) and it will certainly be useful to people:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-red
  labels:
    app: node-red
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-red
  template:
    metadata:
      labels:
        app: node-red
    spec:
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"
      containers:
      - name: node-red
        image: insightful/node-red:slim
        imagePullPolicy: Always
        env:
        - name: PUID
          value: "1000"
        - name: PGID
          value: "1000"
        ports:
        - containerPort: 1880
        volumeMounts:
        - mountPath: /data
          name: data-volume
      volumes:
      - name: data-volume
        hostPath:
          path: /srv/state/node-red
          type: Directory
---
apiVersion: v1
kind: Service
metadata:
  name: node-red
spec:
  selector:
    app: node-red
  ports:
  - protocol: TCP
    port: 80
    targetPort: 1880
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: master.lan
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: master.lan
    http:
      paths:
      - path: /
        backend:
          serviceName: node-red
          servicePort: 80

  • No ACME, though, just a self-signed certificate. Would love to get that working so that I can use k3s outside the LAN.

@arashkaffamanesh

@lentzi90 thanks so much for the brilliant explanation, but a single IP for an ingress is a single point of failure (SPOF). And I think the combination of Ingress and MetalLB is the right way to avoid the SPOF, right?

@lentzi90

@arashkaffamanesh thanks! MetalLB can indeed be used together with the ingress controller to get fail over functionality. The single point of failure problem is not necessarily solved though, since you probably just push it to the router instead.

Anyway, if you want to use MetalLB with the ingress controller, you will need to

  1. install and configure MetalLB (you will need at least one IP that MetalLB can "grab")
  2. configure the ingress controller to be exposed through a Service of type LoadBalancer (default in k3s is to expose it on host ports directly)

With that, the ingress controller will get an "external" IP from MetalLB. The Ingress would work as normal, but if one node is having problems MetalLB can make sure traffic is not going there.
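For step 1, a minimal MetalLB layer 2 configuration (in the ConfigMap style MetalLB used at the time) might look like the sketch below; the address range is an assumption and must be a free range on your network:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # IPs MetalLB may "grab"
```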

@arashkaffamanesh

@lentzi90 thanks so much!
This great discussion encouraged me to write 2 posts about it, and I hope someone finds them useful:
https://blog.kubernauts.io/k3s-with-metallb-on-multipass-vms-ac2b37298589
https://blog.kubernauts.io/k3s-with-k3d-and-metallb-on-mac-923a3255c36e

I think in the meanwhile MetalLB with a BGP router might be the right solution to address the SPOF issue.

@brianpclab

I securely access my cluster from outside my home network using the setup below.

All of the information to do this is on the internet, but here's a brief summary:

  • DDNS between my Google domain and the ASUS router client ensures that the domain is aware of my router's WAN IP address.
  • ASUS router LAN address range 192.168.0.0/24.
  • TRENDnet WAN connected to an ASUS LAN port, and TRENDnet configured with a static WAN IP address.
  • TRENDnet LAN address range 10.1.0.0/24.
  • ASUS router DMZ and port forwarding 80 and 443 to the TRENDnet static IP address.
  • Set up the Pi using the latest Raspbian, give it a static IP in the 10.1.0.0/24 range, connect it to the TRENDnet, and install Nginx.
  • TRENDnet DMZ set to the Pi's static IP.
  • Configure Nginx on the Pi and set up Let's Encrypt.
  • Configure Nginx to forward HTTP to HTTPS and reverse proxy to the traefik LoadBalancer service IP address on port 80.
  • Use Ingress on k3s to route requests to your services.

My master and nodes (all Raspberry Pis) are on the TRENDnet, but I have an NFS share on my WD PR4100 NAS in the 192.168.0.0/24 subnet. This let me set up NFS PersistentVolumes and PersistentVolumeClaims that are used by my Pods.

You will need a computer on the TRENDnet LAN for kubectl, but this setup is secure and works great for me.

@marcfiu

marcfiu commented Jan 24, 2020

@rcarmo Thank you for taking the time to circle back. Your contribution was helpful.

I am also trying to get 443 to work via traefik, particularly to a self-signed backend. I managed to get it "work" by:

  1. amending traefik's deployment object to include the "--insecureSkipVerify=true" argument
  2. changing the servicePort from 80 to 443
  3. leveraging traefik treating incoming 443 traffic with the same rules as for port 80 traffic

I put "work" into quotes, as #2 and #3 are just wrong in general. Maybe this can be made to work with the proper ingress object definition; however, I am wondering whether it is simply a limitation with the 1.7.9 version traefik I'm using with the vanilla k3s setup.
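For reference, step 1 amounts to adding the flag to the traefik container's args (edit with kubectl -n kube-system edit deploy traefik); a sketch of the relevant fragment, with the existing args elided:

```yaml
containers:
- name: traefik
  image: rancher/library-traefik:1.7.19
  args:
  # ...existing args unchanged...
  - --insecureSkipVerify=true   # skip backend cert verification (lab use only)
```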

@rcarmo

rcarmo commented Jan 24, 2020

Thanks! You’ve gone further than I did.

I would really like k3s to be fully "batteries included" in this regard. Of course I can use other ingress controllers, but that's definitely not the point :)

@lentzi90

@marcfiu what you are diving into is (to me at least) a completely different matter than ingress documentation. You are talking about encrypting the traffic inside the cluster between pods. I'd say that it should be a separate issue.

A general solution, if you want end-to-end encryption inside the cluster, is to use a service mesh such as Istio. Otherwise you will need to make sure all pods trust each other's certificates. This is the problem you have: traefik does not trust the self-signed certificate that you are using, so it refuses to accept it unless you tell it to skip the verification. You could instead make the CA available to traefik and tell it to trust certificates signed by it.

I do not understand why you are saying that changing the port from 80 to 443 is "wrong in general". I would say the opposite. If you are encrypting the traffic, you are not using HTTP, which would normally use port 80, but rather HTTPS, which uses port 443. So you should indeed change the port to 443 or all applications will be confused.

@marcfiu

marcfiu commented Jan 27, 2020

@lentzi90: indeed I was conflating two separate issues and completely agree with you on how encrypted traffic inside the cluster between pods should be handled.

What I want to do is set up an ingress object where port 80 traffic to traefik goes to servicePort 80 on the backend, while port 443 traffic to traefik goes to servicePort 443 on the backend. I created the ingress object shown below, for which both port 80 & 443 traffic goes to servicePort 443. Maybe it boils down to my missing something simple in the annotations?

Here's what I tried to do:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test1
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: test1.lab.local
    http:
      paths:
      - path: /
        backend:
          serviceName: servicea
          servicePort: 443

Let's assume for the moment that there are proper certs (as I am cheating by using traefik's --insecureSkipVerify=true arg).

@brandond

brandond commented Jan 27, 2020

My understanding is that for two sets of port mappings (two routes, essentially) you need two ingresses. I'm pretty new to traefik, but that's how I've been doing it with Traefik 2. I am having a hard time figuring out from the docs how to do the same thing with 1.7 though.

@marcfiu

marcfiu commented Jan 28, 2020

@brandond Thanks for the feedback. I got to this issue thread as it was called "Documentation on ingress", with the hope of learning the right ingress definitions to handle both port 80 & 443. I tried this and that, which ended up not working and led me to guess that it might be a limitation/bug in traefik 1.7. Your comment seems to confirm that traefik 1.7 may not let one do this. However, it would be useful to know how to define the relevant ingress objects regardless of it not working in traefik 1.7.

@lentzi90

@marcfiu you cannot do this with a single ingress, as @brandond mentioned. It is not a limitation in traefik so much as in the ingress object itself.
Unfortunately I don't think it is even possible to do with two ingresses, since the two objects would result in conflicting configurations. You cannot disable port 80 for one and have it open for the other as far as I know. How would traefik know which one the incoming request was for? Should it block the request or accept it and send it to port 80?

Say you have something like this:

Ingress1: http://foo.bar.com -> foo-service:80
Ingress2: https://foo.bar.com -> foo-service:443

The problem is that you cannot create an ingress for HTTPS only. You can add the TLS configuration to an HTTP ingress, and then maybe add an annotation to redirect 80 -> 443. But then you never get to port 80 on the service; it would be a direct conflict with the other ingress!

For these situations I would say that you should use a Service with type=LoadBalancer instead. Then you get a separate IP and can use both ports directly. If you want to use an ingress, use a single port (either 80 or 443, not both). The ingress can take care of terminating tls and/or redirecting from one port to the other.
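A sketch of that alternative, exposing both ports through one Service (names and target ports are placeholders, following the foo-service example above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: LoadBalancer
  selector:
    app: foo
  ports:
  - name: http
    port: 80
    targetPort: 8080   # plain-HTTP port of the pod
  - name: https
    port: 443
    targetPort: 8443   # TLS port of the pod; traffic passes through unterminated
```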

@brandond

With Traefik 2 you can specify in the ingress configuration what entrypoint to apply the route to. I have one ingress connected to the web (http) entrypoint that routes traffic to port 80, and another ingress connected to the websecure (https) entrypoint that routes traffic to port 443. It sure looks like Traefik 1 does not let you restrict ingress configurations to a specific entrypoint, which is I think what you want to do.

@marcfiu

marcfiu commented Jan 28, 2020

@brandond Glad to hear you got this working with Traefik 2, as this type of thing is something one can certainly do with Nginx. When you mention "ingress configuration", are you referring to how you configured it directly in Traefik 2 or via a Kubernetes ingress object? If the latter, mind sharing a sample of what you have working?

@brandond

brandond commented Jan 28, 2020

Traefik 2 is started with the following entrypoint configuration:

          - "--entryPoints.web.address=:80"
          - "--entryPoints.websecure.address=:443"
          - "--entryPoints.traefik.address=:9000"
          - "--entryPoints.metrics.address=:8082"

I then have the following resources:

---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: my-app-web
  namespace: service-ns
  labels:
    app: my-app
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`my.host.name`)
      kind: Rule
      services:
        - kind: Service
          name: my-service
          namespace: service-ns
          passHostHeader: true
          port: 80
          scheme: http
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: my-app-websecure
  namespace: service-ns
  labels:
    app: my-app
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`my.host.name`)
      kind: Rule
      services:
        - kind: Service
          name: my-service
          namespace: service-ns
          passHostHeader: true
          port: 443
          scheme: https
  tls:
    certResolver: le

@malikbenkirane

malikbenkirane commented May 23, 2020

@brandond thank you for this example, but I see from my "out of the box" k3s that the traefik container running is actually rancher/library-traefik:1.7.19. Would you mind sharing with us how you safely upgraded to 2.2?

@brandond

I didn't really upgrade, I started the cluster with --disable traefik and then deployed the Traefik 2.2 helm chart.
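For anyone wanting to reproduce that, a sketch (the Helm repo URL is the one Traefik documents; chart values are omitted):

```shell
# Start the server with the bundled Traefik left out
k3s server --disable traefik

# Then install Traefik 2 from its Helm chart
helm repo add traefik https://helm.traefik.io/traefik
helm repo update
helm install traefik traefik/traefik --namespace kube-system
```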

@malikbenkirane

malikbenkirane commented May 24, 2020

Also, according to this (though I need to double-check, since I haven't looked for option parsers in the script yet)
https://github.com/rancher/k3s/blob/a013f7dda57180249220ea3da9dfecd9a1c0fdde/scripts/download#L21
it looks like traefik is already deployed with a helm chart. In case the cluster wasn't started with --disable traefik it's then possible to run

helm uninstall -n kube-system traefik

and then deploy Traefik 2.2 rather than
https://github.com/rancher/k3s/blob/a013f7dda57180249220ea3da9dfecd9a1c0fdde/scripts/download#L8

Further notes and instructions for deploying Traefik 2 can be found here:

https://docs.traefik.io/v2.0/getting-started/install-traefik/

@threedliteguy

threedliteguy commented Jun 9, 2020

I am using traefik with k3s here:
traefik LoadBalancer 10.43.28.97 10.0.0.105 80:32757/TCP,443:31149/TCP 8d
and while setting up the Longhorn ui ingress per their instructions I ran into:

kubectl describe ingress longhorn-ingress -n longhorn-system
...
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)

while curl shows a page with:

<script src="https://as.alipayobjects.com/g/component/??console-polyfill/0.2.2/index.js,media-match/2.0.2/media.match.min.js"></script>

This seems to be a security issue

@ShylajaDevadiga

@dsyer This is expected behavior: a LoadBalancer service will not get an ExternalIP unless --node-external-ip is passed while creating the cluster. You can see it in the lines below.
https://github.com/rancher/k3s/blob/master/pkg/servicelb/controller.go#L227-L239
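In other words, a sketch (the IP is a placeholder for the node's public address):

```shell
# Pass the external IP when starting the server so the service load
# balancer can publish it as the LoadBalancer's external IP
k3s server --node-external-ip=203.0.113.10
```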

@dereknola

This issue is quite old. K3s has shipped with Traefik v2 for over 2 years now, and we have a section in the docs on using the load balancer: https://docs.k3s.io/networking#service-load-balancer. Closing as OOD.
