Documentation on ingress #436
Possibly relevant?
Update: I got it working after I found this in the README: https://github.com/rancher/k3s#service-load-balancer. It wasn't clear, however, that port 80 was actually already in use (by traefik itself?), so even though I wasn't explicitly using it myself, my service spec was always doomed to fail. I guess I was using the wrong search term. Could there maybe be an example

where "doubler" is the name of the deployment I want to serve requests for.
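A minimal sketch of what such an example could look like, assuming a Deployment named `doubler` whose pods listen on container port 8080 (the port and the `app: doubler` label are illustrative; ports 80/443 are already claimed by the bundled Traefik):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: doubler
spec:
  type: LoadBalancer
  selector:
    app: doubler        # assumes the doubler pods carry this label
  ports:
  - protocol: TCP
    port: 8080          # avoid 80/443, which the bundled Traefik already binds
    targetPort: 8080
```

With the k3s service load balancer, this should surface the service on port 8080 of a node IP.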
Normally you would use an Ingress for HTTP traffic to be able to expose multiple services through a single IP. In this case the ingress controller is the single entrypoint to kubernetes, and that is why traefik is using port 80. If you have multiple nodes in the cluster, you would still be able to create a LoadBalancer service, since it could use the IP of some other node. Another option is to use an alternative load balancer implementation such as MetalLB.
Why is Traefik using port 80, though? What does it do with it? I feel like I'm missing something (and the docs are really thin here).
Sorry for not explaining better the first time. Ingresses are made for HTTP and HTTPS traffic, which means ports 80 and 443 respectively. Let's look at an example to make it a bit more concrete. We have two services in kubernetes:

With two ingresses mapping

This is not the only feature of Ingresses. They are also helpful for encrypting traffic with TLS, and for managing various settings in a unified way instead of separately for each service. To make things confusing, not all ingress controllers work the same way. For example, if you use GKE, you will notice that each Ingress gets its own IP. But let's keep this to k3s, shall we 😄 I hope this clears things up 😄
Thanks, that's almost got me to the point of actually understanding something, and it would be really useful to add more to the k3s docs. I thought if I just
Ah, sorry, I keep assuming familiarity with kubernetes. I'm not sure how much of the kubernetes documentation should be included in this repo but I'll leave that decision for someone else. Anyway, the beauty of Ingresses is that you don't need to know much about the controller as long as it is there and functioning. It doesn't matter if you use nginx, traefik or haproxy, you just specify the configuration as an Ingress object in kubernetes. For the example with service-a and service-b it would look something like this:
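A minimal sketch of such an Ingress, assuming service-a and service-b each expose port 80 and should be reached through their own hostnames (the hostnames are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: service-a.example.com
    http:
      paths:
      - backend:
          serviceName: service-a
          servicePort: 80
  - host: service-b.example.com
    http:
      paths:
      - backend:
          serviceName: service-b
          servicePort: 80
```

The ingress controller (traefik in k3s) inspects the Host header of incoming requests on port 80 and routes each to the matching service.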
The above was shamelessly stolen and adapted from here.
I had something like that, except I stole it from the traefik docs. Still not working (not even a 404, just "Connection refused"). So I'm missing something, or my cluster is subtly hosed. The traefik and coredns deployments are still showing "Ready: 0/1", but I don't see any hints in the

UPDATE: I deleted the cluster and the persistent volume and started from scratch. This time traefik and coredns came up ready out of the box. Not sure why it didn't the first time. All working now. Thanks for your help.
I'm having inconsistent results - I have a similar YAML ingress spec that works in one cluster but not in another, so anything like an end-to-end example would be great to suss out any kinks.
ok, so this sort of works:

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-red
  labels:
    app: node-red
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-red
  template:
    metadata:
      labels:
        app: node-red
    spec:
      containers:
      - name: node-red
        image: insightful/node-red:slim  # my WIP custom build of Node-RED
        imagePullPolicy: Always
        env:
        - name: PUID
          value: "1000"
        - name: PGID
          value: "1000"
        - name: ADMIN_AUTH
          value: "admin:REDACTED"
        ports:
        - containerPort: 1880
        volumeMounts:
        - mountPath: /data
          name: data-volume
      volumes:
      - name: data-volume
        hostPath:
          path: /srv/state/node-red  # /srv is a shared mount
          type: Directory
---
apiVersion: v1
kind: Service
metadata:
  name: node-red
spec:
  selector:
    app: node-red
  ports:
  - protocol: TCP
    name: web
    port: 80
    targetPort: 1880
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: node-red-ingress
spec:
  rules:
  - host: hostname.foobar
    http:
      paths:
      - backend:
          serviceName: node-red
          servicePort: 80
```

There is a fair amount of weirdness, though:

Anyone got a better config?
Circling back on this since I've finally gotten it to work on 1.0.0 on my Raspberry Pi cluster (arm32), and it will certainly be useful to people:

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-red
  labels:
    app: node-red
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-red
  template:
    metadata:
      labels:
        app: node-red
    spec:
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"
      containers:
      - name: node-red
        image: insightful/node-red:slim
        imagePullPolicy: Always
        env:
        - name: PUID
          value: "1000"
        - name: PGID
          value: "1000"
        ports:
        - containerPort: 1880
        volumeMounts:
        - mountPath: /data
          name: data-volume
      volumes:
      - name: data-volume
        hostPath:
          path: /srv/state/node-red
          type: Directory
---
apiVersion: v1
kind: Service
metadata:
  name: node-red
spec:
  selector:
    app: node-red
  ports:
  - protocol: TCP
    port: 80
    targetPort: 1880
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: master.lan
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: master.lan
    http:
      paths:
      - path: /
        backend:
          serviceName: node-red
          servicePort: 80
```
@lentzi90 thanks so much for the brilliant explanation, but a single IP for an ingress is a Single Point of Failure (SPOF). And I think the combination of Ingress and MetalLB is the right way to avoid the SPOF, right?
@arashkaffamanesh thanks! MetalLB can indeed be used together with the ingress controller to get failover functionality. The single point of failure problem is not necessarily solved, though, since you probably just push it to the router instead. Anyway, if you want to use MetalLB with the ingress controller, you will need to

With that, the ingress controller will get an "external" IP from MetalLB. The Ingress would work as normal, but if one node is having problems, MetalLB can make sure traffic is not going there.
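As an illustration of the MetalLB side, a layer 2 address pool in the ConfigMap style used by MetalLB versions of that era might look like this (the address range is an assumption and must come from your local subnet):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # illustrative range on the LAN
```

The ingress controller's Service can then be switched to `type: LoadBalancer` so it picks up an address from this pool.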
@lentzi90 thanks so much! I think in the meantime MetalLB with a BGP router might be the right solution to address the SPOF issue.
I securely access my cluster from outside my home network using

All of the information to do this is on the internet, but a brief summary:

My master and nodes (all Raspberry Pis) are on the TRENDnet, but I have an NFS share on my WD PR4100 NAS in the 192.168.0.0/24 subnet. This let me set up NFS PersistentVolumes and PersistentVolumeClaims that are used by my pods. You will need a computer on the TRENDnet LAN for kubectl, but this setup is secure and works great for me.
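A sketch of such an NFS PersistentVolume and PersistentVolumeClaim (the NAS address, export path, and sizes are assumptions):

```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nas-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.0.10   # hypothetical NAS address in the 192.168.0.0/24 subnet
    path: /nfs/k8s         # hypothetical export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nas-nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""     # bind to the statically provisioned PV above
  resources:
    requests:
      storage: 10Gi
```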
@rcarmo Thank you for taking the time to circle back. Your contribution was helpful. I am also trying to get 443 to work via traefik, particularly to a self-signed backend. I managed to get it to "work" by:

I put "work" in quotes, as #2 and #3 are just wrong in general. Maybe this can be made to work with the proper Ingress object definition; however, I am wondering whether it is simply a limitation of the traefik 1.7.9 version I'm using with the vanilla k3s setup.
Thanks! You've gone further than I did. I would really like k3s to be fully "batteries included" in this regard. Of course I can use other ingress controllers, but that's definitely not the point :)
@marcfiu what you are diving into is (to me at least) a completely different matter than ingress documentation. You are talking about encrypting the traffic inside the cluster between pods. I'd say that it should be a separate issue. A general solution, if you want end-to-end encryption inside the cluster, is to use a service mesh such as Istio. Otherwise you will need to make sure all pods trust each other's certificates. This is the problem you have: traefik does not trust the self-signed certificate that you are using, so it is refusing to accept it unless you tell it to skip the verification. You could instead make the CA available to traefik and tell it to trust certificates signed by it.

I do not understand why you are saying that changing the port from 80 to 443 is "wrong in general". I would say the opposite: if you are encrypting the traffic, you are not using HTTP, which would normally use port 80, but rather HTTPS, which uses port 443. So you should indeed change the port to 443, or all applications will be confused.
@lentzi90: indeed, I was conflating two separate issues, and I completely agree with you on how encrypted traffic inside the cluster between pods should be handled. What I want to do is set up an ingress object where port 80 traffic to traefik goes to servicePort 80 on the backend, while port 443 traffic to traefik goes to servicePort 443 on the backend. The ingress object I created is shown below, and with it both port 80 and port 443 traffic goes to servicePort 443. Maybe it boils down to my missing something simple in the annotations?! Here's what I tried to do:
Let's assume for the moment that there are proper certs (as I am cheating by using traefik's
My understanding is that for two sets of port mappings (two routes, essentially) you need two ingresses. I'm pretty new to traefik, but that's how I've been doing it with Traefik 2. I am having a hard time figuring out from the docs how to do the same thing with 1.7, though.
@brandond Thanks for the feedback. I got to this issue thread as it was called "Documentation on ingress", with the hope of learning the right ingress definitions to do ports 80 & 443. I tried this and that, which ended up not working and led me to guess that it might be a limitation/bug in traefik 1.7. Your comment seems to confirm that traefik 1.7 may not let one do this. However, it would be useful to know how to define the relevant ingress objects regardless of it not working in traefik 1.7.
@marcfiu you cannot do this with a single ingress, as @brandond mentioned. It is not a limitation in traefik as much as in the Ingress object itself. Say you have something like this:

The problem is that you cannot create an ingress for https only. You can add the tls configuration to an http ingress, and then maybe add an annotation to redirect 80 -> 443. But then you never get to port 80 on the service; it would be a direct conflict with the other ingress! For these situations I would say that you should use a Service with
With Traefik 2 you can specify in the ingress configuration which entrypoint to apply the route to. I have one ingress connected to the web (http) entrypoint that routes traffic to port 80, and another ingress connected to the websecure (https) entrypoint that routes traffic to port 443. It sure looks like Traefik 1 does not let you restrict ingress configurations to a specific entrypoint, which I think is what you want to do.
@brandond Glad to hear you got this working with Traefik 2, as this type of thing is something one can certainly do with Nginx. When you mention "ingress configuration", are you referring to how you configured it directly in Traefik 2, or via a Kubernetes ingress object? If the latter, mind sharing a sample of what you have working?
Traefik 2 is started with the following entrypoint configuration:
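A typical Traefik 2 entrypoint configuration consistent with the `web` and `websecure` names used in these IngressRoutes (a sketch; the exact flags used here are an assumption) is:

```sh
traefik \
  --entrypoints.web.address=:80 \
  --entrypoints.websecure.address=:443 \
  --providers.kubernetescrd
```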
I then have the following resources:

```yaml
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: my-app-web
  namespace: service-ns
  labels:
    app: my-app
spec:
  entryPoints:
  - web
  routes:
  - match: Host(`my.host.name`)
    kind: Rule
    services:
    - kind: Service
      name: my-service
      namespace: service-ns
      passHostHeader: true
      port: 80
      scheme: http
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: my-app-websecure
  namespace: service-ns
  labels:
    app: my-app
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`my.host.name`)
    kind: Rule
    services:
    - kind: Service
      name: my-service
      namespace: service-ns
      passHostHeader: true
      port: 443
      scheme: https
  tls:
    certResolver: le
```
@brandond thank you for this example, but I see from my "out of the box" k3s that the traefik container running is actually
I didn't really upgrade; I started the cluster with
Also, according to this (though I need to double-check, because I haven't looked for option parsers in the script yet)

and then deploy Traefik 2.2 rather than

Further notes and instructions to deploy Traefik 2 I found here
I am using traefik with k3s here:

`kubectl describe ingress longhorn-ingress -n longhorn-system`

while curl shows a page with:

`<script src="https://as.alipayobjects.com/g/component/??console-polyfill/0.2.2/index.js,media-match/2.0.2/media.match.min.js"></script>`

This seems to be a security issue.
@dsyer This is expected behavior: a LoadBalancer service does not get an ExternalIP unless --node-external-ip is passed when creating the cluster. You can see it in the lines below.
This issue is quite old. K3s has shipped with Traefik v2 for over 2 years now, and we have a section in the docs on using the load balancer: https://docs.k3s.io/networking#service-load-balancer. Closing as out of date.
I saw some reference to ingress in the introductory blog (https://rancher.com/blog/2019/2019-02-26-introducing-k3s-the-lightweight-kubernetes-distribution-built-for-the-edge/), but nothing in the README. Maybe there's another source of documentation that I somehow missed (the README is good, though).

I can create a LoadBalancer service in my k3s cluster, but it never gets an external IP. I don't know if that's a bug or something I did wrong. The answer is probably DNS or something. But all I did to get the cluster working was docker-compose up, which is awesome, so it's a shame if there are hidden hoops to jump through to make ingress work.