Accessing behind a reverse proxy #8053
Comments
Where is the API service path mapping? The UI won't work without access to the API. |
I've successfully enabled the API path, and it appears to be functioning, although the implementation is quite intriguing. I'm curious: why is it necessary to forcefully create an ingress? |
We will update the helm deployment to allow disabling the ingress and configuring custom software that will take care of exposing Dashboard. Right now, the ingress is used to expose the api/web containers together on a single domain; otherwise the UI won't be able to access the API. We will also look into hiding the API in the future, but solving this via configuration was the easiest approach at this time.
This is a technical limitation of how the web container works. We would need to get rid of our scratch base image and run a forwarding proxy inside the web container in order to hide the API. Right now the UI expects that the API will be exposed and accessible under the same domain on
We are using a self-signed certificate generated by cert-manager as the default. It was just easier to rely on third-party software to provide certificates that can be used to expose Dashboard. It can also easily be adapted to generate real certificates. We will add an option to disable cert-manager and provide a custom certificate. It will be up to the user to decide how to do this in the end.
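As a rough sketch of that kind of self-signed default (assuming a standard cert-manager installation; the issuer and certificate names here are illustrative, not necessarily what the chart uses):

```yaml
# Self-signed issuer that cert-manager can use to mint a certificate
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
# Certificate for the Dashboard ingress, stored in a Secret the ingress can reference
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: dashboard-tls
  namespace: kubernetes-dashboard
spec:
  secretName: dashboard-tls        # Secret consumed by the ingress `tls` section
  dnsNames:
    - dashboard.example.org        # assumed host
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
```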
The complexity of the whole installation has changed due to architecture changes in the Dashboard itself. If we want to be able to scale and properly support big clusters, we need to do that. In the future, we want to add support for a GraphQL API and informers, and split the API itself into more dedicated services. All of that will increase the complexity even further. |
@mbaykara I configured it like below in my environment (with a shared domain + different prefixes for microservices):

---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: kubernetes-dashboard-replacepathregex
namespace: default
spec:
replacePathRegex:
regex: /k8s(/|$)(.*)
replacement: /$2
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: kubernetes-dashboard
namespace: default
spec:
entryPoints:
- websecure
routes:
- match: Host(`site.example.org`) && PathPrefix(`/k8s`)
kind: Rule
services:
- name: kubernetes-dashboard-web
port: 8000
middlewares:
- name: kubernetes-dashboard-replacepathregex
- match: Host(`site.example.org`) && PathPrefix(`/k8s/api`)
kind: Rule
services:
- name: kubernetes-dashboard-api
port: 9000
middlewares:
  - name: kubernetes-dashboard-replacepathregex

I guess it could work with the config below too (if a dedicated domain is used):

---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: kubernetes-dashboard
namespace: default
spec:
entryPoints:
- websecure
routes:
- match: Host(`kube.example.org`)
kind: Rule
services:
- name: kubernetes-dashboard-web
port: 8000
- match: Host(`kube.example.org`) && PathPrefix(`/api`)
kind: Rule
services:
- name: kubernetes-dashboard-api
port: 9000 |
@kladiv In my specific use case, I have to configure it without DNS. I have done it as follows:

---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: dashboard-middleware
namespace: kubernetes-dashboard
spec:
replacePathRegex:
regex: "^/dashboard(/|$)(.*)"
replacement: "/${2}"
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: kubernetes-dashboard-ingressroutes
namespace: kubernetes-dashboard
spec:
routes:
- match: PathPrefix(`/`)
kind: Rule
middlewares:
- name: dashboard-middleware
namespace: kubernetes-dashboard
services:
- name: kubernetes-dashboard
namespace: kubernetes-dashboard
port: 8080 |
I am not an expert on ingress, but the following is working for me using the NGINX ingress controller:

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kubernetes-dashboard
namespace: kubernetes-dashboard
annotations:
cert-manager.io/cluster-issuer: letsencrypt-production
nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header Accept-Encoding "";
sub_filter '<base href="">' '<base href="/k3s/">';
sub_filter_once on;
rewrite ^(/k3s)$ \$1/ redirect;
rewrite "(?i)/k3s(/|$)(.*)" /\$2 break;
spec:
ingressClassName: nginx
tls:
- hosts:
- my.tld
secretName: my.tld-cert
rules:
- host: my.tld
http:
paths:
- path: /k3s(/|$)(.*)
pathType: ImplementationSpecific
backend:
service:
name: kubernetes-dashboard-web
port:
number: 8000
- path: /k3s/api(/|$)(.*)
pathType: ImplementationSpecific
backend:
service:
name: kubernetes-dashboard-api
port:
number: 9000 |
I was running into the same issue today. This is quite unexpected, and seems unwarranted to me, really. It'd be awesome to have an option that lets the web service access the API inside the cluster (pod-to-pod). For those using Traefik, this is what a minimal configuration for the Helm chart looks like:

---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: kubernetes-dashboard
spec:
entryPoints: [websecure]
routes:
- match: Host(`dashboard.domain.test`)
kind: Rule
services:
- name: kubernetes-dashboard-web
namespace: kubernetes-dashboard
port: web
scheme: http
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: kubernetes-dashboard-api
spec:
entryPoints: [websecure]
routes:
- match: Host(`dashboard.domain.test`) && PathPrefix(`/api`)
kind: Rule
services:
- name: kubernetes-dashboard-api
namespace: kubernetes-dashboard
port: api
scheme: http
|
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
This is still a problem. /remove-lifecycle stale |
Dynamic Angular app base href will be covered by #8735. It will now be enough to only rewrite the target in your ingress using i.e. |
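For illustration, a minimal target-rewrite setup of this kind with the NGINX ingress controller could look like the sketch below (host name, path prefix, and service name/port are assumptions; the annotation comes from upstream ingress-nginx):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    # Strip the /dashboard prefix before forwarding to the service
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - host: dashboard.example.org   # assumed host
      http:
        paths:
          - path: /dashboard(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: kubernetes-dashboard-web
                port:
                  number: 8000
```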
How is this related to not requiring a dedicated ingress at all? What we need is for the traffic to stay cluster-internal, without a second ingress route required. |
With the gateway in front of our containers, configured to route requests to specific containers based on the request path, having first-party support for subpaths would be extremely complicated now. I think being able to avoid configuration snippets and relying on a simple target rewrite in the ingress is a good middle ground. @georglauterbach, can you explain your use case? It should still be possible to solve it via configuration. |
No, I think that's not what I meant, really:
Is this solved? As far as I understand, it is not; we're just talking about some rewrites for the ingress, but I want to get rid of the ingress for the API completely. We need an option for pod-to-pod communication between the web and API pods. What just came to my mind: rewriting DNS queries so that traffic remains internal to the cluster could probably work as well? I will see whether that works. |
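As a sketch of that DNS idea (assuming default CoreDNS and the standard chart service names; the host name is hypothetical), a rewrite rule in the CoreDNS Corefile could map the public dashboard host to the in-cluster service:

```yaml
# Excerpt of the coredns ConfigMap in kube-system; the rewrite line is the addition
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # Resolve the external dashboard host to the in-cluster web service
        rewrite name dashboard.example.org kubernetes-dashboard-web.kubernetes-dashboard.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
    }
```

Note that this only redirects pod DNS lookups inside the cluster; clients outside the cluster would still resolve the public name.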
Ingress for the API is no longer needed as of the latest release. We now use the gateway to route the traffic, and it exposes the whole Dashboard as if it were a single container. Having pod-to-pod communication would heavily complicate the whole setup, since we now have more than just API and Web, and it would require too much effort to maintain both web and frontend compatibility with the other containers. |
EDIT: I see now; using Kong, I can simply point to the Kong proxy. This is nice and works quite well. |
EDIT: Works now, see below.

I tried the new version with a Traefik IngressRoute to the Kong proxy, but I only get a 500 error and no dashboard at all anymore. Are there more configuration options I didn't set correctly? Also, I'm quite new to Traefik, so I might have made an error in my router.

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: kubdashboard-ingressroute
namespace: kubernetes-dashboard
spec:
entryPoints: [websecure]
routes:
- kind: Rule
match: PathPrefix(`/kubernetes-dashboard`)
services:
- kind: Service
name: kubernetes-dashboard-kong-proxy
namespace: kubernetes-dashboard
port: kong-proxy
middlewares:
- name: stripkubdashboard
namespace: kubernetes-dashboard
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: stripkubdashboard
namespace: kubernetes-dashboard
spec:
stripPrefix:
prefixes:
- "/kubernetes-dashboard"
      forceSlash: true

An IngressRoute directly to the web + api services worked, but the login didn't. The new Kong proxy also looks much nicer to use. It might be because Traefik is terminating the TLS while Kong is still expecting TLS traffic?

EDIT: enabling HTTP on the Kong proxy fixed it:

kong:
  proxy:
    http:
      enabled: true |
Great to hear that you were able to solve it. HTTP access to the Dashboard is disabled in Kong by default, simply to prevent users from trying to access the Dashboard over plain HTTP. If you are using an HTTPS proxy in front of Kong, then it makes sense to enable HTTP there and keep HTTPS on the outer proxy only. |
I am not sure I understand all of the above because I don't use Traefik. Is Kong now a required component to make the dashboard work? Will I not be able to continue exposing both the /api and /web endpoints via the nginx ingress? |
There is a working example embedded in our chart. Take a look at our Makefile test target, the nginx dependency, and the ingress configuration. It can create an ingress that serves Dashboard under a different path than root. Kong is simply a gateway that connects all our containers together and exposes Dashboard on a single service endpoint. You don't have to worry about configuring the correct routes for every container; just configure your own ingress in front of the kong service. |
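For nginx users, an ingress in front of the Kong service could be as simple as the following sketch (host, TLS handling, and the kong-proxy service name/port are assumptions based on default chart values, not taken from the chart's actual example):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    # Kong terminates TLS itself by default, so speak HTTPS to the backend
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
    - host: dashboard.example.org    # assumed host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard-kong-proxy
                port:
                  number: 443
```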
I appreciate the feedback, but that's too convoluted for a setup that already consists of way more than just the dashboard. What I don't see, for example, is a way to make your new Kong setup work with an ingress that reuses an existing SSL certificate. I will probably have to look for another product now. Thanks. |
@floreks As the dashboard is an excellent product, I don't want to give up on it just because it currently can't use pre-existing secrets for ingress. Looking at your ingress manifest, would it be a lot to ask for you to include a conditional for the secretName here (line 53), in the same way you have a conditional right above for the ingressClassName? That way we could easily set the name of an existing cert in values.yaml while none of the features you have built and tested need changing.
|
Would you like to provide your own name for the TLS secret? I can add a way to do that. |
If that's all it takes for the deployment to realize an existing secret & certificate should be used for the dashboard instead of creating a new one (which would then trigger a request for a new certificate), then yes please, that's exactly what I would need. :) |
https://github.com/kubernetes/dashboard/releases/tag/kubernetes-dashboard-7.1.0

You can provide your own name via |
Thank you so much! I will test the new dashboard chart over the weekend and provide feedback in case of any issues. :) |
What happened?
I deployed the latest helm chart with the following my-values.yaml. I want to access the dashboard via
http://IP/
Traefik is running in my cluster as a reverse proxy.
I am getting a white browser screen.
Besides this, with
getting the same issue.
What did you expect to happen?
If I go to http://NODE_IP/ I should be able to see the Kubernetes Dashboard UI.
How can we reproduce it (as minimally and precisely as possible)?
Deploy with the values above and try to access the UI.
Anything else we need to know?
No response
What browsers are you seeing the problem on?
Chrome, Safari
Kubernetes Dashboard version
v7.0.0
Kubernetes version
1.26.0
Dev environment
No response