
Accessing behind a reverse proxy #8053

Closed
mbaykara opened this issue Jul 13, 2023 · 25 comments · Fixed by #8735

Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@mbaykara

mbaykara commented Jul 13, 2023

What happened?

I deployed the latest Helm chart with the following my-values.yaml. I want to access the dashboard via http://IP/

nginx:
  enabled: false 
cert-manager:
  enabled: false 
metrics-server:
  enabled: false 
metricsScraper:
  enabled: false

Traefik is running in my cluster as the reverse proxy.

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kubernetes-dashboard-ingressroutes
  namespace: kubernetes-dashboard
spec:
  routes:
    - match: PathPrefix(`/`)
      kind: Rule
      services:
        - name: kubernetes-dashboard-web 
          namespace: kubernetes-dashboard
          port: 8000

I am getting a white browser screen.
Besides this, with

k port-forward svc/kubernetes-dashboard-web 8000 

I get the same issue.

What did you expect to happen?

If I go to http://NODE_IP/, I should be able to see the Kubernetes Dashboard UI.

How can we reproduce it (as minimally and precisely as possible)?

Deploy with the values above and try to access the UI.

Anything else we need to know?

No response

What browsers are you seeing the problem on?

Chrome, Safari

Kubernetes Dashboard version

v7.0.0

Kubernetes version

1.26.0

Dev environment

No response

@mbaykara mbaykara added the kind/bug Categorizes issue or PR as related to a bug. label Jul 13, 2023
@floreks
Member

floreks commented Jul 13, 2023

Where is the API service path mapping? The UI won't work without access to the API.
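
Something like this additional rule, alongside the existing PathPrefix(`/`) one, is what's missing (a sketch; service name and port assume the chart's defaults for this version):

    - match: PathPrefix(`/api`)
      kind: Rule
      services:
        - name: kubernetes-dashboard-api
          namespace: kubernetes-dashboard
          port: 9000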

@mbaykara
Author

mbaykara commented Jul 13, 2023

I've successfully enabled the API path, and it appears to be functioning, although the implementation is quite intriguing. I'm curious about a couple of things:

  • Why is it necessary to forcefully create an ingress?
  • Why is there a requirement to expose the /api endpoint?
  • Why force some issuer? It either forces a self-signed issuer or requires dummy values just to avoid creating resources.
  • The new helm chart seems to have more complexity due to the layered subdirectories. These changes have introduced more challenges compared to the previous versions.

@floreks
Member

floreks commented Jul 21, 2023

Why is it necessary to forcefully create an ingress?

We will update the helm deployment to allow disabling ingress and configuring custom software that will take care of exposing Dashboard. Right now, ingress is used in order to expose the api/web containers together on a single domain, otherwise the UI won't be able to access the API. We will also look into hiding the API in the future, but solving this via configuration was the easiest approach at this time.

Why is there a requirement to expose the /api endpoint?

This is a technical limitation of how the web container works. We would need to get rid of our scratch base image and run a forwarding proxy inside the web container in order to hide the API. Right now, the UI expects that the API will be exposed and accessible under the same domain on the /api path.

Why force some issuer? It either forces a self-signed issuer or requires dummy values just to avoid creating resources.

We are using a self-signed cert generated by cert-manager as the default. It was just easier to rely on third-party software to provide certificates that can be used to expose Dashboard. It can also easily be adapted to generate real certificates. We will add an option to disable cert-manager and provide a custom certificate. In the end, it will be up to the user to decide how to do this.
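
For reference, the kind of self-signed cert-manager Issuer this relies on looks roughly like the following (names are illustrative, not the chart's exact resources):

---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
  namespace: kubernetes-dashboard
spec:
  selfSigned: {}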

The new helm chart seems to have more complexity due to the layered subdirectories. These changes have introduced more challenges compared to the previous versions.

The complexity of the whole installation has changed due to architecture changes in the Dashboard itself. If we want to be able to scale and properly support big clusters, we need to do that. In the future, we want to add support for a GQL API, informers, and split the API itself into more dedicated services. All of that will increase the complexity even further.

@kladiv

kladiv commented Aug 17, 2023

@mbaykara I configured it like below in my environment (with a shared domain + different prefixes for microservices):

---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: kubernetes-dashboard-replacepathregex
  namespace: default
spec:
  replacePathRegex:
    regex: /k8s(/|$)(.*)
    replacement: /$2
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: kubernetes-dashboard
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`site.example.org`) && PathPrefix(`/k8s`)
      kind: Rule
      services:
        - name: kubernetes-dashboard-web
          port: 8000
      middlewares:
        - name: kubernetes-dashboard-replacepathregex
    - match: Host(`site.example.org`) && PathPrefix(`/k8s/api`)
      kind: Rule
      services:
        - name: kubernetes-dashboard-api
          port: 9000
      middlewares:
        - name: kubernetes-dashboard-replacepathregex

I guess it could work with the config below too (if a dedicated domain is used):

---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: kubernetes-dashboard
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`kube.example.org`)
      kind: Rule
      services:
        - name: kubernetes-dashboard-web
          port: 8000
    - match: Host(`kube.example.org`) && PathPrefix(`/api`)
      kind: Rule
      services:
        - name: kubernetes-dashboard-api
          port: 9000

@mbaykara
Author

@kladiv In my specific use case, I have to configure it without DNS. I have done it as follows:

---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: dashboard-middleware
  namespace: kubernetes-dashboard
spec:
  replacePathRegex:
    regex: "^/dashboard(/|$)(.*)"
    replacement: "/${2}"
--- 
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kubernetes-dashboard-ingressroutes
  namespace: kubernetes-dashboard
spec:
  routes:
    - match: PathPrefix(`/`)
      kind: Rule
      middlewares:
        - name: dashboard-middleware
          namespace: kubernetes-dashboard
      services:
        - name: kubernetes-dashboard
          namespace: kubernetes-dashboard
          port: 8080

@irreleph4nt

irreleph4nt commented Sep 16, 2023

I am not an expert on ingress, but the following works for me using v3.0.0-alpha0 behind ingress-nginx together with a cluster issuer for certificates. I serve the dashboard on the subpath /k3s. It should be rather easy to adapt this to serve the dashboard on a dedicated subdomain.

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/configuration-snippet: |
        proxy_set_header Accept-Encoding "";
        sub_filter '<base href="">' '<base href="/k3s/">';
        sub_filter_once on;
        rewrite ^(/k3s)$ \$1/ redirect;
        rewrite "(?i)/k3s(/|$)(.*)" /\$2 break;
spec:
  ingressClassName: nginx
  tls:
  - hosts:
      - my.tld
    secretName: my.tld-cert
  rules:
  - host: my.tld
    http:
        paths:
          - path: /k3s(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: kubernetes-dashboard-web
                port:
                  number: 8000
          - path: /k3s/api(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: kubernetes-dashboard-api
                port:
                  number: 9000

@georglauterbach

I ran into the same issue today. This is quite unexpected and seems unwarranted to me, really. It'd be awesome to have an option that lets the web service access the API inside the cluster (pod-to-pod).


For those using Traefik, this is what a minimal configuration for the Helm chart v7.0.0-alpha1 could look like (running inside the namespace kubernetes-dashboard):

---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute

metadata:
  name: kubernetes-dashboard

spec:
  entryPoints: [websecure]

  routes:
    - match: Host(`dashboard.domain.test`)
      kind: Rule

      services:
        - name: kubernetes-dashboard-web
          namespace: kubernetes-dashboard
          port: web
          scheme: http

---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute

metadata:
  name: kubernetes-dashboard-api

spec:
  entryPoints: [websecure]

  routes:
    - match: Host(`dashboard.domain.test`) && PathPrefix(`/api`)
      kind: Rule

      services:
        - name: kubernetes-dashboard-api
          namespace: kubernetes-dashboard
          port: api
          scheme: http

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 30, 2024
@georglauterbach

This is still a problem.

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 30, 2024
@floreks
Member

floreks commented Mar 5, 2024

The dynamic Angular app base href will be covered by #8735. It will now be enough to only rewrite the target in your ingress using e.g. the nginx.ingress.kubernetes.io/rewrite-target: /$2 annotation and /<path>(/|$)(.*) as the path. A configuration snippet will no longer be needed.
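
To illustrate, a minimal sketch of such an ingress (the host, path prefix, and kong proxy backend here are assumptions, and the port may differ per chart values; kong serves HTTPS by default, hence the backend protocol annotation):

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
    - host: my.tld
      http:
        paths:
          - path: /dashboard(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: kubernetes-dashboard-kong-proxy
                port:
                  number: 443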

@georglauterbach

georglauterbach commented Mar 6, 2024

How is this related to not requiring a dedicated ingress at all? What we need is for the traffic to stay cluster-internal, without a second ingress route required.

@floreks
Member

floreks commented Mar 6, 2024

With the gateway in front of our containers, configured to route requests to specific containers based on the request path, adding first-party support for subpaths would be extremely complicated now. I think being able to avoid configuration snippets and rely on a simple target rewrite in the ingress is a good middle ground.

@georglauterbach can you explain your use case? It should still be possible to solve via configuration.

@georglauterbach

No, I think that's not what I meant, really:

It'd be awesome to have an option that lets the web service access the API inside the cluster (pod-to-pod).

Is this solved? As far as I understand, it is not - we're just talking about some rewrites for the ingress, but I want to get rid of the ingress for the API completely. We need an option for pod-to-pod communication between the web and API pods.

What just came to my mind: rewriting DNS queries so that traffic remains internal to the cluster could probably be done as well? I will see whether that works.
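
For example, a CoreDNS rewrite could map the public Dashboard hostname to the in-cluster service, so pods resolve it without traffic ever leaving the cluster (a sketch; the hostname comes from my IngressRoute above, and the rewrite plugin must be enabled in the cluster's Corefile):

# Corefile snippet: answer queries for the external name with the in-cluster service
rewrite name dashboard.domain.test kubernetes-dashboard-web.kubernetes-dashboard.svc.cluster.local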

@floreks
Member

floreks commented Mar 7, 2024

An ingress for the API is no longer needed as of the latest release. We now use the gateway to route the traffic, and it exposes the whole Dashboard as if it were a single container.

Having pod-to-pod communication heavily complicates the whole setup, since we now have more than just API and Web, and it would require too much effort to maintain web and frontend compatibility with the other containers.

@georglauterbach

georglauterbach commented Mar 7, 2024

We now use the gateway to route the traffic and it exposes the whole Dashboard as it would be a single container.

Which gateway - do you mean the Gateway API?

EDIT: I see now; using Kong, I can simply point to the Kong proxy. This is nice and works quite well.

@Genmutant

Genmutant commented Mar 7, 2024

EDIT: Works now, see below.

I tried the new version with a Traefik IngressRoute to the kong proxy, but I only get a 500 error and no dashboard at all anymore. Are there more configuration options I didn't set correctly? Also, I'm quite new to Traefik, so I might have made an error in my router.

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: kubdashboard-ingressroute
  namespace: kubernetes-dashboard
spec:
  entryPoints: [websecure]
  routes:
    - kind: Rule
      match: PathPrefix(`/kubernetes-dashboard`)
      services:
      - kind: Service
        name: kubernetes-dashboard-kong-proxy
        namespace: kubernetes-dashboard
        port: kong-proxy
      middlewares:
      - name: stripkubdashboard
        namespace: kubernetes-dashboard

---

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: stripkubdashboard
  namespace: kubernetes-dashboard
spec:
  stripPrefix:
    prefixes:
      - "/kubernetes-dashboard"
    forceSlash: true

An IngressRoute directly to the web + api services worked, but the login didn't. The new kong proxy also looks much nicer to use.

It might be because Traefik is terminating the TLS while kong is still expecting TLS traffic?

EDIT:
Works now after explicitly enabling the HTTP proxy and binding to that one:

kong:
  proxy:
    http:
      enabled: true

@floreks
Member

floreks commented Mar 7, 2024

Great to hear that you were able to solve it. HTTP access to Dashboard is disabled in kong by default, simply to avoid users accessing Dashboard over HTTP in the end. If you are using an HTTPS proxy in front of kong, then it makes sense to enable it and enable HTTPS on the end proxy only.

@irreleph4nt

irreleph4nt commented Mar 7, 2024

Great to hear that you were able to solve it. HTTP access to Dashboard is disabled in kong by default, simply to avoid users accessing Dashboard over HTTP in the end. If you are using an HTTPS proxy in front of kong, then it makes sense to enable it and enable HTTPS on the end proxy only.

I am not sure I understand all of the above because I don't use Traefik. Is kong now a required component to make the dashboard work? Will I not be able to continue exposing both the /api and /web endpoints via nginx ingress?
Can you please provide a working config for nginx that allows hosting the dashboard in a subfolder?

@floreks
Member

floreks commented Mar 7, 2024

There is a working example embedded in our chart. Take a look at our Makefile test target, nginx dependency, and ingress configuration. It can create an ingress that serves Dashboard under a different path than root (/).

Kong is simply a gateway that connects all our containers together and exposes Dashboard on a single service endpoint. You don't have to worry about configuring the correct routes for every container. Just configure your own ingress in front of the kong service.

@irreleph4nt

There is a working example embedded in our chart. Take a look at our Makefile test target, nginx dependency, and ingress configuration. It can create an ingress that serves Dashboard under a different path than root (/).

Kong is simply a gateway that connects all our containers together and exposes Dashboard on a single service endpoint. You don't have to worry about configuring the correct routes for every container. Just configure your own ingress in front of the kong service.

I appreciate the feedback, but that's too convoluted for a setup that already consists of way more than just the dashboard. What I don't see, for example, is a way to make your new kong thing work with an ingress that reuses an existing SSL certificate. I will probably have to look for another product now. Thanks.

@irreleph4nt

irreleph4nt commented Mar 7, 2024

@floreks As the dashboard is an excellent product, I don't want to give up on it just because it currently can't use pre-existing secrets for ingress. Looking at your ingress manifest, would it be a lot to ask for you to include a conditional for the secretName here (Line 53), in the same way you have a conditional right above for the ingressClassName? That way we could easily set the name of an existing cert in values.yaml while none of the features you have built and tested need changing.

  {{- if not .Values.app.ingress.useDefaultIngressClass }}
  ingressClassName: {{ .Values.app.ingress.ingressClassName }}
  {{- end }}

@floreks
Member

floreks commented Mar 7, 2024

Would you want to provide your own name for the TLS secret? I can add a way to do that.

@irreleph4nt

If that's all it takes for the deployment to realize that an existing secret & certificate should be used for the dashboard instead of creating a new one (which would then trigger a request for a new certificate), then yes please, that's exactly what I would need. :)

@floreks
Member

floreks commented Mar 7, 2024

https://github.com/kubernetes/dashboard/releases/tag/kubernetes-dashboard-7.1.0

You can provide your own name via app.ingress.tls.secretName
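
For example, in values.yaml (the secret name here is a placeholder):

app:
  ingress:
    tls:
      secretName: my.tld-cert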

@irreleph4nt

Thank you so much! I will test the new dashboard chart over the weekend and provide feedback in case of any issues. :)
