ui updates seem to be too slow #8835
Could you try disabling metrics and checking if it improves anything? Pass --metrics-provider=none.
@floreks ah, that did the trick! Now it's pretty fast!
I will have to investigate that at some point. There were no real changes to metrics gathering. Maybe there is an issue with metrics server responsiveness.
Hmm, I do not have metrics-server installed in my kind cluster. Without metrics-server, is this expected to be slow? If so, maybe the FAQ should make it more explicit? The chart values.yaml comments seem to be more explicit; maybe put that in the FAQ?
👋 I'm experiencing a similar issue after upgrading from a much earlier version. I added:

api:
  containers:
    args:
      - --metrics-provider=none

and things are substantially better on most pages. However, some pages still struggle to load quickly (especially the Workloads page) and I have fewer than 150 pods. I'm running k3s with the builtin. Some request timings:

edit: Eventually, things got super slow again after I clicked around a bunch. Then, I restarted the API server.
If you start clicking too much and spamming the API server with requests, throttling will kick in and significantly slow down your responses. Restarting the API server can 'reset' throttling and it will work faster. Normal use should be OK.
Hmm, I'm still a bit surprised that I can cause throttling by human-scale clicking around. To be clear, I wasn't trying to stress the system, just view different panels in the UI :) There aren't any timeouts, but this is still really slow, right?
That is definitely unexpected. What device are you using for your k3s installation?
4 cores of an AMD EPYC 7371. Some quick benchmarks:

So I guess my timings in the UI aren't that much slower if it's calling the equivalent of
We also can't directly compare kubectl to the UI, as we have to make more calls than kubectl to get some extra information and apply additional logic such as server-side pagination, sorting, and filtering. It will always be slower.
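As a rough illustration of that extra per-request work (a hypothetical sketch, not the Dashboard's actual implementation), server-side filtering, sorting, and pagination mean each UI list request does more than the raw resource fetch `kubectl get` performs:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// pod is a toy stand-in for a listed Kubernetes resource.
type pod struct {
	Name     string
	Restarts int
}

// page filters pods by a name substring, sorts them for stable display
// order, and slices out one page — the kind of work a UI backend adds on
// top of the raw list call.
func page(pods []pod, nameFilter string, pageSize, pageNum int) []pod {
	// Filter by substring match on the name.
	filtered := make([]pod, 0, len(pods))
	for _, p := range pods {
		if strings.Contains(p.Name, nameFilter) {
			filtered = append(filtered, p)
		}
	}
	// Sort by name for a stable display order.
	sort.Slice(filtered, func(i, j int) bool { return filtered[i].Name < filtered[j].Name })
	// Slice out the requested (zero-indexed) page.
	start := pageNum * pageSize
	if start >= len(filtered) {
		return nil
	}
	end := start + pageSize
	if end > len(filtered) {
		end = len(filtered)
	}
	return filtered[start:end]
}

func main() {
	pods := []pod{{"web-2", 0}, {"db-0", 1}, {"web-1", 0}, {"cache-0", 2}}
	// Second page of size 1 among pods matching "web".
	fmt.Println(page(pods, "web", 1, 1)) // prints [{web-2 0}]
}
```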
Yep, that makes sense. FWIW I jumped from
@sushain97 I have been further debugging the performance issue and pinned it down exactly. Add --sidecar-host=kubernetes-dashboard-metrics-scraper.kubernetes-dashboard. I honestly have no idea what is causing the in-cluster service proxy to be super slow compared to accessing the metrics scraper with an HTTP client through the service proxy directly. I don't see anything that changed there recently.
Hm, it doesn't feel too different to me. Here's what I have:

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  repo: https://kubernetes.github.io/dashboard/
  chart: kubernetes-dashboard
  targetNamespace: kubernetes-dashboard
  version: 7.2.0
  valuesContent: |-
    app:
      scheduling:
        nodeSelector:
          kubernetes.io/hostname: kube.local.skc.name
    # https://github.com/kubernetes/dashboard/issues/8835
    api:
      containers:
        args:
          - --metrics-provider=none
          - --sidecar-host=kubernetes-dashboard-metrics-scraper.kubernetes-dashboard
    kong:
      proxy:
        http:
          enabled: true
I encountered a similar thing once I upgraded to the newer versions of kubernetes-dashboard (lots of requests timing out). API server logs showed client-side throttling in effect:

Setting --metrics-provider=none helped, but that wasn't the first thing that I tried because I wanted to keep metrics. What I found was that in dashboard/modules/common/client/init.go (lines 48 and 64 at 567a38f), buildBaseConfig needs to fetch its config from whatever source it can, but then also apply its default settings on top of that, specifically the queries-per-second limit.

Below is a comparison of what I ended up using for my own use case, but I feel like I could clean it up as far as pointer usage goes; happy for any advice.
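The fix being described could be sketched like this (a hypothetical simplification using a struct that mirrors only the rate-limit fields of client-go's rest.Config, and illustrative default values rather than the project's actual numbers): load the config from whatever source is available, then raise any unset limits to explicit defaults so the client doesn't fall back to client-go's low built-in QPS.

```go
package main

import "fmt"

// restConfig mirrors just the rate-limit fields of client-go's rest.Config
// (hypothetical simplification). A zero QPS/Burst means "unset", in which
// case client-go falls back to conservative defaults (historically QPS=5,
// Burst=10) — the source of the client-side throttling discussed above.
type restConfig struct {
	QPS   float32
	Burst int
}

// applyDefaults keeps whatever the loaded config specifies, but raises
// unset limits to dashboard-friendly defaults instead of letting them fall
// back to client-go's low built-in ones.
func applyDefaults(c *restConfig, defaultQPS float32, defaultBurst int) {
	if c.QPS == 0 {
		c.QPS = defaultQPS
	}
	if c.Burst == 0 {
		c.Burst = defaultBurst
	}
}

func main() {
	// e.g. a config built from the in-cluster environment, limits unset.
	loaded := &restConfig{}
	applyDefaults(loaded, 200, 400) // illustrative defaults, not the project's actual values
	fmt.Printf("QPS=%v Burst=%v\n", loaded.QPS, loaded.Burst)
}
```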
You could actually re-enable metrics with that sidecar host change. If that doesn't help, then it might be your machine. When I was testing locally on my kind cluster, response times went down from 1-3 seconds to 100 ms on average for every view with all namespaces selected.
Ye, I have pinned it down to the in-cluster client too, but I actually ended up using a fake rate limiter, as e.g. the internal REST client derived from the client was also overriding some configuration for me. I will create a PR with a bunch of changes, including this fix, a bit later today. Thanks for your help anyway!
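A "fake" rate limiter in this context is one that admits every request immediately, so client-side throttling never applies (similar in spirit to client-go's flowcontrol.NewFakeAlwaysRateLimiter). A minimal sketch of the idea, using a cut-down hypothetical interface rather than client-go's real one:

```go
package main

import (
	"fmt"
	"time"
)

// rateLimiter is a cut-down, hypothetical version of the interface
// client-go's flowcontrol package uses for client-side throttling.
type rateLimiter interface {
	// Wait returns how long the caller must sleep before proceeding.
	Wait() time.Duration
}

// fakeRateLimiter never blocks: every request is admitted at once,
// effectively disabling client-side throttling.
type fakeRateLimiter struct{}

func (fakeRateLimiter) Wait() time.Duration { return 0 }

// doRequests sums the throttling delay incurred by n back-to-back requests.
func doRequests(rl rateLimiter, n int) time.Duration {
	var total time.Duration
	for i := 0; i < n; i++ {
		total += rl.Wait()
	}
	return total
}

func main() {
	// With the fake limiter, 100 back-to-back requests incur zero
	// throttling delay regardless of any configured QPS.
	fmt.Println(doRequests(fakeRateLimiter{}, 100)) // prints 0s
}
```

The trade-off is that the API server's own priority-and-fairness limits still apply server-side; the fake limiter only removes the client's self-imposed delay.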
What happened?
Updating any resource takes too long (> 1 s), which is substantially higher than the apparently equivalent kubectl command.
What did you expect to happen?
Expected to see deployments displayed in roughly the same amount of time as kubectl get deployments -A.

How can we reproduce it (as minimally and precisely as possible)?
Observe the time taken with kubectl, 0.068s:
Getting, and displaying the entire yaml, 0.140s:
Observe the time taken with the browser, 1.2s:
Anything else we need to know?
This was tested in a kind cluster, with traefik ingress controller, sending data to kong using http (without tls), and lifting all resource limits (also note that modifying the api replicas does not seem to make much difference):
The entire ansible playbook is at:
https://github.com/rgl/my-ubuntu-ansible-playbooks/tree/upgrade-kubernetes-dashboard
Have a look at the last commit in that branch to see just the kubernetes-dashboard changes.
What browsers are you seeing the problem on?
No response
Kubernetes Dashboard version
7.1.2
Kubernetes version
1.29.2
Dev environment
No response