kube-state-metrics provides wrong memory metrics for pods #748
Comments
You are most likely scraping both the
I cloned the kube-prometheus repo from GitHub and then I did:
I used kustomize only to add an ingress setup and some Grafana environment variables to support SMTP. There was something I did not understand, so I invested most of my time in optimizing my Java containers. After uninstalling kube-prometheus and installing the metrics-server, I can see now that everything is fine with my pods and also with my cluster.
Sorry for taking so long to reply, but could you elaborate on how they seem "wrong"? Also, which version of kube-prometheus are you installing?
I checked out the master branch from GitHub; that was a month ago.
Would you be open to trying again and sharing some data with us so we could debug the situation?
I'm sorry, I can't test this again at the moment because my environment can't be changed now.
I fear that we must close this issue, because I cannot provide more information or do any new testing. Do you have any suspicion about what might have been the root of my problem?
No problem! Please feel free to open a new issue should you encounter anything in the future! :)
I am having this issue too, with memory reported at double its size with
I can reproduce this with a container running a single process that uses about 7 GB of memory:
The Grafana dashboard shows me about 14 GB of memory used. When I use the TICK stack with the Docker input plugin, it shows the expected usage of about 7 GB. Also, the host memory increased by only 7 GB. In prometheus/kube-state-metrics you can also reproduce this with any process that uses more than half of the memory allowed by the Kubernetes limit. For example:
result:
via TICK stack:
via kube-prometheus-stack:
This is impossible...
Information about the Kubernetes node which is running this:
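A minimal sketch of such a reproduction (not the commenter's original setup; the pod name, image, and sizes are hypothetical): a single Python process allocates about 7 GiB against an 8 Gi limit, then sleeps so the different metric sources can be compared.

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: memhog
spec:
  containers:
  - name: memhog
    image: python:3.11-slim
    command: ["python3", "-c", "import time; a = b'x' * (7 * 1024**3); time.sleep(3600)"]
    resources:
      limits:
        memory: 8Gi
EOF
$ kubectl top pod memhog   # metrics-server view of the working set
# ...then compare against the pod's memory panel in the kube-prometheus Grafana dashboard.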
/reopen
/reopen please
Ran into the same issue. I believe this is because one tool measures total memory usage (which includes page cache) while the other measures only the working set.
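If that is the cause, it should be visible by querying both cAdvisor metrics for the same pod: container_memory_usage_bytes includes page cache, while container_memory_working_set_bytes (what kubectl top / metrics-server reports) excludes inactive file cache. A sketch, assuming Prometheus is reachable at localhost:9090 and using a placeholder pod name; the container!="" matcher drops the pod-level aggregate series, which would otherwise double-count in sums:

$ curl -s http://localhost:9090/api/v1/query \
    --data-urlencode 'query=container_memory_usage_bytes{pod="my-app", container!=""}'
$ curl -s http://localhost:9090/api/v1/query \
    --data-urlencode 'query=container_memory_working_set_bytes{pod="my-app", container!=""}'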
I'm facing the same issue (Client Version: v1.28.2). If anyone has resolved it, kindly let me know. I am using the command below.
What happened?
I installed the latest version of kube-prometheus, with kube-state-metrics 1.9.7, on my self-managed Kubernetes cluster.
Did you expect to see something different?
I expected that the metrics service would provide accurate metrics about pod memory consumption.
I verified the metrics with:
$ kubectl top pod my-app
and on the corresponding worker node with:
$ docker stats
The comparison shows that the metric data for a pod's memory usage reports double the size it should be.
How to reproduce it (as minimally and precisely as possible):
You can compare the metric data from kubectl top and docker stats; a sketch of the comparison follows below.
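A sketch of that comparison (pod and container names hypothetical). Note that the two tools do not report the same quantity: kubectl top shows the working set, while docker stats shows roughly usage minus cache, so they can legitimately differ somewhat even when nothing is broken:

$ kubectl top pod my-app                    # working set, via metrics-server
$ docker stats --no-stream my-app-container # on the worker node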
After I uninstalled the kube-prometheus stack and installed the metrics-server instead, all memory was displayed correctly and Kubernetes scheduling behaved as expected again.
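For reference, a sketch of a metrics-server install (an assumption, since the report does not show how it was installed); the upstream README documents applying the released manifest, and self-managed clusters sometimes additionally need the --kubelet-insecure-tls flag:

$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml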
Does anybody know how this can happen and what I can do about this issue?
See also discussions here:
https://stackoverflow.com/questions/64582065/why-is-openjdk-docker-container-ignoring-memory-limits-in-kubernetes
https://stackoverflow.com/questions/64440319/why-java-container-in-kubernetes-takes-more-memory-as-limits
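On the Java side of those threads, a common remedy (a sketch, not something taken from this issue) is to let the JVM size its heap from the container's cgroup memory limit instead of the host's memory; the jar name is a placeholder:

# -XX:+UseContainerSupport is on by default since JDK 10 (backported to 8u191);
# MaxRAMPercentage caps the heap at a fraction of the detected container limit.
$ java -XX:MaxRAMPercentage=75.0 -jar my-app.jar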
Environment
Debian Buster
Kubernetes 1.19.3
kube-state-metrics 1.9.7