
Memory report showing error #52

Closed
donifer opened this issue Nov 26, 2019 · 8 comments · Fixed by #53
Labels
bug Something isn't working

Comments

donifer commented Nov 26, 2019

Describe the bug
I'm probably doing something wrong, but I wanted to understand why I'm getting the following output:

❯ kubectl view-utilization
Resource    Requests  %Requests      Limits  %Limits  Allocatable  Schedulable  Free
CPU             1834         91        1424       71         2002          168   168
Memory    3506438144        Err  3007315968      Err            0            0     0

System details

  • Operating system that client is running: macOS 10.14.6
  • kubectl client and server version (kubectl version):
❯ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-14T04:24:34Z", GoVersion:"go1.12.13", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
  • view-utilization plugin version (kubectl view-utilization -v):
❯ kubectl view-utilization -v
v0.3.1
donifer added the bug label Nov 26, 2019

etopeter (Owner) commented Nov 26, 2019

Thank you for reporting this bug. It would be helpful if you could run the debug command below and share your output.

kubectl get nodes -o=jsonpath="{range .items[*]}{.status.allocatable.memory}{'\n'}{end}" 

This is the same (simplified) snippet the plugin uses to grab node data. I think status.allocatable.memory must be returning something unexpected for some nodes.
This is most likely because the node memory is either very large or not reported as expected, and not the user's fault.
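
For a quick sanity check, you can also normalize those quantities to bytes yourself. The pipeline below is only a rough sketch: it handles the binary suffixes Ki/Mi/Gi, while Kubernetes also accepts plain byte counts and decimal suffixes.

kubectl get nodes -o=jsonpath="{range .items[*]}{.status.allocatable.memory}{'\n'}{end}" \
  | awk '
      /Ki$/ { print $1 * 1024;   next }    # kibibytes -> bytes
      /Mi$/ { print $1 * 1024^2; next }    # mebibytes -> bytes
      /Gi$/ { print $1 * 1024^3; next }    # gibibytes -> bytes
      { print $0, "(unhandled unit)" }     # anything else, e.g. a plain byte count
    '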

donifer (Author) commented Nov 26, 2019

Thanks for the quick reply! Here is the requested output:

100Mi   150Mi
32Mi    64Mi
256Mi   <no value>
120Mi   200Mi
64Mi    100Mi
120Mi   200Mi
64Mi    64Mi
64Mi    64Mi
64Mi    64Mi
256Mi   <no value>
32Mi    64Mi
200Mi   250Mi
32Mi    64Mi
32Mi    64Mi
32Mi    32Mi
32Mi    64Mi
32Mi    64Mi
<no value>      <no value>
300Mi   <no value>
300Mi   <no value>
70Mi    170Mi
70Mi    170Mi
20Mi    <no value>
50Mi    <no value>
20Mi    <no value>
50Mi    <no value>
80Mi    100Mi
80Mi    100Mi
125Mi   <no value>
125Mi   <no value>
<no value>      <no value>
16Mi    32Mi
16Mi    32Mi
<no value>      <no value>
64Mi    100Mi
20Mi    20Mi
150Mi   200Mi
200Mi   300Mi
120Mi   200Mi

etopeter (Owner) commented Nov 26, 2019

Thank you. I edited the command in my reply above to show node allocatable instead of pod requests, but you must have picked up the older version. At first I thought the issue was with pod resources, but I realized it's the nodes that aren't reporting correctly. I hope it's not too much to ask, but could you run:

kubectl get nodes -o=jsonpath="{range .items[*]}{.status.allocatable.memory}{'\n'}{end}"

The issue is with allocatable. The output above looks fine.

donifer (Author) commented Nov 26, 2019

Not a problem at all, appreciate your help.

❯ kubectl get nodes -o=jsonpath="{range .items[*]}{.status.allocatable.memory}{'\n'}{end}"

2250Mi
1574Mi

etopeter (Owner) commented Nov 26, 2019

It looks like one node is missing allocatable, so there may be an issue on the node itself.
You can add the node name to the output:

kubectl get nodes -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.status.allocatable.memory}{'\n'}{end}" 

Then you can run kubectl describe node on that node and check its status. In particular, look at the Allocatable: section in the middle; I expect there is some issue preventing the kubelet from reporting allocatable resources. If you don't want to share the output, that's fine: it will contain your node IPs and the other pods running on that node. If you prefer, you can redact that information, or see if you can spot the issue yourself and share your findings.

Regardless, I think we need better error handling on the computation side.
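
To sketch what I mean (the variable names here are only illustrative, not the plugin's actual code), the percentage computation could guard against a zero or unparsed allocatable value instead of dividing blindly:

# illustrative awk sketch, not the plugin's real code
END {
  if (alloc_mem > 0) printf "%d\n", 100 * req_mem / alloc_mem   # %Requests
  else               print "n/a"                                # allocatable missing or unparsed
}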

donifer (Author) commented Nov 26, 2019

I can't seem to find anything strange:

Node 1:

Capacity:
 cpu:                1
 ephemeral-storage:  51572172Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             2043372Ki
 pods:               110
Allocatable:
 cpu:                1
 ephemeral-storage:  51572172Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             1574Mi
 pods:               110

Node 2:

Capacity:
 cpu:                1
 ephemeral-storage:  61893400Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             3075652Ki
 pods:               110
Allocatable:
 cpu:                1
 ephemeral-storage:  61893400Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             2250Mi
 pods:               110

If by any chance you are on the Kubernetes Slack, feel free to DM me and I'd be happy to share the full output.

etopeter (Owner) commented Nov 26, 2019

Thank you for sharing the output. This is actually enough info to debug. Memory is reported as memory: 2250Mi and the code doesn't know how to handle that:

NR==FNR && $3 ~ /Ki?$/ { alloc_mem+=$3*1024; next };

A fix needs to be made to add Mi handling to the computation.
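
For example, something along these lines could work; this is just a sketch mirroring the simplified line quoted above (field number and variable name taken from that line), and the actual fix may look different:

NR==FNR && $3 ~ /Ki?$/ { alloc_mem += $3 * 1024;   next };   # Ki -> bytes (existing case)
NR==FNR && $3 ~ /Mi$/  { alloc_mem += $3 * 1024^2; next };   # Mi -> bytes
NR==FNR && $3 ~ /Gi$/  { alloc_mem += $3 * 1024^3; next };   # Gi -> bytes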

donifer (Author) commented Nov 26, 2019

Thanks for all the debugging help!
