
Need simple kubectl command to see cluster resource usage #17512

Closed
goltermann opened this issue Nov 19, 2015 · 105 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. priority/backlog Higher priority than priority/awaiting-more-evidence. sig/cli Categorizes an issue or PR as relevant to SIG CLI.

Comments

@goltermann
Contributor

Users are getting tripped up by pods not being able to schedule due to resource deficiencies. It can be hard to know whether a pod is pending because it just hasn't started up yet, or because the cluster doesn't have room to schedule it. http://kubernetes.io/v1.1/docs/user-guide/compute-resources.html#monitoring-compute-resource-usage helps, but isn't very discoverable (I tend to try a 'get' on a pending pod first, and only after waiting a while and seeing it 'stuck' in Pending do I use 'describe' and realize it's a scheduling problem).

This is also complicated by system pods being in a namespace that is hidden. Users forget that those pods exist, and 'count against' cluster resources.

There are several possible fixes offhand; I don't know which would be ideal:

  1. Develop a new pod state other than Pending to represent "tried to schedule and failed for lack of resources".

  2. Have kubectl get po or kubectl get po -o=wide display a column to detail why something is pending (perhaps the container.state that is Waiting in this case, or the most recent event.message).

  3. Create a new kubectl command to more easily describe resources. I'm imagining a "kubectl usage" that gives an overview of total cluster CPU and Mem, per node CPU and Mem and each pod/container's usage. Here we would include all pods, including system ones. This might be useful long term alongside more complex schedulers, or when your cluster has enough resources but no single node does (diagnosing the 'no holes large enough' problem).
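
A minimal sketch of how the information behind option 2 can be surfaced with today's kubectl (standard flags; FailedScheduling is the reason the scheduler attaches to its event when there is no room):

kubectl get pods --all-namespaces --field-selector=status.phase=Pending
kubectl get events --all-namespaces --field-selector=reason=FailedScheduling --sort-by=.lastTimestamp -o custom-columns=NAMESPACE:.metadata.namespace,POD:.involvedObject.name,MESSAGE:.message

The second command lists the scheduler's messages (e.g. "Insufficient cpu"), which is essentially the column option 2 asks for.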

@davidopp davidopp added team/control-plane priority/backlog Higher priority than priority/awaiting-more-evidence. labels Nov 19, 2015
@davidopp
Member

Something along the lines of (2) seems reasonable, though the UX folks would know better than me.

(3) seems vaguely related to #15743 but I'm not sure they're close enough to combine.

@chrishiestand
Contributor

chrishiestand commented Sep 15, 2016

In addition to the case above, it would be nice to see what resource utilization we're getting.

kubectl utilization requests might show (maybe kubectl util or kubectl usage are better/shorter):

cores: 4.455/5 cores (89%)
memory: 20.1/30 GiB (67%)
...

In this example, the aggregate container requests are 4.455 cores and 20.1 GiB and there are 5 cores and 30GiB total in the cluster.
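
In the meantime, a rough approximation of that summary can be scripted against the API. A sketch using jq; it assumes CPU requests are expressed as "NNNm" or whole cores and memory requests use Ki/Mi/Gi suffixes (other quantity formats are ignored), and it counts unset requests as zero:

kubectl get pods --all-namespaces -o json | jq -r '
  def cpu_m:  if . == null then 0
              elif endswith("m") then (rtrimstr("m") | tonumber)
              else (tonumber * 1000) end;
  def mem_mi: if . == null then 0
              elif endswith("Gi") then (rtrimstr("Gi") | tonumber * 1024)
              elif endswith("Mi") then (rtrimstr("Mi") | tonumber)
              elif endswith("Ki") then (rtrimstr("Ki") | tonumber / 1024)
              else (tonumber / 1048576) end;
  "cpu requests:    \([.items[].spec.containers[].resources.requests.cpu    | cpu_m ] | add / 1000) cores",
  "memory requests: \([.items[].spec.containers[].resources.requests.memory | mem_mi] | add / 1024) GiB"'

Comparing those totals against kubectl get nodes -o jsonpath='{.items[*].status.allocatable.cpu}' (and .status.allocatable.memory) gives the percentages in the example above.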

@xmik

xmik commented Dec 19, 2016

There is:

$ kubectl top nodes
NAME                    CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%   
cluster1-k8s-master-1   312m         15%       1362Mi          68%       
cluster1-k8s-node-1     124m         12%       233Mi           11% 

@ozbillwang

ozbillwang commented Jan 11, 2017

I use the command below to get a quick view of resource usage. It's the simplest way I've found.

kubectl describe nodes

@tonglil
Contributor

tonglil commented Jan 20, 2017

If there were a way to "format" the output of kubectl describe nodes, I wouldn't mind scripting my way to summarizing all nodes' resource requests/limits.

@from-nibly

Here is my hack: kubectl describe nodes | grep -A 2 -e "^\s*CPU Requests"

@jredl-va

@from-nibly thanks, just what i was looking for

@tonglil
Contributor

tonglil commented May 25, 2017

Yup, this is mine:

$ cat bin/node-resources.sh 
#!/bin/bash
set -euo pipefail

echo -e "Iterating...\n"

nodes=$(kubectl get node --no-headers -o custom-columns=NAME:.metadata.name)

for node in $nodes; do
  echo "Node: $node"
  kubectl describe node "$node" | sed '1,/Non-terminated Pods/d'
  echo
done

@k8s-github-robot
Contributor

@goltermann There are no sig labels on this issue. Please add a sig label by:
(1) mentioning a sig: @kubernetes/sig-<team-name>-misc
(2) specifying the label manually: /sig <label>

Note: method (1) will trigger a notification to the team. You can find the team list here.

@k8s-github-robot k8s-github-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label May 31, 2017
@0xmichalis
Contributor

@kubernetes/sig-cli-misc

@k8s-ci-robot k8s-ci-robot added the sig/cli Categorizes an issue or PR as relevant to SIG CLI. label Jun 10, 2017
@k8s-github-robot k8s-github-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jun 10, 2017
@alok87
Contributor

alok87 commented Jul 5, 2017

You can use the commands below to find the percentage CPU utilisation of your nodes.

alias util='kubectl get nodes | grep node | awk '\''{print $1}'\'' | xargs -I {} sh -c '\''echo   {} ; kubectl describe node {} | grep Allocated -A 5 | grep -ve Event -ve Allocated -ve percent -ve -- ; echo '\'''
Note: 4000m is the total cores in one node
alias cpualloc="util | grep % | awk '{print \$1}' | awk '{ sum += \$1 } END { if (NR > 0) { result=(sum*100)/(NR*4000); printf result/NR \"%\n\" } }'"

$ cpualloc
3.89358%

Note: 1600MB is the total memory in one node
alias memalloc='util | grep % | awk '\''{print $3}'\'' | awk '\''{ sum += $1 } END { if (NR > 0) { result=(sum*100)/(NR*1600); printf result/NR "%\n" } }'\'''

$ memalloc
24.6832%

@alok87
Contributor

alok87 commented Jul 21, 2017

@tomfotherby alias util='kubectl get nodes | grep node | awk '\''{print $1}'\'' | xargs -I {} sh -c '\''echo {} ; kubectl describe node {} | grep Allocated -A 5 | grep -ve Event -ve Allocated -ve percent -ve -- ; echo '\'''

@tomfotherby

@alok87 - Thanks for your aliases. In my case, this is what worked for me, given that we use bash and m3.large instance types (2 CPUs, 7.5G memory).

alias util='kubectl get nodes --no-headers | awk '\''{print $1}'\'' | xargs -I {} sh -c '\''echo {} ; kubectl describe node {} | grep Allocated -A 5 | grep -ve Event -ve Allocated -ve percent -ve -- ; echo '\'''

# Get CPU request total (we divide by NR*20 because each m3.large has 2 vCPUs (2000m))
alias cpualloc='util | grep % | awk '\''{print $1}'\'' | awk '\''{ sum += $1 } END { if (NR > 0) { print sum/(NR*20), "%\n" } }'\'''

# Get mem request total (we divide by NR*75 because each m3.large has 7.5G RAM)
alias memalloc='util | grep % | awk '\''{print $5}'\'' | awk '\''{ sum += $1 } END { if (NR > 0) { print sum/(NR*75), "%\n" } }'\'''
$ util
ip-10-56-0-178.ec2.internal
  CPU Requests	CPU Limits	Memory Requests	Memory Limits
  960m (48%)	2700m (135%)	630Mi (8%)	2034Mi (27%)

ip-10-56-0-22.ec2.internal
  CPU Requests	CPU Limits	Memory Requests	Memory Limits
  920m (46%)	1400m (70%)	560Mi (7%)	550Mi (7%)

ip-10-56-0-56.ec2.internal
  CPU Requests	CPU Limits	Memory Requests	Memory Limits
  1160m (57%)	2800m (140%)	972Mi (13%)	3976Mi (53%)

ip-10-56-0-99.ec2.internal
  CPU Requests	CPU Limits	Memory Requests	Memory Limits
  804m (40%)	794m (39%)	824Mi (11%)	1300Mi (17%)

$ cpualloc
48.05 %

$ memalloc 
9.95333 %

@nfirvine

#17512 (comment) kubectl top shows usage, not allocation. Allocation is what causes the insufficient CPU problem. There's a ton of confusion in this issue about the difference.

AFAICT, there's no easy way to get a report of node CPU allocation by pod, since requests are per container in the spec. And even then, it's difficult since .spec.containers[*].resources may or may not have the requests/limits fields (in my experience).
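
One workaround that tolerates the missing fields is custom-columns output, which prints <none> where nothing is set, for example:

kubectl get pods --all-namespaces -o 'custom-columns=NAMESPACE:.metadata.namespace,POD:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu,MEM_REQ:.spec.containers[*].resources.requests.memory'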

@misterikkit

/cc @misterikkit

@negz
Contributor

negz commented Feb 21, 2018

Getting in on this shell scripting party. I have an older cluster running the CA (cluster autoscaler) with scale-down disabled. I wrote this script to determine roughly how much I can scale down the cluster when it starts to bump up against its AWS route limits:

#!/bin/bash

set -e

KUBECTL="kubectl"
NODES=$($KUBECTL get nodes --no-headers -o custom-columns=NAME:.metadata.name)

function usage() {
	local node_count=0
	local total_percent_cpu=0
	local total_percent_mem=0
	local readonly nodes=$@

	for n in $nodes; do
		local requests=$($KUBECTL describe node $n | grep -A2 -E "^\\s*CPU Requests" | tail -n1)
		local percent_cpu=$(echo $requests | awk -F "[()%]" '{print $2}')
		local percent_mem=$(echo $requests | awk -F "[()%]" '{print $8}')
		echo "$n: ${percent_cpu}% CPU, ${percent_mem}% memory"

		node_count=$((node_count + 1))
		total_percent_cpu=$((total_percent_cpu + percent_cpu))
		total_percent_mem=$((total_percent_mem + percent_mem))
	done

	local readonly avg_percent_cpu=$((total_percent_cpu / node_count))
	local readonly avg_percent_mem=$((total_percent_mem / node_count))

	echo "Average usage: ${avg_percent_cpu}% CPU, ${avg_percent_mem}% memory."
}

usage $NODES

Produces output like:

ip-REDACTED.us-west-2.compute.internal: 38% CPU, 9% memory
...many redacted lines...
ip-REDACTED.us-west-2.compute.internal: 41% CPU, 8% memory
ip-REDACTED.us-west-2.compute.internal: 61% CPU, 7% memory
Average usage: 45% CPU, 15% memory.

@ylogx

ylogx commented Feb 21, 2018

There is also a pod option in the top command:

kubectl top pod

@nfirvine

@ylogx #17512 (comment)

@shtouff

shtouff commented Mar 4, 2018

My way to obtain the allocation, cluster-wide:

$ kubectl get po --all-namespaces -o=jsonpath="{range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}{range .spec.containers[*]}  {.name}:{.resources.requests.cpu}{'\n'}{end}{'\n'}{end}"

It produces something like:

kube-system:heapster-v1.5.0-dc8df7cc9-7fqx6
  heapster:88m
  heapster-nanny:50m
kube-system:kube-dns-6cdf767cb8-cjjdr
  kubedns:100m
  dnsmasq:150m
  sidecar:10m
  prometheus-to-sd:
kube-system:kube-dns-6cdf767cb8-pnx2g
  kubedns:100m
  dnsmasq:150m
  sidecar:10m
  prometheus-to-sd:
kube-system:kube-dns-autoscaler-69c5cbdcdd-wwjtg
  autoscaler:20m
kube-system:kube-proxy-gke-cluster1-default-pool-cd7058d6-3tt9
  kube-proxy:100m
kube-system:kube-proxy-gke-cluster1-preempt-pool-57d7ff41-jplf
  kube-proxy:100m
kube-system:kubernetes-dashboard-7b9c4bf75c-f7zrl
  kubernetes-dashboard:50m
kube-system:l7-default-backend-57856c5f55-68s5g
  default-http-backend:10m
kube-system:metrics-server-v0.2.0-86585d9749-kkrzl
  metrics-server:48m
  metrics-server-nanny:5m
kube-system:tiller-deploy-7794bfb756-8kxh5
  tiller:10m
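
Piping a similar jsonpath through awk gives a cluster-wide total (a rough sum that assumes every request carries the "m" suffix, as in the output above):

kubectl get po --all-namespaces -o=jsonpath="{range .items[*].spec.containers[*]}{.resources.requests.cpu}{'\n'}{end}" | awk '/m$/ { sum += $1 } END { print sum "m total CPU requested" }'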

@kierenj

kierenj commented Mar 13, 2018

This is weird. I want to know when I'm at or nearing allocation capacity. It seems a pretty basic function of a cluster. Whether it's a statistic that shows a high percentage or a textual error... how do other people know this? Do they just always use autoscaling on a cloud platform?

@dpetzold

dpetzold commented May 1, 2018

I authored https://github.com/dpetzold/kube-resource-explorer/ to address #3. Here is some sample output:

$ ./resource-explorer -namespace kube-system -reverse -sort MemReq
Namespace    Name                                               CpuReq  CpuReq%  CpuLimit  CpuLimit%  MemReq    MemReq%  MemLimit  MemLimit%
---------    ----                                               ------  -------  --------  ---------  ------    -------  --------  ---------
kube-system  event-exporter-v0.1.7-5c4d9556cf-kf4tf             0       0%       0         0%         0         0%       0         0%
kube-system  kube-proxy-gke-project-default-pool-175a4a05-mshh  100m    10%      0         0%         0         0%       0         0%
kube-system  kube-proxy-gke-project-default-pool-175a4a05-bv59  100m    10%      0         0%         0         0%       0         0%
kube-system  kube-proxy-gke-project-default-pool-175a4a05-ntfw  100m    10%      0         0%         0         0%       0         0%
kube-system  kube-dns-autoscaler-244676396-xzgs4                20m     2%       0         0%         10Mi      0%       0         0%
kube-system  l7-default-backend-1044750973-kqh98                10m     1%       10m       1%         20Mi      0%       20Mi      0%
kube-system  kubernetes-dashboard-768854d6dc-jh292              100m    10%      100m      10%        100Mi     3%       300Mi     11%
kube-system  kube-dns-323615064-8nxfl                           260m    27%      0         0%         110Mi     4%       170Mi     6%
kube-system  fluentd-gcp-v2.0.9-4qkwk                           100m    10%      0         0%         200Mi     7%       300Mi     11%
kube-system  fluentd-gcp-v2.0.9-jmtpw                           100m    10%      0         0%         200Mi     7%       300Mi     11%
kube-system  fluentd-gcp-v2.0.9-tw9vk                           100m    10%      0         0%         200Mi     7%       300Mi     11%
kube-system  heapster-v1.4.3-74b5bd94bb-fz8hd                   138m    14%      138m      14%        301856Ki  11%      301856Ki  11%

@aeciopires

Hello!

I created this script and I'm sharing it with you.

https://github.com/Sensedia/open-tools/blob/master/scripts/listK8sHardwareResources.sh

This script is a compilation of some of the ideas shared here. It can be extended and can help other people get the metrics more simply.

Thanks for sharing the tips and commands!

@laurybueno

For my use case, I ended up writing a simple kubectl plugin that lists CPU/RAM limits/reservations for nodes in a table. It also checks current pod CPU/RAM consumption (like kubectl top pods), but orders the output by CPU in descending order.

It's more of a convenience thing than anything else, but maybe someone else will find it useful too.

https://github.com/laurybueno/kubectl-hoggers

@Bec-k

Bec-k commented Sep 20, 2020

Whoa, what a huge thread, and still no proper solution from the Kubernetes team to properly calculate the current overall CPU usage of a whole cluster?

@raul1991

For those looking to run this on minikube, first enable the metrics-server add-on:
minikube addons enable metrics-server
and then run:
kubectl top nodes

@benjick

benjick commented Nov 14, 2020

If you're using Krew:

kubectl krew install resource-capacity
kubectl resource-capacity
NODE                                          CPU REQUESTS   CPU LIMITS     MEMORY REQUESTS   MEMORY LIMITS
*                                             16960m (35%)   18600m (39%)   26366Mi (14%)     3100Mi (1%)
ip-10-0-138-176.eu-north-1.compute.internal   2460m (31%)    4200m (53%)    567Mi (1%)        784Mi (2%)
ip-10-0-155-49.eu-north-1.compute.internal    2160m (27%)    2200m (27%)    4303Mi (14%)      414Mi (1%)
ip-10-0-162-84.eu-north-1.compute.internal    3860m (48%)    3900m (49%)    8399Mi (27%)      414Mi (1%)
ip-10-0-200-101.eu-north-1.compute.internal   2160m (27%)    2200m (27%)    4303Mi (14%)      414Mi (1%)
ip-10-0-231-146.eu-north-1.compute.internal   2160m (27%)    2200m (27%)    4303Mi (14%)      414Mi (1%)
ip-10-0-251-167.eu-north-1.compute.internal   4160m (52%)    3900m (49%)    4491Mi (14%)      660Mi (2%)

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 12, 2021
@abelal83

5 years and still open. I understand there are loads of tools available to check pod resource usage, but honestly, why not supply a standard one out of the box that's simple to use? Bundling Grafana and Prometheus with all the monitoring you could require would have been a godsend for my team. We wasted months experimenting with different solutions. Please, kube maintainers, give us something out of the box and close this issue!

@tculp

tculp commented Feb 12, 2021

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 12, 2021
@rokcarl

rokcarl commented Mar 22, 2021

Even with all the tools above (I currently use kubectl-view-utilization), none of them can answer: "Can I run 3 replicas of an application pod that requires 1500m CPU on my app nodes?" I have to do some number-crunching manually.
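
A rough way to answer that kind of question is to subtract each node's summed CPU requests from its allocatable CPU and see how many replicas of a given request fit per node. The sketch below does that for a hypothetical 1500m request; it assumes CPU quantities are whole cores or millicores and ignores memory, taints, affinity, and every other scheduling constraint:

#!/bin/bash
set -euo pipefail

REPLICA_CPU_M=1500   # hypothetical per-replica CPU request, in millicores

to_millicores() {    # "250m" -> 250, "2" -> 2000
  local v=$1
  if [[ $v == *m ]]; then echo "${v%m}"; else echo $(( v * 1000 )); fi
}

total_fit=0
for node in $(kubectl get nodes --no-headers -o custom-columns=NAME:.metadata.name); do
  alloc=$(to_millicores "$(kubectl get node "$node" -o jsonpath='{.status.allocatable.cpu}')")
  used=0
  # Sum the CPU requests of all non-terminated pods bound to this node.
  while read -r req; do
    if [[ -n $req ]]; then used=$(( used + $(to_millicores "$req") )); fi
  done < <(kubectl get pods --all-namespaces \
             --field-selector "spec.nodeName=$node,status.phase!=Succeeded,status.phase!=Failed" \
             -o jsonpath='{range .items[*].spec.containers[*]}{.resources.requests.cpu}{"\n"}{end}')
  free=$(( alloc - used ))
  fit=$(( free > 0 ? free / REPLICA_CPU_M : 0 ))
  echo "$node: ${free}m free -> fits $fit replica(s) of ${REPLICA_CPU_M}m"
  total_fit=$(( total_fit + fit ))
done
echo "Cluster-wide: room for $total_fit replica(s), ignoring memory and other constraints"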

@manigandham

Highly recommend another great tool called K9S: https://github.com/derailed/k9s

It's a separate CLI tool but uses the same config context for access and offers a lot of terminal/UI utility for monitoring and managing your cluster.

@j-zimnowoda

kubectl describe nodes | grep "Allocated resources" -A 9

@eddiezane
Member

From the long history of comments here, it seems everyone has different expectations, judging by the many issues and requests reported in this thread. This thread is more of a wiki now.

We'd be happy to see one of these plugins be proposed for upstreaming via a KEP. If someone wants to own this and bias for action with a decision, please open a KEP for discussion.

/close

@k8s-ci-robot
Contributor

@eddiezane: Closing this issue.

In response to this:

From the long history of comments here, it seems everyone has different expectations, judging by the many issues and requests reported in this thread. This thread is more of a wiki now.

We'd be happy to see one of these plugins be proposed for upstreaming via a KEP. If someone wants to own this and bias for action with a decision, please open a KEP for discussion.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@champak

champak commented Apr 15, 2021

In case folks are still listening in on this issue... Has anyone attempted using the standard resource usage APIs like getrusage() for software running inside containers/pods? For CPU stats it does not seem that it would be far off from what the node-level cgroup would have to report.

Memory stats seem more problematic. It's unclear whether, say, /sys/fs/cgroup/memory/<> from inside a container really reflects memory usage correctly. Being able to monitor resource usage from within an app (and then changing behavior in the app, etc.) is a neat capability. It seems unclear when that will be available in k8s, so I'm casting around for workarounds.
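
For reference, a minimal sketch of reading a container's own memory accounting straight from the cgroup filesystem (paths differ between cgroup v1 and v2, and the limit shows up as "max" when none is set):

# Run inside the container.
if [ -f /sys/fs/cgroup/memory.current ]; then
  # cgroup v2 (unified hierarchy)
  used=$(cat /sys/fs/cgroup/memory.current)
  limit=$(cat /sys/fs/cgroup/memory.max)
else
  # cgroup v1
  used=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
  limit=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
fi
echo "memory: ${used} bytes used, limit ${limit}"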

@dguyhasnoname

another tool to see resources node-wise, namespace-wise: https://github.com/dguyhasnoname/k8s-day2-ops/tree/master/resource_calcuation/k8s-toppur

@jackdpeterson

My hack (on k8s 1.18; EKS)

kubectl describe nodes | grep 'Name:\|Allocated' -A 5 | grep 'Name\|memory'

@shawncao

shawncao commented Nov 7, 2021

Lots of gems in this thread :) thanks all! (I wish some good writer would summarize and publish a quick cheat sheet for it.)

@valxv

valxv commented Nov 10, 2021

@jackdpeterson's answer adapted for PowerShell :)

kubectl describe nodes | Select-String -Pattern 'Allocated resources:' -Context 0,5

@sanderdescamps

sanderdescamps commented Nov 19, 2021

kubectl describe nodes | grep "Allocated resources" -A 9

Without having to count the lines:

kubectl describe nodes | awk '/Allocated resources/,/Events/' | grep -v "^Events:"

@solidsnack

It's not perfect, but we can get a serviceable summary with sed:

:;  kubectl describe nodes |
    sed -n '/^Allocated /,/^Events:/ { /^  [^(]/ p; } ; /^Name: / p'
Name:               ip100.k8s.computer
  Resource                    Requests           Limits
  --------                    --------           ------
  cpu                         6773m (90%)        14300m (190%)
  memory                      12851005952 (40%)  18577645056 (57%)
  ephemeral-storage           0 (0%)             0 (0%)
  hugepages-1Gi               0 (0%)             0 (0%)
  hugepages-2Mi               0 (0%)             0 (0%)
Name:               ip200.k8s.computer
  Resource                    Requests           Limits
  --------                    --------           ------
  cpu                         7082m (94%)        9500m (126%)
  memory                      26405455360 (83%)  24630806144 (77%)
  ephemeral-storage           0 (0%)             0 (0%)
  hugepages-1Gi               0 (0%)             0 (0%)
  hugepages-2Mi               0 (0%)             0 (0%)
Name:               ip300.k8s.computer
  Resource                    Requests           Limits
  --------                    --------           ------
  cpu                         7153m (95%)        8800m (117%)
  memory                      27759605888 (86%)  22996783232 (71%)
  ephemeral-storage           0 (0%)             0 (0%)
  hugepages-1Gi               0 (0%)             0 (0%)
  hugepages-2Mi               0 (0%)             0 (0%)

@panpan0000
Contributor

panpan0000 commented Sep 18, 2022

The script below only works if the kubectl describe node values are in the units "m" (CPU) and "Ki" (memory).

  # Assume unit CPU: m, Memory: Ki
  
  allocatable_cpu=$(kubectl describe node |grep Allocatable -A 5|grep cpu   | awk '{if (index($NF,"m") == 0) $NF=$NF*1000;sum+=$NF;} END{print sum;}')
  allocatable_mem=$(kubectl describe node |grep Allocatable -A 5|grep memory| awk '{sum+=$NF;} END{print sum;}')
  ## the allocated resource in request field
  allocated_req_cpu=$(kubectl describe node |grep Allocated -A 5|grep cpu   | awk '{sum+=$2; } END{print sum;}')
  allocated_req_mem=$(kubectl describe node |grep Allocated -A 5|grep memory| awk '{sum+=$2; } END{print sum;}')

  # finally, we got the resource spaces left
  usable_req_cpu=$(( allocatable_cpu - allocated_req_cpu ))
  usable_req_mem=$(( allocatable_mem - allocated_req_mem ))

  echo Usable CPU request: $usable_req_cpu m
  echo Usable Memory request: $usable_req_mem Ki
 

@julienlau

Getting in on this shell scripting party. I have an older cluster running the CA (cluster autoscaler) with scale-down disabled. I wrote this script to determine roughly how much I can scale down the cluster when it starts to bump up against its AWS route limits:

Here is an updated version of that shell function:

function kusage() {
    # Function returning resource usage (requests/limits) on the current kubernetes cluster
    local node_count=0
    local total_percent_cpu=0
    local total_percent_mem=0

    echo -e "NODE\t\t CPU_allocatable\t Memory_allocatable\t CPU_requests%\t Memory_requests%\t CPU_limits%\t Memory_limits%\t"
    for n in $(kubectl get nodes --no-headers -o custom-columns=NAME:.metadata.name); do
        local requests=$(kubectl describe node $n | grep -A2 -E "Resource" | tail -n1 | tr -d '(%)')
        local abs_cpu=$(echo $requests | awk '{print $2}')
        local percent_cpu=$(echo $requests | awk '{print $3}')
        local node_cpu=$(echo $abs_cpu $percent_cpu | tr -d 'mKi' | awk '{print int($1/$2*100)}')
        local allocatable_cpu=$(echo $node_cpu $abs_cpu | tr -d 'mKi' | awk '{print int($1 - $2)}')
        local percent_cpu_lim=$(echo $requests | awk '{print $5}')
        local requests=$(kubectl describe node $n | grep -A3 -E "Resource" | tail -n1 | tr -d '(%)')
        local abs_mem=$(echo $requests | awk '{print $2}')
        local percent_mem=$(echo $requests | awk '{print $3}')
        local node_mem=$(echo $abs_mem $percent_mem | tr -d 'mKi' | awk '{print int($1/$2*100)}')
        local allocatable_mem=$(echo $node_mem $abs_mem | tr -d 'mKi' | awk '{print int($1 - $2)}')
        local percent_mem_lim=$(echo $requests | awk '{print $5}')
        echo -e "$n\t ${allocatable_cpu}m\t\t\t ${allocatable_mem}Ki\t\t ${percent_cpu}%\t\t ${percent_mem}%\t\t\t ${percent_cpu_lim}%\t\t ${percent_mem_lim}%\t"

        node_count=$((node_count + 1))
        total_percent_cpu=$((total_percent_cpu + percent_cpu))
        total_percent_mem=$((total_percent_mem + percent_mem))
    done

    local avg_percent_cpu=$((total_percent_cpu / node_count))
    local avg_percent_mem=$((total_percent_mem / node_count))

    echo "Average usage (requests) : ${avg_percent_cpu}% CPU, ${avg_percent_mem}% memory."
}

[screenshot of kusage output]
