cri_stats_provider: do not consider exited containers when calculating cpu usage #83504
Conversation
Welcome @ashleykasim!
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Hi @ashleykasim. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/cc
result = append(result, refs[0])
continue
}
found := false
for i := 0; i < len(refs); i++ {
if refs[i].State == runtimeapi.ContainerState_CONTAINER_RUNNING {
Nitpick: “terminated” and “not running” aren’t the same thing: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-states
What is the suggestion here? Rename the function? From a logical perspective, only "running" containers are consuming resources, so only running containers should be used when calculating pod resource usage. "Waiting" containers should also be excluded, as they are also not consuming resources.
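For context, the CRI runtime API models a container's state as a small enum, and the check in this diff keeps only the running state; created ("waiting") and exited (terminated) containers are both skipped. A minimal illustration follows: the helper name is hypothetical, and the import path may vary by Kubernetes version.

    import runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"

    // isRunning reports whether a container should count toward pod CPU usage:
    // only CONTAINER_RUNNING does; CONTAINER_CREATED and CONTAINER_EXITED do not.
    func isRunning(c *runtimeapi.Container) bool {
        return c.State == runtimeapi.ContainerState_CONTAINER_RUNNING
    }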
/priority important-soon
Welcome to the k8s project :) thank you for the diff!
/ok-to-test
I agree with @vllry's nit, but other than that, this makes a lot of sense to me.
/assign @dashpole
// It only removes a terminated container when there is a running instance
// of the container.
// removeTerminatedContainers removes all terminated containers since they should
// not be used for usage calculations.
func removeTerminatedContainers(containers []*runtimeapi.Container) []*runtimeapi.Container {
I did some more digging, and I think a lot of this function can be removed.

I can only find 2 references to removeTerminatedContainers, and neither uses the fact that the output is sorted (by container ID and by create time). Because of that, I think we can remove the whole map and sort process.

@mattjmcnaughton am I barking up the wrong tree?
This was originally added in the cadvisor_stats_provider as a fix for #47853. Its primary functionality was deduping due to a race condition between cadvisor and cgroup removal. If deduping is no longer necessary (fwiw I have not observed this race condition happening between crio and cgroups), this function can indeed be greatly simplified. I did some searching through the commit history and was unable to determine why the deduping functionality was ported forward from cadvisor_stats_provider when cri_stats_provider was added. I have preserved the deduping logic to minimize the impact of this diff, but personally I feel this can probably be removed as well.
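If the deduping really is unnecessary, removeTerminatedContainers could shrink to little more than a state filter. A rough sketch under that assumption, using the same runtimeapi import as the surrounding code (illustrative only, not the change in this PR):

    // Possible simplification, assuming the cadvisor-era dedup/sort behavior is
    // truly unneeded: keep only running containers and drop the map keyed by
    // container name/attempt and the sorting by ID and create time entirely.
    func removeTerminatedContainers(containers []*runtimeapi.Container) []*runtimeapi.Container {
        result := make([]*runtimeapi.Container, 0, len(containers))
        for _, c := range containers {
            if c.State == runtimeapi.ContainerState_CONTAINER_RUNNING {
                result = append(result, c)
            }
        }
        return result
    }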
/retest
/retest
/retest
/assign @yujuhong
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ashleykasim, dashpole

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
What type of PR is this?
/kind bug
What this PR does / why we need it:
When using the cri_stats_provider with cri-o, the current implementation of remove_terminated_containers() causes the kubelet to report inaccurate stats for pods that have terminated and restarted containers, for example during a crashloop. This in turn causes undesired behavior in the HPA, as it reports seeing an unrealistically high cpu usage and scales up to max.

This is because the calculation for usageNanoCores subtracts the cached UsageCoreNanoSeconds (a nonzero number) from the newly observed UsageCoreNanoSeconds (which is set to zero for an exited container), resulting in a nonsensical number. Note that the check for a zero or negative interval will not trip in this case, as the timestamp will continue to be nonzero and increasing.
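To make the failure mode concrete, here is a minimal, self-contained sketch of this kind of delta calculation (the function and parameter names, and the exact formula, are assumptions for illustration, not the kubelet's actual implementation):

    package main

    import "fmt"

    // usageNanoCores is a hypothetical sketch of the delta calculation described
    // above: the rate is the change in cumulative UsageCoreNanoSeconds divided by
    // the elapsed time between the two samples.
    func usageNanoCores(cachedUsage, newUsage uint64, cachedTimeNs, newTimeNs int64) uint64 {
        elapsedNs := newTimeNs - cachedTimeNs
        if elapsedNs <= 0 {
            // The zero/negative-interval guard never trips for an exited container,
            // because the stats timestamp keeps increasing.
            return 0
        }
        // Unsigned subtraction: with newUsage == 0 (exited container) and a nonzero
        // cached value, the difference wraps around and yields an absurdly large rate.
        return uint64(float64(newUsage-cachedUsage) / float64(elapsedNs) * 1e9)
    }

    func main() {
        // Cached sample: 5s of cumulative CPU time; new sample from the exited container: 0.
        fmt.Println(usageNanoCores(5_000_000_000, 0, 1_000_000_000, 11_000_000_000))
    }

Running this prints a rate on the order of 10^18 nanocores, which is the kind of value that drives the HPA to scale to max.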
Removing all exited containers from the cpu usage calculation fixes this issue. All exited containers will report 0 for UsageCoreNanoSeconds and therefore will never be significant when calculating cpu usage for the pod, so they should be removed from consideration.

Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: