
Kubectl: take pod overhead into account for node info #81929

Merged: 1 commit merged into kubernetes:master from egernst:kubectl-overhead on Aug 31, 2019

Conversation

@egernst (Contributor) commented Aug 26, 2019

What type of PR is this?
/kind feature

What this PR does / why we need it:

kubectl doesn't take Pod Overhead into account.

Special notes for your reviewer:

Unfortunately, I do not see any examples of feature gates being used with CLI tools. Because of this, I am not certain, but I think the appropriate approach is to include Pod.Spec.Overhead whenever it is defined in the Pod.Spec (which should only be the case if the feature is enabled, or if someone is adding it manually without the admission controller).

Add PodOverhead awareness to kubectl

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

See https://github.com/kubernetes/enhancements/tree/master/keps/sig-node

@egernst (Contributor, Author) commented Aug 26, 2019

/cc @tallclair -- PTAL. From a user perspective, if PodOverhead is being utilized, I think it is crucial that it shows up in kubectl node information. I'm concerned there isn't a way (that I saw) for feature gates to be consulted here; in lieu of that, I take the Overhead from Pod.Spec when it is defined. I'm not sure whether there's a way to do this on the node, instead of in the client application, to avoid this. WDYT?

Also, I am concerned that this is yet another almost-identical helper duplicated from the resource helpers in k/k.
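For context, "node info" here means the Allocated resources section of kubectl describe node, which totals requests and limits across the pods bound to the node. A minimal sketch of that roll-up, reusing the helper names from this PR's diff (the loop shape and the podsOnNode variable are illustrative assumptions, not kubectl's actual code):

    // Illustrative roll-up: per-pod totals are summed into node totals.
    // Because PodRequestsAndLimits now folds in pod.Spec.Overhead when it
    // is set, overhead flows into the node summary with no further changes.
    // (assumes corev1 = k8s.io/api/core/v1)
    reqs, limits := corev1.ResourceList{}, corev1.ResourceList{}
    for _, pod := range podsOnNode { // podsOnNode: non-terminated pods on the node (assumed)
        podReqs, podLimits := PodRequestsAndLimits(&pod)
        addResourceList(reqs, podReqs)
        addResourceList(limits, podLimits)
    }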

@egernst egernst changed the title Kubectl overhead Kubectl: take pod overhead into account for node info Aug 26, 2019
@tallclair (Member) commented Aug 27, 2019

clean up the debug statements, then lgtm

@egernst egernst force-pushed the egernst:kubectl-overhead branch from 204a4bb to 5b519ec Aug 27, 2019
@k8s-ci-robot k8s-ci-robot added size/S and removed size/M labels Aug 27, 2019
@tallclair (Member) commented Aug 27, 2019

Re: feature gates, I don't think there's a way to apply feature gates client side. I agree that checking for a non-nil field is a sufficient proxy.

I think there's some work underway to move the describe logic server-side, which would let you check feature gates. I don't know the status of that though.

@egernst egernst force-pushed the egernst:kubectl-overhead branch from 5b519ec to 7922d73 Aug 27, 2019
Signed-off-by: Eric Ernst <eric.ernst@intel.com>

@egernst (Contributor, Author) commented Aug 27, 2019

I think this should be part of 1.16 milestone.

A Member left a review:

LGTM. Just some questions:

@@ -40,6 +42,18 @@ func PodRequestsAndLimits(pod *corev1.Pod) (reqs, limits corev1.ResourceList) {
    		maxResourceList(reqs, container.Resources.Requests)
    		maxResourceList(limits, container.Resources.Limits)
    	}

    	// Add overhead for running a pod to the sum of requests and to non-zero limits:
    	if pod.Spec.Overhead != nil {

@Huang-Wei (Member) commented Aug 27, 2019

If the feature gate is disabled in apiserver, we have logic setting it to nil, right?

@egernst (Author, Contributor) commented Aug 28, 2019

No.

If someone is explicitly setting this in the Pod.Spec when the feature gate is not enabled, the value will persist. If the feature gate is enabled, the RuntimeClass admission controller will validate the Pod.Spec to make sure the defined overhead matches what is defined in the given RuntimeClass. If it doesn't match, the pod is rejected.
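A rough sketch of the admission check described here, with hypothetical names (this is only the shape of the rule as stated above, not the RuntimeClass admission controller's actual code):

    // Hypothetical sketch: with the feature gate on, a pod that declares
    // Overhead must match its RuntimeClass exactly or be rejected.
    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        nodev1beta1 "k8s.io/api/node/v1beta1"
        "k8s.io/apimachinery/pkg/api/equality"
    )

    func validateOverhead(pod *corev1.Pod, rc *nodev1beta1.RuntimeClass) error {
        if pod.Spec.Overhead == nil {
            return nil // nothing declared; nothing to check here
        }
        if rc.Overhead == nil ||
            !equality.Semantic.DeepEqual(rc.Overhead.PodFixed, pod.Spec.Overhead) {
            return fmt.Errorf("pod overhead does not match RuntimeClass %q", rc.Name)
        }
        return nil
    }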

@tallclair (Member) commented Aug 28, 2019

We are dropping it server-side if the feature gate is disabled: https://github.com/kubernetes/kubernetes/blob/master/pkg/api/pod/util.go#L399-L402
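The linked logic follows the usual drop-disabled-fields pattern on writes. Paraphrased (the overheadInUse helper name approximates the linked code; this is not a verbatim copy):

    // Paraphrase of the linked server-side drop: clear the field on writes
    // when the PodOverhead gate is off, unless the pre-update spec already
    // used it (so updates to existing objects do not lose data).
    if !utilfeature.DefaultFeatureGate.Enabled(features.PodOverhead) && !overheadInUse(oldPodSpec) {
        podSpec.Overhead = nil
    }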

@Huang-Wei (Member) commented Aug 29, 2019

That makes sense then.

    	if pod.Spec.Overhead != nil {
    		addResourceList(reqs, pod.Spec.Overhead)

    		for name, quantity := range pod.Spec.Overhead {

@Huang-Wei (Member) commented Aug 27, 2019

It seems the rule here is:

  • Add overhead to reqs regardless of whether any container has requests defined.
  • Add overhead to limits only when at least one container has a limit defined.

Is that the case?

@egernst (Author, Contributor) commented Aug 28, 2019

Correct.
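
Putting the two diff fragments above together, the confirmed rule reads as follows (a readability restatement of the change, not the merged code verbatim; addResourceList sums quantities into a ResourceList):

    // reqs/limits are the pod-level totals already computed from containers.
    if pod.Spec.Overhead != nil {
        // Overhead is always added to requests, even for resources that no
        // container requested.
        addResourceList(reqs, pod.Spec.Overhead)

        // Overhead is added to a limit only where a non-zero limit already
        // exists; otherwise the pod would gain a limit it never declared.
        for name, quantity := range pod.Spec.Overhead {
            if value, ok := limits[name]; ok && !value.IsZero() {
                value.Add(quantity)
                limits[name] = value
            }
        }
    }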

@egernst (Contributor, Author) commented Aug 28, 2019

Ping @tallclair @Huang-Wei — AFAIU code freeze is EOD Thursday, so would love help getting this in, or in a state that is acceptable. Thanks!

@tallclair (Member) commented Aug 28, 2019

/lgtm

@tallclair (Member) commented Aug 28, 2019

/assign @seans3

@seans3 mind approving this?

/milestone v1.16
/priority important-soon

@Huang-Wei (Member) commented Aug 29, 2019

/lgtm

We need kubectl owners to approve this.

@seans3 (Contributor) commented Aug 29, 2019

/lgtm
/approve

It would be nice to add a unit test in the future.

@k8s-ci-robot (Contributor) commented Aug 29, 2019

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: egernst, seans3

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@tallclair (Member) commented Aug 29, 2019

Follow up action item: deduplicate this code with

func PodRequestsAndLimits(pod *v1.Pod) (reqs, limits v1.ResourceList) {

k8s.io/kubectl can't depend on k8s.io/kubernetes, so both need to be extracted to a common location.

@nikopen (Member) commented Aug 31, 2019

/kind feature
/lgtm

@fejta-bot commented Aug 31, 2019

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@k8s-ci-robot k8s-ci-robot merged commit 3d17fd5 into kubernetes:master Aug 31, 2019
24 checks passed

  cla/linuxfoundation: egernst authorized
  pull-kubernetes-bazel-build: Job succeeded.
  pull-kubernetes-bazel-test: Job succeeded.
  pull-kubernetes-conformance-image-test: Skipped.
  pull-kubernetes-conformance-kind-ipv6: Skipped.
  pull-kubernetes-cross: Skipped.
  pull-kubernetes-dependencies: Job succeeded.
  pull-kubernetes-e2e-gce: Job succeeded.
  pull-kubernetes-e2e-gce-100-performance: Job succeeded.
  pull-kubernetes-e2e-gce-csi-serial: Skipped.
  pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
  pull-kubernetes-e2e-gce-iscsi: Skipped.
  pull-kubernetes-e2e-gce-iscsi-serial: Skipped.
  pull-kubernetes-e2e-gce-storage-slow: Skipped.
  pull-kubernetes-godeps: Skipped.
  pull-kubernetes-integration: Job succeeded.
  pull-kubernetes-kubemark-e2e-gce-big: Job succeeded.
  pull-kubernetes-local-e2e: Skipped.
  pull-kubernetes-node-e2e: Job succeeded.
  pull-kubernetes-node-e2e-containerd: Job succeeded.
  pull-kubernetes-typecheck: Job succeeded.
  pull-kubernetes-verify: Job succeeded.
  pull-publishing-bot-validate: Skipped.
  tide: In merge pool.