
feat(helm): default proxy-init resource requests to proxy values #12741

Merged: 7 commits into main on Jun 24, 2024

Conversation

@mateiidavid (Member) commented on Jun 17, 2024

Default resource values for linkerd-init are not always the right fit. We ship defaults to ensure proxy-init does not stand in the way of the Guaranteed QoS class (linkerd-init resource limits and requests cannot be configured in any other way).

Instead of shipping default values that can be overridden, we can re-use the proxy's configuration values. For a pod to be in the Guaranteed QoS class, the proxy's resources have to be set anyway; if we re-use the same values for proxy-init, we always request exactly as much CPU and memory as the proxy does.

  • linkerd-init now defaults to the proxy's resource values
  • proxy resource annotations (e.g. config.linkerd.io/proxy-cpu-limit) now also apply to linkerd-init
  • the Helm chart and docs have been updated to reflect the deprecated values
  • tests no longer use ProxyInit.Resources

UPGRADE NOTE:

  • Deprecates the proxyInit.resources field in the Helm values.
    • If specified, it is a no-op (no hard failures); see the values sketch below.
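
For illustration, a minimal values sketch of the new behaviour (the amounts below are made up, not chart defaults): only the proxy's resources are configured, and linkerd-init now picks up the same requests and limits.

# values sketch (illustrative amounts): only the proxy resources are set;
# after this change linkerd-init requests/limits the same CPU and memory.
proxy:
  resources:
    cpu:
      request: 100m
      limit: 100m
    memory:
      request: 25Mi
      limit: 25Mi
# proxyInit.resources is deprecated; if it is still set it is ignored (no hard failure).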

Closes #11320

Notes to reviewers:

You can test by building and then installing Linkerd:

$ bin/docker-build
$ bin/image-load --k3d

Test cases

  1. Installing with overrides for the control plane:
:; bin/linkerd install \
       --set proxyInit.resources.cpu.limit="50m" \
       --set proxyInit.resources.memory.request="50" \
       --set destinationProxyResources.cpu.limit="100m" \
       --set identityProxyResources.memory.request="200" \
       | k apply -f -

We expect to see:

  • the destination proxy-init with cpu set to 100m
  • proxyInit.resources is a no-op (no proxy-init should have 50m cpu and/or 50 memory)
  • identity's proxy and proxy-init should request 200 memory
  • the proxy-injector has no proxy-init resource config
:; k get po -n linkerd linkerd-destination-869896d648-bhr6d -o yaml | yq '.spec.initContainers[] | select ( .name == "linkerd-init" ).resources'
limits:
  cpu: 100m
requests:
  cpu: 100m

:; k get po -n linkerd linkerd-destination-869896d648-bhr6d -o yaml | yq '.spec.containers[] | select ( .name == "linkerd-proxy" ).resources'
limits:
  cpu: 100m
requests:
  cpu: 100m

:; k get po -n linkerd linkerd-identity-595d6c685d-m2jtj -o yaml | yq '.spec.containers[] | select ( .name == "linkerd-proxy" ).resources'
requests:
  memory: "200"

:; k get po -n linkerd linkerd-identity-595d6c685d-m2jtj -o yaml | yq '.spec.initContainers[] | select ( .name == "linkerd-init" ).resources'
requests:
  memory: "200"

:; k get po -n linkerd linkerd-proxy-injector-77bd946cbc-sv2sb -o yaml | yq '.spec.initContainers[] | select ( .name == "linkerd-init" ).resources'
{}

:; k get po -n linkerd linkerd-proxy-injector-77bd946cbc-sv2sb -o yaml | yq '.spec.containers[] | select ( .name == "linkerd-proxy" ).resources'
{}

Now, if we upgrade and set the proxy's resources, we should see other deployments inherit them:

:; bin/linkerd upgrade --set proxy.resources.cpu.limit='10m' | k apply -f -

We expect:

  • To see a pod with 10m cpu requests and limits for proxy and proxy-init
:; kgp
NAME                    READY   STATUS     RESTARTS   AGE
nginx-cbdccf466-fr2q4   1/1     Running    0          15s
nginx-d66c79585-nzt7b   0/2     Init:0/1   0          3s

:; k get po nginx-d66c79585-nzt7b -o yaml | yq '.spec.initContainers[] | select ( .name == "linkerd-init" ).resources'
limits:
  cpu: 10m
requests:
  cpu: 10m

:; k get po nginx-d66c79585-nzt7b -o yaml | yq '.spec.containers[] | select ( .name == "linkerd-proxy" ).resources'
limits:
  cpu: 10m
requests:
  cpu: 10m

And finally, we can override this with an annotation. We expect:

  • the deployment to inherit the annotation value
:; k annotate ns default config.linkerd.io/proxy-cpu-limit=30m
namespace/default annotated

:; k rollout restart deploy
deployment.apps/nginx restarted

:; kgp
NAME                     READY   STATUS     RESTARTS   AGE
nginx-d66c79585-nzt7b    2/2     Running    0          88s
nginx-86bd875544-fhjjw   0/2     Init:0/1   0          2s

:; kgp nginx-86bd875544-fhjjw -o yaml |  yq '.spec.initContainers[] | select ( .name == "linkerd-init" ).resources'
limits:
  cpu: 30m
requests:
  cpu: 30m

:; k get po nginx-86bd875544-fhjjw -o yaml |  yq '.spec.containers[] | select ( .name == "linkerd-proxy" ).resources'
limits:
  cpu: 30m
requests:
  cpu: 30m

@mateiidavid requested a review from a team as a code owner on June 17, 2024 15:45
Comment on lines +128 to +133
{{- if not (empty .Values.destinationProxyResources) }}
{{- $c := dig "cores" .Values.proxy.cores .Values.destinationProxyResources }}
{{- $_ := set $tree.Values.proxy "cores" $c }}
{{- $r := merge .Values.destinationProxyResources .Values.proxy.resources }}
{{- $_ := set $tree.Values.proxy "resources" $r }}
{{- end }}
Member:

IIUC, although this move wasn't strictly necessary, it's cleaner to have these in a more global "scope" given it applies to both the proxy and linkerd-init, right? If so, why not do the same for the identity workload?

Member Author:

The simple answer is...I forgot 😅 good spot! fixed now.
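
For context, the fix for the identity workload would presumably mirror the destination snippet quoted above; a sketch only, with the variable names assumed from that snippet (the real change is in the PR diff):

{{- if not (empty .Values.identityProxyResources) }}
{{- /* Same pattern: identity-specific overrides take precedence over the global proxy values */ -}}
{{- $c := dig "cores" .Values.proxy.cores .Values.identityProxyResources }}
{{- $_ := set $tree.Values.proxy "cores" $c }}
{{- $r := merge .Values.identityProxyResources .Values.proxy.resources }}
{{- $_ := set $tree.Values.proxy "resources" $r }}
{{- end }}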

@alpeb (Member) left a comment:

Thanks for the detailed instructions; this looks good to me! 👍

@mateiidavid merged commit f05d1e9 into main on Jun 24, 2024
40 checks passed
@mateiidavid deleted the matei/init-container-resources branch on June 24, 2024 11:37
Successfully merging this pull request may close these issues.

The linkerd-init initContainer CPU request is too high