Setting istio sidecar proxy resource request/limit #16126
You can set requests per proxy with an annotation (sketch below). |
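A minimal sketch of what this looks like in practice, assuming the standard sidecar.istio.io/proxyCPU and sidecar.istio.io/proxyMemory annotations are what is being referred to; the workload name, image, and values are illustrative:

```yaml
# Pod-template annotations that override the sidecar's resource *requests*.
# At this point in the thread, limits could not be overridden this way.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                      # hypothetical workload name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        sidecar.istio.io/proxyCPU: "200m"
        sidecar.istio.io/proxyMemory: "256Mi"
    spec:
      containers:
      - name: app
        image: example/app:latest   # hypothetical image
```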
Thanks for the information @howardjohn! |
I would also like this feature. I know what needs to be done to get this to work. Should I open a PR? |
I remember @ostromart mentioning this, I forget if it was "we should do this" or "we should not do this". If you do, you should submit the PR against https://github.com/istio/installer |
I think this is absolutely a needed feature. I want to get guaranteed cores on Kubernetes nodes so that my low-latency applications don't get throttled by Kubernetes. It looks like another user and I have the same concern. Perhaps we could talk about it at the next community meeting? As far as implementation, that's actually not what I was thinking; this is what I think we should modify to do it on a per-pod basis: https://github.com/istio/istio/blob/master/install/kubernetes/helm/istio/files/injection-template.yaml Please lmk if I am incorrect. I will open a PR as soon as I get confirmation. |
@howardjohn ^ |
This feature (per-pod resource limits for the sidecar) would be useful for me as well. My concern is that when I override requests with the annotation, the sidecar limits get lost, which may lead to uncontrolled resource usage by the sidecar, which I'd like to avoid. |
We would love to have a PR adding this. We should approach this by adding explicit flags to set resource limits, in addition to the current flags that set resource requests. We should be able to set both a global default limit and a per-pod limit, like we do with resource requests today. @mafarinacci we'll be happy to review a PR as soon as you've got it ready. |
Hi, please let me know if this feature was released in Istio 1.3.0 or if it is still in progress. |
It did not make 1.3; you can set requests but not limits. |
@sebarys As far as I understand, this annotation is handled in the istio-sidecar-injector template, so you can add any annotation you want there. For example, I changed my template in this way:
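A rough sketch of what such a modified template fragment could look like; this approximates the Go-template syntax used in the injector ConfigMap, and the *Limit annotation names and default values are the custom additions this modified template reads, so treat them as assumptions:

```yaml
# Fragment of the istio-proxy container in the injection template.
# The request annotations are the standard ones; the *Limit annotation
# names are custom additions made in this modified template.
resources:
  requests:
    {{ if (isset .ObjectMeta.Annotations `sidecar.istio.io/proxyCPU`) -}}
    cpu: "{{ index .ObjectMeta.Annotations `sidecar.istio.io/proxyCPU` }}"
    {{ else -}}
    cpu: 100m
    {{ end -}}
    {{ if (isset .ObjectMeta.Annotations `sidecar.istio.io/proxyMemory`) -}}
    memory: "{{ index .ObjectMeta.Annotations `sidecar.istio.io/proxyMemory` }}"
    {{ else -}}
    memory: 128Mi
    {{ end -}}
  limits:
    {{ if (isset .ObjectMeta.Annotations `sidecar.istio.io/proxyCPULimit`) -}}
    cpu: "{{ index .ObjectMeta.Annotations `sidecar.istio.io/proxyCPULimit` }}"
    {{ end -}}
    {{ if (isset .ObjectMeta.Annotations `sidecar.istio.io/proxyMemoryLimit`) -}}
    memory: "{{ index .ObjectMeta.Annotations `sidecar.istio.io/proxyMemoryLimit` }}"
    {{ end -}}
```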
Thus, if I want fully custom memory/CPU requests and limits, I add the following annotations to the deployment:
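For illustration, the relevant portion of the Deployment would then look something like this; the *Limit annotation names must match whatever the modified template reads, so they are an assumption here, and the values are illustrative:

```yaml
# Deployment fragment: pod-template annotations read by the modified injector template.
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/proxyCPU: "500m"
        sidecar.istio.io/proxyMemory: "512Mi"
        sidecar.istio.io/proxyCPULimit: "500m"      # custom limit annotation (assumed name)
        sidecar.istio.io/proxyMemoryLimit: "512Mi"  # custom limit annotation (assumed name)
```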
Warning: this example doesn't use global-options! Be careful when you change your sidecar template. |
Hmm, can we set limits the same as requests? |
I have tried to modify istio-sidecar-injector.yaml as below and then add the annotation in the application deployment, but only the resource requests work; the limits are not added to istio-proxy. Any comment?
|
I would also love for an annotation that helps to set the limit per-pod, but what about adding resource requests and limits to the Sidecar custom resource? It seems useful to be able to control the requests and limits using a workload selector. |
@sjmiller609 that seems like it could be a pretty good UX but would be a pretty invasive change; right now the sidecar injection doesn't know about any CRs, so it doesn't read the Sidecar resource. It could, and maybe should, but doesn't today. |
I actually like that idea more as I think about it |
Just so this doesn't get buried, I opened #19555 to track that idea. It's unlikely to be done in the short term though. |
That's ok, I really appreciate you taking the time to consider it! |
To set global restrictions on sidecars: --set values.global.proxy.resources.limits.memory="300Mi" (a fuller example is sketched below).
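For context, a complete command with global sidecar requests and limits might look roughly like this. It is shown with a recent istioctl; with plain Helm the keys typically drop the leading values. prefix, and the exact tool and default values here are assumptions, not taken from this thread:

```sh
istioctl install \
  --set values.global.proxy.resources.requests.cpu="100m" \
  --set values.global.proxy.resources.requests.memory="128Mi" \
  --set values.global.proxy.resources.limits.cpu="500m" \
  --set values.global.proxy.resources.limits.memory="300Mi"
```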
@prageethw are there any default values for memory request/limit with Istio 1.4? If so, what are those? |
I think it's actually
And btw, we do not have the same problem with sidecars whose pods and services have no accompanying VirtualService/Gateway. |
@Arnold1 you can find all defaults here: https://istio.io/docs/reference/config/installation-options/#global-options |
Having the option to set the limits via an annotation would help us as well. If there is a LimitRange and quota applied on the namespace and you override the CPU and memory requests, the limits are not set. This will cause pods to fail to start with errors such as
Usually when not setting requests/limits the defaults are applied, but perhaps there's an ordering issue with all the controllers modifying the spec that is causing this. Patching the sidecar injector template ourselves is not ideal, as this template can change between versions. |
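For reference, a quota along these lines is the kind of policy that forces every container, including the injected sidecar, to declare limits; this is a generic example, not the exact objects from that cluster, and the names are placeholders:

```yaml
# A ResourceQuota that constrains limits.* rejects any pod whose containers
# lack CPU/memory limits (unless a LimitRange supplies defaults).
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota       # example name
  namespace: my-namespace   # example namespace
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
```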
* Add annotations for setting cpu/memory limits on sidecar When a limitrange is active, not setting the limits will result in an error. This patch will allow setting limits for the sidecar. Fixes #16126 Change-Id: I031d510812a867c2790eaa9af2f51145b4e5f006 * update test Change-Id: Ic610fc6e9c9a6d814083bd7434704592a5c8f92c
@howardjohn - any chance of getting #22395 into 1.4 too? |
Any idea when PR #23053 might land in 1.4? |
@sebarys |
Do we have anything yet for istio-proxy? I tried adding the annotation to my particular deployment, but both the limits and the requests remained the same. |
As of Istio 1.10, the solution is already implemented and it's pretty similar to @pavelzhurov's post. The istio-sidecar-injector template ConfigMap has the limits and resources parameters that allow us to set these values on a per-deployment basis. Those params are annotations at the deployment manifest level and look like this:
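The annotations being referred to presumably look like this; the values are illustrative, and setting each request equal to its limit is what lets the sidecar qualify for Guaranteed QoS:

```yaml
# Deployment fragment: per-pod overrides for the injected sidecar's resources.
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/proxyCPU: "1000m"
        sidecar.istio.io/proxyMemory: "1Gi"
        sidecar.istio.io/proxyCPULimit: "1000m"
        sidecar.istio.io/proxyMemoryLimit: "1Gi"
```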
With this config, you may be able to have QoS Guaranteed class for your pods. |
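If you go this route, the resulting QoS class can be checked directly on the pod, for example:

```sh
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.status.qosClass}'
```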
Hi @ricarhincapie, thanks for your suggestion. You are right: after setting these annotations, the istio-proxy container will have the same cpu/memory request and limit. But note that the pod is still not a Guaranteed QoS pod, because istio-init has Burstable settings, which means the pod ultimately ends up with Burstable QoS. There is also some discussion about the init container here: k8s-initcontainer-discussion. I would like to find a way to set the resource requests/limits for the istio-init container as well :) Best, |
I thought the init container uses identical resources as the sidecar?
|
Yes, I can confirm that with this setting the pod can be QoS Guaranteed; the init container has the same resources as the sidecar. Thanks all. |
Hello :)
Describe the feature request
Is there an option to set istio-proxy sidecar requests/limits per application (Pod)? For some of our applications we would like to use Guaranteed QoS (https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/), which requires setting limit == request for all Pod containers. Without such functionality we're losing this feature.
Describe alternatives you've considered
We've considered using the global config value (https://istio.io/docs/reference/config/installation-options/#global-options), but if I understood correctly it can only be used during Istio installation and is not applied to each Pod separately, which doesn't match the described case.
Some of our applications handle more traffic and so need higher requested resource values; others need lower ones. As istio-proxy resources are roughly proportional to application container resources (e.g. an application handling much more traffic than another will also require more resources at the istio-proxy sidecar layer), I would like an option to align the resources assigned to istio-proxy per Pod.
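To make the Guaranteed QoS point concrete, here is a sketch (hypothetical names, images, and values; sidecar args omitted) of why the sidecar matters: even if the application container has requests equal to limits, an injected istio-proxy whose requests differ from its limits pulls the whole pod down to Burstable:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: latency-sensitive-app        # hypothetical pod
spec:
  containers:
  - name: app
    image: example/app:latest        # hypothetical image
    resources:                       # requests == limits: on its own, Guaranteed
      requests: { cpu: "2", memory: 2Gi }
      limits:   { cpu: "2", memory: 2Gi }
  - name: istio-proxy                # injected sidecar with default-style settings
    image: docker.io/istio/proxyv2:1.3.0   # version illustrative
    resources:                       # requests != limits, so the pod becomes Burstable
      requests: { cpu: 100m, memory: 128Mi }
      limits:   { cpu: "2",  memory: 1Gi }
```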
Affected product area (please put an X in all that apply)
[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[ ] Networking
[X] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure