Bug description
Exponential CPU Utilization in Istio Proxy after 1.3.*.
When upgrading from Istio 1.2.5 to 1.3.0, I noticed a dramatic increase in CPU utilization in the istio-ingressgateway. I admittedly didn't look into it directly after the upgrade; I just assumed there were some underlying changes that required a bit more CPU.
Then, when deploying a new workload to the cluster, it failed due to unavailable CPU. I ran top on the pods and noticed that the HPA had scaled the pods up to the maximum of 5, with each reaching close to 1000m CPU rather than the 10-100m we normally saw.
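For anyone trying to reproduce this, the scaling behavior described above can be inspected with standard kubectl commands (a sketch; the istio-system namespace and the istio-ingressgateway HPA/label names are assumptions based on a default install):

```shell
# Per-pod CPU usage for the ingress gateway (requires metrics-server)
kubectl top pods -n istio-system -l app=istio-ingressgateway

# Current HPA state: observed replica count vs. the configured max of 5
kubectl get hpa istio-ingressgateway -n istio-system
```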
Following advice from a post on discuss.istio.io, I downgraded the istio-proxyV2 image to 1.2.8 and saw CPU utilization drop back to the norm: a single pod using less than 100m CPU.
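For reference, the downgrade described above can be expressed as a Helm values override when rendering the charts. This is a sketch assuming the standard istio chart value keys (global.proxy.image and global.tag); verify them against your chart version:

```yaml
# values override pinning the proxy image back to 1.2.8
# (assumed standard istio Helm chart keys)
global:
  proxy:
    image: proxyv2   # the image run by istio-proxy containers and gateways
  tag: 1.2.8         # pins the proxy image tag to the older release
```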
I don't want to have to keep an old version of this image through future upgrades. Is there any explanation for the increase?
Expected behavior
CPU utilization remains the same or only slightly increases.
Steps to reproduce the bug
Upgrade from 1.2.* to 1.3.*
Version (include the output of istioctl version --remote and kubectl version)
istioctl version --remote
kubectl version
How was Istio installed?
Helm (template)
Environment where bug was observed (cloud vendor, OS, etc)
Azure AKS