InPlacePodVerticalScaling does not meet the requirement of qosClass being equal to Guaranteed after shrinking the memory #124786
Comments
/sig node
I can't reproduce this issue on the same 1.29 version. My pod.yaml:
Then patch the pod:
I got this error:
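(The manifest, patch command, and error output from this comment were not preserved in this thread. A hypothetical sketch of this kind of reproduction attempt, with illustrative names and sizes, might look like the following; the CPU patch is an assumption, since the follow-up comment suggests testing with memory instead:)

```shell
# Hypothetical sketch only; pod name, image, and sizes are illustrative,
# not the commenter's original manifest.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        cpu: 500m
        memory: 128Mi
      limits:
        cpu: 500m       # requests == limits -> Guaranteed QoS
        memory: 128Mi
EOF

# On 1.29, an in-place resize is a direct patch of spec.containers[].resources
# (requires the InPlacePodVerticalScaling feature gate on the cluster).
kubectl patch pod resize-demo --patch \
  '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"250m"},"limits":{"cpu":"250m"}}}]}}'
```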
You can test with the memory resource instead.
Could you please provide your patch command?
/cc @esotsal
I reproduced this issue on v1.30.
Then, I noticed the following:
Though I'm not familiar with this feature, I guess the runtime is still trying to resize the pod. This issue seems to be caused by the updated value being too small to be practical. After this situation, I patched another update with a practical value:
Then, the pod was resized with the later patch:
This issue can be resolved by another patch, so I don't think it causes a big problem.
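(The exact patches and status output in this comment were not preserved. Continuing the illustrative pod sketched above, a follow-up patch with a practical value could look like this:)

```shell
# Hypothetical follow-up; a memory value comfortably above the container's
# actual usage lets the stuck resize complete.
kubectl patch pod resize-demo --patch \
  '{"spec":{"containers":[{"name":"app","resources":{"requests":{"memory":"64Mi"},"limits":{"memory":"64Mi"}}}]}}'

# Confirm the pod kept its Guaranteed QoS class.
kubectl get pod resize-demo -o jsonpath='{.status.qosClass}{"\n"}'
```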
Nice catch. I also tried with the latest K8s; interestingly, anything lower than 14Mi failed in my tests as well. I'm not sure whether the bug is in K8s or outside K8s (the container runtime). Definitely worth checking more deeply. Thanks for sharing, @hshiina; it seems there is a bug somewhere.
As I put in a comment here, the resize failed in the kubelet:
/triage accepted
/cc @tallclair @vinaykul Perhaps the root cause is an attempt to resize the Pod's memory to a value lower than the currently allocated memory? Perhaps InPlacePodVerticalScaling should handle this corner case either proactively or after the fact? Such an attempt would fail in the Linux kernel anyhow, wouldn't it?
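(For reference on the kernel question above: assuming cgroup v1 memory accounting, the kernel does reject shrinking a memory limit below current usage once reclaim fails. A rough sketch outside Kubernetes, with illustrative paths and sizes:)

```shell
# Rough sketch, assuming cgroup v1; paths and sizes are illustrative.
# With cgroup v2, writing memory.max below usage instead triggers
# reclaim and may OOM-kill rather than failing the write.
mkdir /sys/fs/cgroup/memory/demo
echo $$ > /sys/fs/cgroup/memory/demo/cgroup.procs
# ... allocate ~100MiB from this shell's process tree ...
echo $((50 * 1024 * 1024)) > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
# The write fails with EBUSY ("Device or resource busy") if usage
# cannot be reclaimed below the new limit.
```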
What happened?
After shrinking the memory, InPlacePodVerticalScaling does not maintain the requirement that the Pod's qosClass remain Guaranteed.
What did you expect to happen?
InPlacePodVerticalScaling should maintain the same qosClass for the Pod before and after scaling.
How can we reproduce it (as minimally and precisely as possible)?
After enabling the InPlacePodVerticalScaling feature gate, patch the container's resource requests and limits to a value smaller than its current usage.
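A minimal sketch of such a patch (the pod name, container name, and sizes are placeholders; the key point is that the new memory limit is below the container's current usage):

```shell
# Illustrative only: shrink a Guaranteed pod's memory below its current usage.
kubectl patch pod <pod-name> --patch \
  '{"spec":{"containers":[{"name":"<container-name>","resources":{"requests":{"memory":"10Mi"},"limits":{"memory":"10Mi"}}}]}}'
```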
Anything else we need to know?
No response
Kubernetes version
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"3ddd0f45aa91e2f30c70734b175631bec5b5825a", GitTreeState:"clean", BuildDate:"2022-05-24T12:17:11Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"29", GitVersion:"v1.29.2", GitCommit:"4b8e819355d791d96b7e9d9efe4cbafae2311c88", GitTreeState:"clean", BuildDate:"2024-02-14T22:24:00Z", GoVersion:"go1.21.7", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)