feat(env): Add automatic memory limit handling #650
base: main
Conversation
cc @EItanya PTAL
EItanya left a comment
So I've been thinking about this PR and your previous one a little more, and I think they may be overkill, to be honest. I've used the following approach before, which is very simple and has worked well:
- name: GOMEMLIMIT
  valueFrom:
    resourceFieldRef:
      resource: limits.memory
      divisor: "1"
- name: GOMAXPROCS
  valueFrom:
    resourceFieldRef:
      resource: limits.cpu
      divisor: "1"
Here it is in Istio: https://github.com/istio/istio/blob/b6df261b7c9f66c10b0d17b02f9c9b9ebb546033/manifests/charts/istio-control/istio-discovery/templates/deployment.yaml#L209-L218
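For reference, here is a minimal Go sketch (not part of this PR) that prints the values the runtime derived from those env vars; it can help confirm the Downward API injection is working:

package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

func main() {
	// debug.SetMemoryLimit with a negative argument reports the current soft
	// memory limit without changing it; the runtime initializes it from GOMEMLIMIT.
	fmt.Printf("GOMEMLIMIT (bytes): %d\n", debug.SetMemoryLimit(-1))
	// runtime.GOMAXPROCS(0) likewise reports the current value without changing it.
	fmt.Printf("GOMAXPROCS: %d\n", runtime.GOMAXPROCS(0))
}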
Thanks @EItanya, this method is also effective. However, there is a drawback: when a VPA change is applied via in-place pod resize, the pod is not recreated, so the Downward API env value (and therefore GOMEMLIMIT) no longer matches the new limit.
dongjiang force-pushed the branch: ab47997 → 95cd178, 3539316 → 8d47371, and 8d47371 → d2b3a4c (Signed-off-by: dongjiang <dongjiang1989@126.com>)
I don't think that's true. A VPA change triggers a new pod, which will have a freshly computed value. I agree with @EItanya on the approach.
It might be that my description wasn't clear enough. I mean a VPA change applied via in-place pod resize: the pod is resized without being restarted, so the env var keeps its original value.
You are referring to https://kubernetes.io/blog/2025/05/16/kubernetes-v1-33-in-place-pod-resize-beta/ yeah? Given we are only calling SetMemLimit, if the memory limit actually does change during the pod lifetime, this change won't handle it either...?
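For illustration only, here is a sketch (my assumption, not this PR's code) of how a process could follow in-place resizes by periodically re-reading the cgroup v2 limit and updating the Go soft memory limit; readCgroupMemLimit, the polling interval, and the 10% headroom ratio are hypothetical choices:

package main

import (
	"os"
	"runtime/debug"
	"strconv"
	"strings"
	"time"
)

// readCgroupMemLimit is a hypothetical helper that reads the cgroup v2
// memory.max file; it returns 0 when the limit is "max" (unlimited) or unreadable.
func readCgroupMemLimit() int64 {
	data, err := os.ReadFile("/sys/fs/cgroup/memory.max")
	if err != nil {
		return 0
	}
	s := strings.TrimSpace(string(data))
	if s == "max" {
		return 0
	}
	v, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0
	}
	return v
}

// watchMemLimit re-reads the cgroup limit on an interval and updates the Go
// soft memory limit when it changes, so an in-place pod resize is eventually picked up.
func watchMemLimit(interval time.Duration) {
	var last int64
	for {
		if limit := readCgroupMemLimit(); limit > 0 && limit != last {
			// Leave ~10% headroom for non-heap memory.
			debug.SetMemoryLimit(limit * 9 / 10)
			last = limit
		}
		time.Sleep(interval)
	}
}

func main() {
	go watchMemLimit(30 * time.Second)
	select {} // block forever in this sketch
}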
add auto GOMEMLIMIT setting
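For context, automatic GOMEMLIMIT setting is often done with the third-party github.com/KimMachineGun/automemlimit library; the sketch below assumes that library's SetGoMemLimitWithOpts API and is not necessarily what this PR implements:

package main

import (
	"log"

	"github.com/KimMachineGun/automemlimit/memlimit"
)

func main() {
	// Derive the limit from the container's cgroup and leave 10% headroom
	// for non-heap memory (the 0.9 ratio is an assumed choice here).
	limit, err := memlimit.SetGoMemLimitWithOpts(
		memlimit.WithRatio(0.9),
		memlimit.WithProvider(memlimit.FromCgroup),
	)
	if err != nil {
		log.Printf("could not set GOMEMLIMIT automatically: %v", err)
		return
	}
	log.Printf("GOMEMLIMIT set to %d bytes", limit)
}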