VPA recommends 100T for memory and 100G for CPU #5569
Comments
Hey @yogeek, thanks for raising this issue! To me, this looks like a UX problem with goldilocks, not so much an issue with VPA itself. Let me try to elaborate why:

- The VPA recommendation is what's presented in the
- The
- The statistical model used in the VPA
- What goldilocks shows you here are

PS: The goldilocks-controller is deployed with

Does this make sense?

/remove-kind bug
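To illustrate the "statistical model" point above: the VPA recommender keeps decaying histograms of usage samples, in which older samples are weighted down exponentially. A minimal sketch of that weighting; the 24h half-life matches my reading of the VPA defaults, but treat the constant as an assumption:

```python
HALF_LIFE_HOURS = 24.0  # assumption: VPA's default histogram decay half-life

def sample_weight(age_hours: float) -> float:
    """Weight of a usage sample in a decaying histogram (illustrative sketch).

    A sample one half-life old contributes half as much as a fresh one,
    so recent usage dominates the recommendation.
    """
    return 0.5 ** (age_hours / HALF_LIFE_HOURS)

sample_weight(0.0)     # 1.0
sample_weight(24.0)    # 0.5
sample_weight(8 * 24)  # 0.00390625 -- after ~8 days a sample is nearly forgotten
```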
Thanks for the detailed info here. I actually learned a thing or two about the VPA, which is rather helpful. Just wanted to mention that this PR changes the behavior of the equals sign in Goldilocks. We also have some planned changes to improve usability. To me, it seems the original issue here is related to your statement:
I have definitely seen the VPA produce extremely large values in its initial recommendation when first setting up a demo. I don't think that's a bug, just something to be aware of about initial recommendations. I will keep the 8-day recommendation in mind when talking about this in the future. TL;DR: thank you for the details, and I 100% agree with your statements :-D
@voelzmo thanks for the detailed explanation 👍 Indeed, your assumption is correct: this situation happened very shortly after deploying my workload. I have notified the goldilocks team about your feedback in the Slack channel. I guess this issue can be closed, and maybe a warning could be added to the goldilocks documentation so users are aware of this behavior with initial recommendations.
Slack discussion: https://fairwindscommunity.slack.com/archives/CV0AU2CTS/p1678110018503029
GitHub related issue in VPA: kubernetes/autoscaler#5569
Hey @sudermanjr and @yogeek, great to see this has been useful for you! One more aside: if you want to go into the nitty-gritty details of the VPA algorithm and how the boundaries evolve with more data, we can take a look at the code documentation for that part:
A caveat: when I mentioned the 8 days above, I was referring to the memory histogram's
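To make the "boundaries evolve with more data" point concrete: in my reading of the VPA recommender code, the upper bound is inflated by a factor of (1 + multiplier/confidence)^exponent, where confidence grows with the amount of history seen. A minimal sketch of that idea; the constants and the days-based confidence measure are assumptions for illustration, not the exact VPA implementation:

```python
def scaled_upper_bound(estimate: float, history_days: float,
                       multiplier: float = 1.0, exponent: float = 1.0) -> float:
    """Sketch of a confidence-based bound multiplier (illustrative only).

    The raw estimate is inflated by (1 + multiplier/confidence)^exponent.
    With almost no history, confidence is tiny and the bound explodes,
    which is how minutes-old workloads can show absurd upper bounds.
    """
    confidence = history_days
    return estimate * (1.0 + multiplier / confidence) ** exponent

scaled_upper_bound(1.0, 8.0)             # 1.125 -- modest inflation after 8 days
scaled_upper_bound(1.0, 10 / (60 * 24))  # 145.0 -- 10 minutes of history: huge bound
```

This is consistent with the behavior reported in this issue: the 100T/100G values appeared right after deployment, when the recommender had almost no samples to build confidence from.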
Which component are you using?:
vertical-pod-autoscaler
What version of the component are you using?:
k8s.gcr.io/autoscaling/vpa-recommender:0.11.0
What k8s version are you using (kubectl version)?:
What environment is this in?:
VPA deployed as a subchart of goldilocks in a K8S clusters in AWS (not an EKS, deployed with kubeadm on EC2 instances)
The issue seems to appear if Prometheus is used for recommender history via the extraArgs field (cf. values.yaml below)
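The referenced values.yaml was not captured in this excerpt. As a hedged sketch only: enabling Prometheus-backed history usually means passing the recommender's storage flags via the chart's extraArgs. The chart key layout below is an assumption; `storage`, `prometheus-address`, and `history-length` are real vpa-recommender flags:

```yaml
# Hypothetical excerpt of a goldilocks values.yaml -- the chart key names
# are assumptions; the flag names are vpa-recommender flags.
vpa:
  recommender:
    extraArgs:
      storage: prometheus
      prometheus-address: http://prometheus-server.monitoring.svc:9090
      history-length: 8d  # how much Prometheus history to load at startup
```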
What did you expect to happen?:
I would expect coherent values in recommendations
What happened instead?:
Incoherent values are shown in the recommendations, such as 100T for the memory limit or 100G for the CPU limit.
How to reproduce it (as minimally and precisely as possible):
Deploy goldilocks with VPA using the commands below:
Additional information
I found a similar issue here but no more information: https://bugzilla.redhat.com/show_bug.cgi?id=1935794