Missing status subresource when using custom VPA recommender with GKE native VPA setup #6828
I am well aware that the community here is not responsible for GKE's implementation. I am asking for either guidance, …
I think the VPA CRD deployed by GKE is just an older version.
Right, the …
For posterity, I have confirmed that any upgrade of the GKE cluster reverts the CRD to its original form.
I can confirm that I got a working custom recommender using GKE's VPA by doing the following: …
I am going to close this issue, as I don't see anything that can be done on the VPA project's side.
@FrancoisPoinsot I know this is closed already, but just as an FYI: I experienced this same issue on AKS (both 1.27 and 1.29). The fix you proposed, using the 0.14.0 image rather than 1.0.0, worked for me too.
I am currently talking with GCP to see if the CRD deployed there can be upgraded. I'm fairly sure that if it happens, it will not be only for my clusters, but for everyone. I hadn't faced that issue in Azure, because I am not relying on the native VPA feature there. Also: the credit for the fix goes to @voelzmo.
For GKE, here is the public issue tracker that got created as a result: https://issuetracker.google.com/issues/345166946
Which component are you using?:
vertical-pod-autoscaler
What version of the component are you using?:
Component version: 1.1.1
What k8s version are you using (`kubectl version`)?:
What environment is this in?:
GKE
What did you expect to happen?:
I expected I could deploy an instance of a custom recommender and that it would interface nicely with everything else that GKE deploys natively for the VPA.
What happened instead?:
When deploying a custom recommender in GKE with GKE's VPA enabled, the custom recommender failed when attempting to update the `status` subresource.
How to reproduce it (as minimally and precisely as possible):
Deploy a custom recommender with a dedicated `--recommender-name`, along with its service account and permissions. I used cowboysysops's Helm chart as a base.
Anything else we need to know?:
I found that there is a very small difference between the VerticalPodAutoscaler CRD that GKE deploys and the one available in this repo.
GKE's:
`spec.subresources: {}`
vs. this repo's:
`spec.subresources.status: {}`
And indeed, editing the CRD to add the `status` field under `subresources` solves the problem. But here is the issue:
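To make the difference concrete, here is an illustrative CRD excerpt showing the two variants side by side. The field path follows the layout quoted above; note that on an `apiextensions.k8s.io/v1` CRD, `subresources` sits under each entry of `spec.versions` instead of directly under `spec`:

```yaml
# GKE's deployed CRD: no status subresource is declared
spec:
  subresources: {}
---
# Upstream CRD: the status subresource is enabled, which exposes
# the /status endpoint that the recommender updates
spec:
  subresources:
    status: {}
```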
Editing the CRD deployed by GKE sounds unreliable to me, as there is a risk it will be reverted later.
Am I missing some simpler way to deploy a custom recommender in GKE?
Or is there a more reliable way to update the CRD, one with no risk of it being reverted?
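As a sketch of the workaround (not an endorsed procedure), the missing field can be re-added with `kubectl patch --type=merge` using a merge-patch document like the one built below. The snippet only constructs and prints that patch; the CRD name in the comment and the field path are assumptions based on the layout quoted above:

```python
import json

# Merge patch that re-adds the status subresource to the VPA CRD.
# The field path assumes the v1beta1-style layout quoted in this issue;
# on an apiextensions.k8s.io/v1 CRD the path would be per-version
# (spec.versions[i].subresources.status).
patch = {"spec": {"subresources": {"status": {}}}}

# Could then be applied with, e.g. (CRD name is an assumption):
#   kubectl patch crd verticalpodautoscalers.autoscaling.k8s.io \
#     --type=merge -p "$(python3 build_patch.py)"
print(json.dumps(patch))
```

Keep in mind the caveat above: GKE may revert this patch on any cluster upgrade, so it would need to be reapplied or automated.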
It doesn't seem obvious to me why this CRD difference causes the issue, though. When using GKE's VPA, a `status` is eventually set on each VPA object, so declaring `status` in the subresources shouldn't be mandatory.