Which component are you using?:
vertical-pod-autoscaler
What version of the component are you using?:
Component version: 1.0.0
What k8s version are you using (kubectl version)?:
kubectl version Output
$ kubectl version
Client Version: v1.30.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.2
What environment is this in?:
GCP, kind, probably All
What did you expect to happen?:
When changing the name of a container in a deployment managed by a VPA, I expected the VPA recommender to react as though the container had been replaced rather than as though a new container had been added; after the rename (from sleeper-a to sleeper-b) I'd expect the recommendation for sleeper-a to disappear and sleeper-b to get the full, unsplit recommendation, e.g.:
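A minimal sketch of what I mean, assuming a single-container deployment and a VPA both named sleeper, idle enough that only the default minimum recommendation applies (values are illustrative, not copied from a cluster; other recommendation fields trimmed):

```yaml
# Before the rename: one recommendation, for sleeper-a only.
status:
  recommendation:
    containerRecommendations:
    - containerName: sleeper-a
      lowerBound:
        cpu: 25m
        memory: 262144k
---
# After the rename: I'd expect only sleeper-b, with the bounds intact.
status:
  recommendation:
    containerRecommendations:
    - containerName: sleeper-b
      lowerBound:
        cpu: 25m
        memory: 262144k
```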
What happened instead?:
The VPA reacts as though a container has been added to the deployment, splits the resources accordingly, keeps everything (aggregates, checkpoints, recommendations) for the old container, and never cleans them up.
The default lowerBound CPU of cpu: 25m has been spread across the containers, with each of them getting cpu: 12m, even though only one of them (sleeper-b) actually exists anymore.
This is especially problematic for containers whose resources drop low enough as a result of this split that they start failing health checks.
Anything else we need to know?:
If you change the container name again, you get a 3rd one, a 4th one, etc., and the resources get smaller each time.
The old container's state (recommendations, aggregates, checkpoints) doesn't seem to ever get cleaned up, and the recommender keeps maintaining the checkpoints (if you delete one, it comes back).
I suspect this is due to a confluence of issues around our handling of containers (a simplified sketch of the resulting split follows this list):
Aggregates don't get pruned when containers are removed from a VPA's pods if the VPA and targetRef are otherwise intact
Len() over the aggregates during a rollout in which a container name has been changed or removed will be wrong (there will legitimately be 2 tracked containers, but each pod will only have one of them, so the resources should not be split)
Checkpoints don't get pruned when containers are removed from a VPA's pods if the VPA and targetRef are otherwise intact, and they can get loaded back in if the recommender restarts
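To make the suspected splitting behavior concrete, here is a minimal, self-contained Go sketch. It is not the recommender's actual code; the 25m default and the variable names are stand-ins. It only shows why a per-pod minimum divided across every tracked aggregate lands at ~12m per container once a stale aggregate lingers:

```go
package main

import "fmt"

func main() {
	// Hypothetical per-pod minimum CPU, in millicores.
	podMinCPUMillicores := 25.0

	// Aggregates still tracked for the VPA after the rename: both the old
	// and the new container name, even though every pod only runs sleeper-b.
	aggregates := []string{"sleeper-a", "sleeper-b"}

	// The minimum is split evenly across all tracked aggregates.
	fraction := 1.0 / float64(len(aggregates))
	perContainerMin := podMinCPUMillicores * fraction

	for _, name := range aggregates {
		fmt.Printf("%s: lowerBound cpu ~ %.0fm\n", name, perContainerMin)
	}
	// Each container ends up with ~12m instead of 25m; every further rename
	// adds another aggregate and shrinks the share again.
}
```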
A workaround (to prevent the "too small resources after split" problem, at least) could be a minimum resource policy, e.g.:
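Something along these lines in the VPA spec (the values here are just the defaults I'd want to pin; adjust to the workload):

```yaml
spec:
  resourcePolicy:
    containerPolicies:
    - containerName: "*"
      minAllowed:
        cpu: 25m
        memory: 250Mi
```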
How to reproduce it (as minimally and precisely as possible):
This is just a deployment that sleeps and does nothing, so it gets the default minimum resources.
Deployment + VPA:
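A minimal reconstruction of the kind of manifests used for the repro; the object names and image are assumptions, not the originals:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleeper
  template:
    metadata:
      labels:
        app: sleeper
    spec:
      containers:
      - name: sleeper-a
        image: busybox:1.36
        command: ["sleep", "100000"]
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: sleeper
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sleeper
```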
After applying, wait for it to get a recommendation:
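For example (assuming the VPA from the sketch above is named sleeper):

```console
$ kubectl describe vpa sleeper
$ kubectl get vpa sleeper -o jsonpath='{.status.recommendation}'
```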
Then change the name of the container (here I changed sleeper-a to sleeper-b):
Watch it roll out:
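For example (again using the hypothetical names from the sketch above):

```console
$ kubectl rollout status deployment/sleeper
$ kubectl get pods -l app=sleeper
```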
It eventually finishes:
The VPA still thinks we have two containers:
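Roughly like this (shape only, values rounded; other recommendation fields trimmed):

```yaml
status:
  recommendation:
    containerRecommendations:
    - containerName: sleeper-a   # the old name, never cleaned up
      lowerBound:
        cpu: 12m
        memory: 131072k
    - containerName: sleeper-b
      lowerBound:
        cpu: 12m
        memory: 131072k
```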