Fleet autoscaler should not modify a Fleet's spec #3229
Update: If I understood the Fleet API correctly, it seems that the
The scale subresource has been implemented on `Fleet`. I'm not sure I quite understand your issue though. If you use the scale subresource, that is just an indirect way of changing the replicas count in the spec for the `Fleet`. Further down in the HPA documentation that you linked it says:
So even if the HPA controller is changing the replicas field via the scale subresource, it still manages that field. The scale subresource is just a generic interface to the replicas field, so that the HPA controller doesn't need code added for every new resource type that can be scaled up and down. Also, to your point about conflicting controllers, the HPA documentation recommends removing `spec.replicas` from the manifest of any workload managed by an HPA,
and the same is true for a Fleet. You can prevent your controller and the fleet autoscaler from fighting over the replicas field by not including replicas in your fleet specification, and allowing the fleet autoscaler to set the field on your behalf.
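A sketch of that pattern, assuming a hypothetical fleet named `my-fleet` (the image and port are placeholders; the field layout follows the Agones `Fleet` API):

```yaml
apiVersion: agones.dev/v1
kind: Fleet
metadata:
  name: my-fleet                 # placeholder name
spec:
  # replicas intentionally omitted: the FleetAutoscaler owns this field,
  # so a GitOps tool or other controller applying this manifest
  # will not fight the autoscaler over it
  template:
    spec:
      ports:
      - name: default
        containerPort: 7654      # placeholder port
      template:
        spec:
          containers:
          - name: game-server
            image: example/game-server:latest  # placeholder image
```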
This issue is marked as stale due to inactivity for more than 30 days. To avoid being marked as 'stale', please add the 'awaiting-maintainer' label or add a comment. Thank you for your contributions.
What happened:
I created a `FleetAutoscaler` managing a `Fleet` with a `Buffer` policy. The autoscaler's behavior is to modify the spec of the `Fleet` with an updated replicas count. This is wrong: if another controller is in charge of managing Agones resources, that controller and Agones will fight over the `replicas` field, each updating it because it thinks the other's value is wrong, and cycle forever.

I consider this more a bug than a feature request because it really should not be done this way. But this ticket has a foot in both categories.
What you expected to happen:
The expected behavior is something like the Kubernetes built-in `Deployment` and `HorizontalPodAutoscaler`, where the spec is not modified and the handling is internal instead. So I think you have three ways of handling this issue:

From Kubernetes's documentation about HPAs (link):
How to reproduce it (as minimally and precisely as possible):
Create a dummy `Fleet`. Create a `FleetAutoscaler` with dummy values.

Anything else we need to know?:
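The dummy pair above might be sketched like this, assuming placeholder names and values throughout:

```yaml
apiVersion: agones.dev/v1
kind: Fleet
metadata:
  name: dummy-fleet
spec:
  replicas: 2                    # watch this field: the autoscaler rewrites it
  template:
    spec:
      ports:
      - name: default
        containerPort: 7654      # placeholder port
      template:
        spec:
          containers:
          - name: game-server
            image: example/game-server:latest  # placeholder image
---
apiVersion: autoscaling.agones.dev/v1
kind: FleetAutoscaler
metadata:
  name: dummy-fleet-autoscaler
spec:
  fleetName: dummy-fleet
  policy:
    type: Buffer
    buffer:
      bufferSize: 2
      minReplicas: 0
      maxReplicas: 10
```

Watching `kubectl get fleet dummy-fleet -o jsonpath='{.spec.replicas}'` should then show the reported behavior: the `FleetAutoscaler` rewriting `spec.replicas` on the `Fleet` itself.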
Environment:
Kubernetes version (use `kubectl version`): 1.27