Allow horizontally scaling statefulsets #29
I assume this would require a newer version of the client library too.
What would you be scaling based on?
Would it be safe to just remove some instances? Would it be safe to add new ones?
We'd likely be scaling on the number of nodes, but perhaps, if #19 is completed, also on the size of the nodes. I see it that you have to know in advance that you want to scale a stateful set, so it's up to you to know whether it is safe to do so and whether a min/max setting is required. I haven't looked too deeply at the code yet to see if it would handle not being able to scale down due to a node being unavailable.
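For reference, node-count-based scaling with min/max bounds is what the autoscaler's documented linear mode already expresses for Deployments, so presumably the same ConfigMap shape would carry over to StatefulSets. A minimal sketch; the name, namespace, and numbers are illustrative:

```yaml
# Sketch of the autoscaler's "linear" mode: one replica per 8 schedulable
# nodes, clamped to [1, 10]. Name, namespace, and numbers are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-autoscaler   # hypothetical name
  namespace: monitoring
data:
  linear: |-
    {
      "nodesPerReplica": 8,
      "min": 1,
      "max": 10,
      "preventSinglePointFailure": true
    }
```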
What would be the main use-case for this?
I said in the initial post :) We'd like to have a common set of Kubernetes manifest files that set up a base cluster and any necessary services for dev, test, and prod. Currently we struggle to run this on very small clusters/minikube, because on larger clusters we want more instances of prometheus/etcd/docker registry mirrors/etc.
Right, must have missed that. Sorry :).
@stuart-warren The autoscaler only takes available nodes into account. It is totally true that it's up to the users to know whether it is safe to scale a statefulset, though I'm not sure how we could make our generic controllers application-aware.
Related issue in kubernetes/kubernetes: kubernetes/kubernetes#44033
@gyliu513 For most stateful applications (zookeeper, mysql, etc.), the scale needs deliberate thought and is likely not something we want to vary with the size of the cluster. @stuart-warren, for the docker registry mirror pods, why does that use-case require a StatefulSet? /cc @kow3ns
Hi folks, reading the docs, node autoscaling says it won't scale down a node if there is a pod not backed by a ReplicationController. Similarly, looking at the API docs, it looks like the horizontal pod autoscaler only supports pods backed by a ReplicationController right now. So, a tangential question: are there any plans to back StatefulSets with a ReplicationController? Right now I don't see any ReplicaSet behind a StatefulSet. Having the pod ID and ordering guarantees (reduced metrics explosion, better kubectl dev UX, etc.) while still getting the same functionality as Deployments would be cool 👍
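Worth noting for anyone landing here later: apps/v1 StatefulSet exposes the /scale subresource, so the horizontal pod autoscaler can target one directly, with no ReplicaSet behind it. A minimal sketch, with illustrative names and thresholds:

```yaml
# Sketch: HPA targeting a StatefulSet through its /scale subresource.
# Names and thresholds are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: web
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```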
@foxish Technically yes, this app doesn't need to be a statefulset, but we'd still like to be able to control the size of a zookeeper/cassandra cluster depending on the number of nodes in a cluster. Ideally we'd have "everything" run in minikube with reduced resource requests and single instances, and in a massive production cluster with many instances and increased resource requests.
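If this were implemented, the user-facing change could be as small as accepting one more --target kind. A hypothetical pod spec fragment: statefulset/zookeeper is not an accepted target today (the flag currently takes deployment/*, replicationcontroller/*, or replicaset/*), and the image tag is illustrative:

```yaml
# Hypothetical: cluster-proportional-autoscaler pointed at a StatefulSet.
# The statefulset/ target kind does not exist yet; the rest follows the
# project's usual example manifests.
containers:
- name: autoscaler
  image: registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.9  # illustrative tag
  command:
  - /cluster-proportional-autoscaler
  - --namespace=default
  - --configmap=zookeeper-autoscaler
  - --target=statefulset/zookeeper   # hypothetical target kind
```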
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle rotten |
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
@fejta-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Hi, I would also like to see this functionality implemented. Looking at the source code the change seems super simple and straightforward (but I may be terribly wrong). The use case is related to the unique ability of a statefulset to use
Thanks!
/remove-lifecycle rotten |
/reopen |
@salavessa: You can't reopen an issue/PR unless you authored it or you are a collaborator.
A use case for this feature: kube-state-metrics has automated horizontal sharding (https://github.com/kubernetes/kube-state-metrics#horizontal-scaling-sharding) which is based on a statefulset without PVCs. A good metric for scaling the statefulset, recommended by the community, is latency, but that requires HPA with custom metrics (which needs some extra work to fit our architecture). Another, approximate, way would be to scale the statefulset based on the number of nodes (I may be wrong in this scenario). My point is that statefulset PVs can be left to the users to handle, and some statefulsets don't use PVs at all (as in this case), so it should be safe to use this feature in those cases.
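For context, kube-state-metrics' automated sharding (per its README) has each pod derive its shard index from its StatefulSet ordinal via the downward API, which is why plain replica-count scaling is all it needs. A condensed sketch; the image tag and labels are illustrative:

```yaml
# Condensed sketch of kube-state-metrics automated sharding: each pod
# reads its own name/namespace and derives its shard from the ordinal.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kube-state-metrics
spec:
  replicas: 2              # scaling this is exactly what this issue asks for
  serviceName: kube-state-metrics
  selector:
    matchLabels:
      app: kube-state-metrics
  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      containers:
      - name: kube-state-metrics
        image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.10.0  # illustrative
        args:
        - --pod=$(POD_NAME)
        - --pod-namespace=$(POD_NAMESPACE)
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
```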
/reopen
This is a really obvious feature to add IMO... Would really like to see it worked on.
@diranged: You can't reopen an issue/PR unless you authored it or you are a collaborator.
Hi,
We'd like to add the ability to proportionally scale stateful sets. Is there a particular reason this is a really bad idea?
Our use-case is for docker registry mirror pods and prometheus instances, where we use a shared git repo of Kubernetes manifests. On minikube/tiny clusters, we run out of the resources that we would have in production.
Might need to be a little more careful, but it should be doable: https://kubernetes.io/docs/tasks/run-application/scale-stateful-set
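For completeness, the linked page amounts to ordinary scaling operations; a sketch, assuming a StatefulSet named web:

```sh
# Imperative scale (the StatefulSet name "web" is an example):
kubectl scale statefulset web --replicas=5

# Equivalent patch of spec.replicas:
kubectl patch statefulset web -p '{"spec":{"replicas":5}}'
```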