
Bring vertical autoscaling feature into current autoscaler. #19

Closed · MrHohn opened this issue Jan 5, 2017 · 7 comments
Labels
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@MrHohn (Member) commented Jan 5, 2017

Although the original purpose of this repo is to provide a method for horizontal autoscaling, it would be great if we also brought in a vertical autoscaling feature, as it could be done in the same pattern. The vertical autoscaling feature could be modeled on the Addon Resizer project, which essentially monitors cluster status and modifies the scaled resources (like the CPU and memory requests in Deployment specs) as needed.
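
For concreteness, here is a minimal sketch of such a resize loop, written against today's client-go rather than the Addon Resizer's actual code; the kube-dns target and the per-node increments are illustrative assumptions, not settled numbers:

```go
// Sketch of an Addon-Resizer-style vertical scaling loop: poll the node
// count and patch a Deployment's container resource requests to match.
// The target (kube-dns) and the sizing rule are hypothetical examples.
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	for {
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err == nil {
			n := int64(len(nodes.Items))
			// Hypothetical sizing rule: 10m CPU and 20Mi memory per node.
			cpu := resource.NewMilliQuantity(10*n, resource.DecimalSI)
			mem := resource.NewQuantity(20*n<<20, resource.BinarySI)
			patch := fmt.Sprintf(
				`{"spec":{"template":{"spec":{"containers":[{"name":"kube-dns","resources":{"requests":{"cpu":%q,"memory":%q}}}]}}}}`,
				cpu.String(), mem.String())
			_, _ = client.AppsV1().Deployments("kube-system").Patch(
				context.TODO(), "kube-dns", types.StrategicMergePatchType,
				[]byte(patch), metav1.PatchOptions{})
		}
		time.Sleep(time.Minute)
	}
}
```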

One thought is that we could implement vertical autoscaling controllers just like the horizontal ones. We could also restructure the code to allow running multiple controllers simultaneously (one scenario would be running one horizontal controller and one vertical controller together). kube-dns would be a real use case: we may need to bump up its resource requests when the cluster grows big enough, while also scaling it horizontally in the meantime.
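
Structurally, running the two kinds side by side could be as simple as one goroutine per controller sharing a stop channel; the `Controller` interface below is hypothetical, just to show the shape:

```go
// Sketch of running multiple controllers (e.g. one horizontal, one
// vertical) simultaneously. The Controller interface is hypothetical.
package autoscaler

import "sync"

// Controller is a placeholder for anything with a blocking run loop.
type Controller interface {
	Run(stopCh <-chan struct{})
}

// RunAll starts every controller in its own goroutine and blocks until
// all of them return after stopCh is closed.
func RunAll(controllers []Controller, stopCh <-chan struct{}) {
	var wg sync.WaitGroup
	for _, c := range controllers {
		wg.Add(1)
		go func(c Controller) {
			defer wg.Done()
			c.Run(stopCh)
		}(c)
	}
	wg.Wait()
}
```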

Another thought is that we may need to collect different information as the cluster status (rather than only the number of nodes and cores) as different types of controllers come in. #10 may be a feasible starting point.
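
One way to frame that is to widen the cluster-status type so new controllers can ask for extra signals without touching the existing ones; the field set below is illustrative, not the repo's exact definition:

```go
// Sketch of a generalized cluster status; fields are illustrative.
package autoscaler

// ClusterStatus is roughly what the horizontal controllers consume
// today: node and core counts.
type ClusterStatus struct {
	SchedulableNodes int32
	SchedulableCores int32
}

// ExtendedClusterStatus is a hypothetical superset that vertical
// controllers could consume without changing the horizontal ones.
type ExtendedClusterStatus struct {
	ClusterStatus
	AllocatableMemoryBytes int64 // sum of node-allocatable memory
	SchedulablePods        int32 // example signal a new controller might need
}
```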

cc @bowei

@MrHohn added the help wanted and kind/feature labels on Jan 5, 2017
@Q-Lee commented Feb 1, 2017

There are at least two useful improvements that have been made to this horizontal autoscaler (scaling on the number of cores, and a config file) that could be reused for the vertical autoscaler. The code for the autoscaler and the rightsizer shouldn't need to be very different, and they could live together.
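
For reference, the config-file mechanism is a ConfigMap whose `linear` parameters look like the block below; a vertical mode could plausibly read a sibling key in the same ConfigMap. The `linear-vertical` entry is purely hypothetical:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: dns-autoscaler
  namespace: kube-system
data:
  # Parameters for the existing horizontal (linear) mode.
  linear: |-
    {"coresPerReplica": 256, "nodesPerReplica": 16, "min": 1, "max": 100}
  # Hypothetical sibling entry a vertical controller could read.
  linear-vertical: |-
    {"cpuPerNode": "10m", "memoryPerNode": "20Mi", "baseCPU": "100m", "baseMemory": "70Mi"}
```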

@davidopp commented Feb 1, 2017

cc/ @mwielgus

@caseydavenport commented

Another potential use case for this is Calico, which currently uses some hard-coded scripting to ensure resources are set properly.

Calico would want to make use of both the horizontal and vertical scaling features simultaneously.

@fejta-bot commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 21, 2019
@fejta-bot commented

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on May 21, 2019
@fejta-bot commented

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor) commented

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

sftim pushed a commit to sftim/cluster-proportional-autoscaler that referenced this issue Feb 27, 2023
closes kubernetes-sigs#19

Signed-off-by: Dylan Page <genpage@pagefortress.com>