Vertical Scaling of Pods #21
Comments
@MikeSpreitzer Did you reach any kind of consensus within Node SIG about how to solve this issue? Do you have a group of people who are ready to start coding up that agreed-upon thing? If not, it might be a bit early to open an issue.
I am new here and was told this has been a long-standing desire, some work has already been accomplished, and some planning has already been done. I asked how to pull all the existing thinking together and organize to get onto a path to making it happen, and was told to start here.
Intel has made a proposal along these lines; see kubernetes/kubernetes#28316. Also see kubernetes/kubernetes#10782. cc @balajismaniam @ConnorDoyle @fgrzadkowski @mwielgus @piosz @jszczepkowski @gmarek @wojtek-t
Doesn't appear to have had any traction in 1.4, so pushing to 1.5 -- chime in if that's incorrect.
Hi @MikeSpreitzer, we (Intel) would like to help out with this in the 1.5 timeline. Can we start by listing the goals/requirements as currently understood, maybe in a shared doc? This feature is quite large, and previous discussions suggest breaking it down into phases. It seems like some dependencies can be broken off and parallelized, for example enabling in-place updates for compressible resources (see the sketch below).
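For background on that example: a compressible resource like CPU can be tightened or relaxed on a running container by rewriting its cgroup limits, with no restart, whereas shrinking memory below current usage can kill the workload. A minimal sketch of the in-place CPU case in Go, assuming the cgroup v1 CPU controller and a hypothetical cgroup path:

```go
// Sketch only: adjust a running container's CPU limit in place by
// rewriting its cgroup v1 CFS quota. The cgroup path is hypothetical;
// a real kubelet would derive it from the pod and container IDs.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// setCPULimit caps the container at `cores` CPUs by setting
// cpu.cfs_quota_us relative to the default 100ms CFS period.
func setCPULimit(cgroupPath string, cores float64) error {
	const periodUS = 100000 // default CFS period: 100ms
	quotaUS := int64(cores * periodUS)
	quotaFile := filepath.Join(cgroupPath, "cpu.cfs_quota_us")
	return os.WriteFile(quotaFile, []byte(fmt.Sprintf("%d", quotaUS)), 0o644)
}

func main() {
	// Throttle the container to 1.5 cores; the workload keeps running.
	if err := setCPULimit("/sys/fs/cgroup/cpu/kubepods/pod1234/abcd", 1.5); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

Memory, by contrast, is incompressible: lowering its limit below current usage risks OOM-killing the container, which is why the compressible case is the natural first phase.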
Clearly this is not going to land in 1.4. Yes, let's start by breaking this big thing down into phases and pieces. Would someone with more background on this like to take a whack at that?
A design doc would be a good start, but even before that, we need some open-ended discussion of what the goals and requirements are. Maybe we should discuss in kubernetes/kubernetes#10782? That discussion is more than a year old, but I'd like to avoid opening another issue (and the issues repo is definitely not the right place for design discussions).
@MikeSpreitzer @kubernetes/sig-node can you clarify the actual status of the feature?
@kubernetes/autoscaling
Btw, I think this feature should be discussed and sponsored by sig-autoscaling, not sig-node. Obviously there are a number of features/changes needed at the node level to make this work correctly, but I strongly believe we should keep it within the aforementioned SIG. Any thoughts on that?
@fgrzadkowski, if you as a SIG Autoscaling lead would like to sponsor the feature, I have no objections.
@kubernetes/sig-node @dchen1107, are you going to cooperate with @kubernetes/autoscaling on this feature work, or would you prefer that SIG Autoscaling work on it alone?
@MikeSpreitzer @kubernetes/autoscaling any updates on this feature?
From the autoscaling side, we're blocked on the node changes. From the node side, I think this needs an interface in the CRI for varying resource limits, which we might not see for a while.
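To make that dependency concrete, here is a sketch of the kind of CRI hook being described: a runtime call that applies new limits to a running container. The interface, method, and field names below are illustrative assumptions, not the actual CRI definition of the time:

```go
// Sketch of a hypothetical runtime-facing hook for in-place resource
// updates. Names and fields are assumptions for illustration.
package cri

import "context"

// LinuxContainerResources carries the limits to apply in place.
type LinuxContainerResources struct {
	CPUShares          int64 // relative CPU weight
	CPUQuota           int64 // CFS quota, in microseconds per period
	CPUPeriod          int64 // CFS period, in microseconds
	MemoryLimitInBytes int64 // memory cap, in bytes
}

// RuntimeService is the slice of a container runtime interface this
// issue needs: a way to change resources without restarting the container.
type RuntimeService interface {
	UpdateContainerResources(ctx context.Context, containerID string, resources *LinuxContainerResources) error
}
```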
@MikeSpreitzer Does this feature target alpha for 1.5?
This feature will not land in 1.5. Removing milestone. We will be working on this feature for 1.6. Reassigning to folks who are already working on a design and will pursue implementation in Q1.
IIUC, the prerequisites for at least part of this are (1) API support for updating a pod's resource requirements in place and (2) kubelet support for applying such updates to running containers. I'm not sure what your plan is for (1), but from my last chat with Dawn, (2) wouldn't be feasible to begin implementing before Q2. (It's not a trivial feature.) cc @dchen1107
As explained in kubernetes/kubernetes#10782 (comment):
Hey there! @MikeSpreitzer, I'm the docs wrangler for this release. Is there any chance I could have you open a docs PR against the release-1.12 branch as a placeholder? That gives us more confidence in the feature shipping in this release and gives me something to work with when we start doing reviews/edits. Thanks! If this feature does not require docs, could you please update the features tracking spreadsheet to reflect that?
This is landing around 1.12; however, it launches as an independent add-on and is not included in the 1.12 Kubernetes release itself. SIG Architecture decided at the beginning of this cycle to keep the VPA API as a CRD and thus not bind it to any particular Kubernetes release.
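For illustration, shipping the VPA API as a CRD means clients address VerticalPodAutoscaler objects as custom resources under the addon's own API group, decoupled from core releases. A sketch using client-go's unstructured machinery; the autoscaling.k8s.io group follows the addon's convention, and the exact version string here is an assumption:

```go
// Sketch: constructing a VerticalPodAutoscaler as an unstructured custom
// resource, independent of any compiled-in core API.
package vpa

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// vpaGVR is what a dynamic client would use to address VPA objects.
var vpaGVR = schema.GroupVersionResource{
	Group:    "autoscaling.k8s.io",
	Version:  "v1beta2", // assumed version string, for illustration
	Resource: "verticalpodautoscalers",
}

// newVPA builds a VPA object that targets a Deployment by name.
func newVPA(name, namespace, deployment string) *unstructured.Unstructured {
	return &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "autoscaling.k8s.io/v1beta2",
		"kind":       "VerticalPodAutoscaler",
		"metadata":   map[string]interface{}{"name": name, "namespace": namespace},
		"spec": map[string]interface{}{
			"targetRef": map[string]interface{}{
				"apiVersion": "apps/v1",
				"kind":       "Deployment",
				"name":       deployment,
			},
		},
	}}
}
```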
Thanks for the update!
@justaugustus, so can we continue to use this issue for tracking live, in-place vertical scaling (kubernetes/community#1719), which is what Mike originally created it for, given that VPA is not bound to a particular Kubernetes release?
@karthickrajamani -- yep, it's fine to keep tracking here.
@mwielgus Are we going to have some documentation for this feature before the 1.12 release date? Since it's independent, I'm willing to leave it out of this release's count as long as it gets some attention before it's officially released as an add-on. Does that sound good?
@zparnold There will be no extra documentation to include in 1.12.
That's what I like to hear! :)
Kubernetes 1.13 is going to be a 'stable' release since the cycle is only 10 weeks. We encourage no big alpha features and will only consider adding this feature if you have a high level of confidence it will make code slush by 11/09. Are there plans for this enhancement to graduate to alpha/beta/stable within the 1.13 release cycle? If not, can you please remove it from the 1.12 milestone or add it to 1.13? We are also now encouraging that every new enhancement align with a KEP. If a KEP has been created, please link to it in the original post; if not, please take the opportunity to develop a KEP.
Hi, following up from @claurence. Please take a moment to update the milestones on your original post for future tracking, and ping @kacole2 if this needs to be included in the 1.13 Enhancements Tracking Sheet. Thanks!
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Description
Make it possible to vary the resource limits on a pod over its lifetime. In particular, this is valuable for pets (i.e., pods that are very costly to destroy and re-create).
This was discussed in the Node SIG meeting of 12 July 2016, where it was noted that this is a big cross-cutting issue and that @ncdc might be an appropriate owner.
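To make the goal concrete: the desired end state is that a client can patch a running pod's container resources in place, instead of destroying and re-creating the pod. A sketch of that intended shape using client-go; at the time this issue was filed the API server rejects such updates, so this is illustrative only (the pod name, container name, and kubeconfig path are assumptions):

```go
// Sketch: what an in-place resource update could look like from a client,
// assuming the API allowed mutating pod resources. Expected to be rejected
// while pod resources remain immutable.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Raise the "app" container's CPU limit on a live pod without recreating it.
	patch := []byte(`{"spec":{"containers":[{"name":"app","resources":{"limits":{"cpu":"2"}}}]}}`)
	_, err = client.CoreV1().Pods("default").Patch(
		context.TODO(), "my-pet-pod", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		// Until the feature lands, the API server returns a validation error here.
		fmt.Println("in-place update not supported:", err)
	}
}
```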
Progress Tracker
FEATURE_STATUS is used for feature tracking and is to be updated by @kubernetes/feature-reviewers.
FEATURE_STATUS: IN_DEVELOPMENT
More advice:
Design
Coding
- Code goes into http://github.com/kubernetes/kubernetes, and sometimes http://github.com/kubernetes/contrib, or other repos.
- A member of @kubernetes/feature-reviewers will then check that the code matches the proposed feature and design, and that everything is done, and that there is adequate testing. They won't do detailed code review: that already happened when your PRs were reviewed.
- When that is done, you can check this box and the reviewer will apply the "code-complete" label.
Docs