
Vertical Scaling of Pods #21

Open
MikeSpreitzer opened this Issue Jul 12, 2016 · 64 comments


MikeSpreitzer commented Jul 12, 2016

Description

Make it possible to vary the resource limits on a pod over its lifetime. In particular, this is valuable for pets (i.e., pods that are very costly to destroy and re-create).

This was discussed in the Node SIG meeting of 12 July 2016, where it was noted that this is a big cross-cutting issue and that @ncdc might be an appropriate owner.
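
For concreteness, here is a minimal sketch (illustrative only, not part of any agreed design) of what "varying limits in place" would mean from a client's point of view: a strategic-merge patch against a running pod's container resources via client-go. Today the API server rejects such a change because container resources are immutable after creation; making some form of this legal, and propagating it down to the node, is the substance of this feature. The pod name `my-pet-pod` and container name `app` are hypothetical.

```go
// Illustrative sketch only: raise the CPU limit of a running pod's container.
// As of this writing the API server rejects this, since container resources
// are immutable after pod creation; enabling some form of it is this feature.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Strategic-merge patch: only the "app" container's CPU limit changes.
	patch := []byte(`{"spec":{"containers":[{"name":"app","resources":{"limits":{"cpu":"2"}}}]}}`)

	_, err = cs.CoreV1().Pods("default").Patch(
		context.TODO(), "my-pet-pod", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		fmt.Println("rejected (expected today):", err)
	}
}
```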

Progress Tracker

  • Before Alpha
    • Design Approval
      • Design Proposal. This goes under docs/proposals. Doing a proposal as a PR allows line-by-line commenting from the community, and creates the basis for later design documentation. Paste link to merged design proposal here: kubernetes/community#338
      • Initial API review (if API). Maybe same PR as design doc. kubernetes/community#338
        • Any code that changes an API (/pkg/apis/...)
        • cc @kubernetes/api
    • Write (code + tests + docs) then get them merged. ALL-PR-NUMBERS
      • Code needs to be disabled by default. Verified by code OWNERS
      • Minimal testing
      • Minimal docs
        • cc @kubernetes/docs on docs PR
        • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
        • New apis: Glossary Section Item in the docs repo: kubernetes/kubernetes.github.io
      • Update release notes
  • Before Beta
    • Testing is sufficient for beta
    • User docs with tutorials
      • Updated walkthrough / tutorial in the docs repo: kubernetes/kubernetes.github.io
      • cc @kubernetes/docs on docs PR
      • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
    • Thorough API review
      • cc @kubernetes/api
  • Before Stable
    • docs/proposals/foo.md moved to docs/design/foo.md
      • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
    • Soak, load testing
    • detailed user docs and examples
      • cc @kubernetes/docs
      • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off

FEATURE_STATUS is used for feature tracking and to be updated by @kubernetes/feature-reviewers.
FEATURE_STATUS: IN_DEVELOPMENT

More advice:

Design

  • Once you get LGTM from a @kubernetes/feature-reviewers member, you can check this checkbox, and the reviewer will apply the "design-complete" label.

Coding

  • Use as many PRs as you need. Write tests in the same or different PRs, as is convenient for you.
  • As each PR is merged, add a comment to this issue referencing the PRs. Code goes in the http://github.com/kubernetes/kubernetes repository,
    and sometimes http://github.com/kubernetes/contrib, or other repos.
  • When you are done with the code, apply the "code-complete" label.
  • When the feature has user docs, please add a comment mentioning @kubernetes/feature-reviewers, and they will
    check that the code matches the proposed feature and design, that everything is done, and that there is adequate
    testing. They won't do detailed code review; that already happened when your PRs were reviewed.
    When that is done, you can check this box and the reviewer will apply the "code-complete" label.

Docs

  • Write user docs and get them merged in.
  • User docs go into http://github.com/kubernetes/kubernetes.github.io.
  • When the feature has user docs, please add a comment mentioning @kubernetes/docs.
  • When you get LGTM, you can check this checkbox, and the reviewer will apply the "docs-complete" label.

erictune (Member) commented Jul 12, 2016

@MikeSpreitzer Did you reach any kind of consensus within Node SIG about how to solve this issue? Do you have a group of people who are ready to start coding up that agreed-upon thing? If not, it might be a bit early to open an issue.

MikeSpreitzer commented Jul 12, 2016

I am new here and was told this has been a long-standing desire, some work has already been accomplished, and some planning has already been done. I asked how to pull all the existing thinking together and organize to get onto a path to making it happen, and was told to start here.

davidopp (Member) commented Jul 12, 2016

timothysc (Member) commented Jul 13, 2016

@idvoretskyi idvoretskyi modified the milestone: v1.4 Jul 18, 2016

@idvoretskyi idvoretskyi added the sig/node label Aug 4, 2016

@alex-mohr alex-mohr modified the milestones: v1.5, v1.4 Aug 17, 2016

alex-mohr (Member) commented Aug 17, 2016

Doesn't appear to have had any traction in 1.4, so pushing to 1.5 -- chime in if that's incorrect.

ConnorDoyle (Member) commented Aug 24, 2016

Hi @MikeSpreitzer, we (Intel) would like to help out with this in the 1.5 timeline. Can we start by listing the goals/requirements as currently understood, maybe in a shared doc?

This feature is quite large. Previous discussions suggest breaking it down into phases.

It seems like some dependencies can be broken off and parallelized, for example enabling in-place update for compressible resources.
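
To unpack "compressible": CPU limits can be changed on a live container by rewriting its cgroup CFS quota, with no restart, whereas memory cannot safely be shrunk below current usage. A minimal sketch of the CPU case, assuming a cgroup v1 layout; the path and helper name are illustrative, not an existing kubelet API.

```go
package main

import (
	"os"
	"path/filepath"
	"strconv"
)

// setCPUQuota live-updates a container's CPU limit by rewriting its CFS quota
// (cgroup v1 layout assumed; cgroupPath is the container's cpu controller
// directory, illustrative only). No restart is needed, which is why
// compressible resources are the easier half of in-place vertical scaling.
func setCPUQuota(cgroupPath string, quotaMicros int64) error {
	return os.WriteFile(
		filepath.Join(cgroupPath, "cpu.cfs_quota_us"),
		[]byte(strconv.FormatInt(quotaMicros, 10)),
		0o644,
	)
}
```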

MikeSpreitzer commented Aug 24, 2016

Clearly this is not going to land in 1.4. Yes, let's start by breaking this big thing down into phases and pieces. Would someone with more background on this like to take a whack at that?

davidopp (Member) commented Aug 24, 2016

A design doc would be a good start, but even before that, we need some open-ended discussion about what the goals and requirements are. Maybe we should discuss in kubernetes/kubernetes#10782? That discussion is more than a year old, but I'd like to avoid opening another issue (and the issues repo is definitely not the right place for design discussions).

fgrzadkowski (Member) commented Sep 12, 2016

idvoretskyi (Member) commented Oct 13, 2016

@MikeSpreitzer @kubernetes/sig-node can you clarify the actual status of the feature?

fgrzadkowski (Member) commented Oct 13, 2016

@kubernetes/autoscaling

fgrzadkowski (Member) commented Oct 13, 2016

Btw, I think this feature should be discussed and sponsored by sig-autoscaling, not sig-node. Obviously there are a number of features/changes needed at the node level to make this work correctly, but I strongly believe we should keep it within the aforementioned SIG. Any thoughts on that?

idvoretskyi (Member) commented Oct 13, 2016

@fgrzadkowski if you as a SIG-Autoscaling lead would like to sponsor the feature, I have no objections.

idvoretskyi (Member) commented Oct 13, 2016

@kubernetes/sig-node @dchen1107 are you going to cooperate with @kubernetes/autoscaling on this feature work, or would you prefer SIG Autoscaling to work on it alone?

idvoretskyi (Member) commented Nov 16, 2016

@MikeSpreitzer @kubernetes/autoscaling any updates on this feature?

DirectXMan12 (Contributor) commented Nov 16, 2016

cc @derekwaynecarr

From the autoscaling side, we're blocked on the node changes. From the node side, I think that needs an interface in the CRI to vary resource limits, which we might not see for a while.
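
For context, the "interface in the CRI" mentioned above would be something along these lines: a runtime-facing call the kubelet can use to change a running container's limits without recreating it. The names and fields below are an illustrative sketch, not the actual CRI definitions.

```go
package cri

// ContainerResources carries the Linux cgroup settings the kubelet would ask
// the runtime to apply to an already-running container. The field set is
// illustrative, not a real CRI message.
type ContainerResources struct {
	CPUShares          int64 // relative CPU weight
	CPUQuota           int64 // CFS quota (microseconds per period)
	CPUPeriod          int64 // CFS period (microseconds)
	MemoryLimitInBytes int64
}

// RuntimeResourceUpdater is the hypothetical hook the comment alludes to: a
// way to resize a container in place instead of killing and recreating it.
type RuntimeResourceUpdater interface {
	UpdateContainerResources(containerID string, resources ContainerResources) error
}
```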

idvoretskyi (Member) commented Nov 17, 2016

@MikeSpreitzer Does this feature target alpha for 1.5?

fgrzadkowski (Member) commented Nov 17, 2016

This feature will not land in 1.5. Removing milestone.

We will be working on this feature for 1.6. Reassigning to folks who are already working on a design and will pursue implementation in Q1.

@fgrzadkowski fgrzadkowski modified the milestones: next-milestone, v1.5 Nov 17, 2016

davidopp (Member) commented Nov 17, 2016

IIUC, prerequisites for at least part of this are

  1. historical data from Infrastore
  2. kubelet in-place resource update

I'm not sure what your plan is for (1), but from my last chat with Dawn, (2) wouldn't be feasible to begin implementing before Q2. (It's not a trivial feature.)

cc/ @dchen1107

fgrzadkowski (Member) commented Nov 17, 2016

As explained in kubernetes/kubernetes#10782 (comment):

  • We don't need in-place update for the MVP of the vertical pod autoscaler. We can just be more conservative and recreate pods via deployments (sketched below).
  • Infrastore would be useful, but for the MVP we can aggregate this data in the VPA controller if we don't have Infrastore by then, or we can read this information from a monitoring pipeline.
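
A minimal sketch of that conservative fallback, assuming the target workload is a Deployment: the VPA-like controller patches the pod template's requests and lets the rollout replace the pods with resized ones. The names (`web`, container `app`) and values are hypothetical, and a real recommender would also rate-limit such updates.

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// resizeViaRollout bumps the requests in a Deployment's pod template so the
// rollout replaces the pods with resized ones -- no in-place update needed.
func resizeViaRollout(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	patch := []byte(`{"spec":{"template":{"spec":{"containers":[` +
		`{"name":"app","resources":{"requests":{"cpu":"500m","memory":"256Mi"}}}]}}}`)
	_, err := cs.AppsV1().Deployments(ns).Patch(
		ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```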

@mwielgus mwielgus modified the milestones: next-milestone, v1.11 Jun 6, 2018

@liggitt liggitt modified the milestones: v1.11, next-milestone Jun 7, 2018

liggitt (Member) commented Jun 7, 2018

moving to 1.12 as discussed with @mwielgus and @jberkus

@justaugustus justaugustus modified the milestones: next-milestone, v1.12 Jul 2, 2018

justaugustus (Member) commented Jul 18, 2018

@mwielgus @kgrygiel --

It looks like this feature is currently in the Kubernetes 1.12 milestone.

If that is still accurate, please ensure that this issue is up to date with ALL of the following information:

  • One-line feature description (can be used as a release note):
  • Primary contact (assignee):
  • Responsible SIGs:
  • Design proposal link (community repo):
  • Link to e2e and/or unit tests:
  • Reviewer(s) - (for LGTM) recommend having 2+ reviewers (at least one from the code-area OWNERS file) agree to review. Reviewers from multiple companies preferred:
  • Approver (likely from the SIG/area to which the feature belongs):
  • Feature target (which target equals which milestone):
    • Alpha release target (x.y)
    • Beta release target (x.y)
    • Stable release target (x.y)

Set the following:

  • Description
  • Assignee(s)
  • Labels:
    • stage/{alpha,beta,stable}
    • sig/*
    • kind/feature

Please note that the Features Freeze is July 31st, after which any incomplete feature issues will require an exception request to be accepted into the milestone.

In addition, please be aware of the following relevant deadlines:

  • Docs deadline (open placeholder PRs): 8/21
  • Test case freeze: 8/28

Please make sure all PRs for features have relevant release notes included as well.

Happy shipping!

/cc @justaugustus @kacole2 @robertsandoval @rajendar38

justaugustus (Member) commented Jul 31, 2018

@mwielgus @kgrygiel @liggitt --
Feature Freeze is today. Are we planning on graduating this feature in Kubernetes 1.12?
If so, can you make sure everything is up to date, so I can include it on the 1.12 feature tracking spreadsheet?

bskiba (Member) commented Jul 31, 2018

@justaugustus
Reiterating what @mwielgus said, this feature is more of an add-on than a core Kubernetes feature, and it is released independently. After consultation with sig-architecture, we decided to keep the API outside of Kubernetes as a CRD.

All the development is being done in the Kubernetes autoscaler repo: https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler
We don't expect any development to be needed in the kubernetes/kubernetes repo.

We are planning to graduate to beta within the 1.12 timeline.
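
For readers wondering what "the API as a CRD" means in practice: VPA objects live in a CRD-backed API group served by the cluster rather than in the core API, so clients reach them through the dynamic/unstructured client instead of the typed core clients. A rough sketch; the group/version/resource shown here is an assumption about the add-on's API and may differ from what is actually released.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// CRD-backed resource: group/version assumed here for illustration.
	vpaGVR := schema.GroupVersionResource{
		Group:    "autoscaling.k8s.io",
		Version:  "v1beta1",
		Resource: "verticalpodautoscalers",
	}

	list, err := dyn.Resource(vpaGVR).Namespace("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, item := range list.Items {
		fmt.Println(item.GetName())
	}
}
```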

karthickrajamani commented Jul 31, 2018

@justaugustus on a separate note, there is a proposal in play for live, in-place vertical scaling pertinent to this same issue #21, which does require changes to Kubernetes. We have a PR for the proposal waiting to be merged (kubernetes/community#1719) and a working implementation of it. Because the proposal PR has not yet been merged, I assume your query is not about the live, in-place vertical scaling proposal but about the Vertical Pod Autoscaler(?). We do want to see our proposal evaluated and hopefully accepted as early as possible. Let us know what we should do to make that so. Thanks.

justaugustus (Member) commented Aug 4, 2018

I've added this to the 1.12 sheet.

@karthickrajamani -- please open a separate features tracking issue for kubernetes/community#1719 when you have a chance.

/remove-stage alpha
cc: @kacole2 @wadadli @robertsandoval @rajendar38

zparnold (Member) commented Aug 20, 2018

Hey there! @MikeSpreitzer I'm the docs wrangler for this release. Is there any chance I could have you open a docs PR against the release-1.12 branch as a placeholder? That gives us more confidence in the feature shipping in this release and gives me something to work with when we start doing reviews/edits. Thanks! If this feature does not require docs, could you please update the features tracking spreadsheet to reflect that?

jimangel (Member) commented Aug 27, 2018

@mwielgus @kgrygiel Bump for docs ☝️

justaugustus (Member) commented Sep 5, 2018

@mwielgus @kgrygiel --
Any update on docs status for this feature? Are we still planning to land it in 1.12?
At this point, code freeze is upon us, and docs are due on 9/7 (2 days).
If we don't hear anything back regarding this feature ASAP, we'll need to remove it from the milestone.

cc: @zparnold @jimangel @tfogo

mwielgus (Contributor) commented Sep 7, 2018

This is landing around 1.12, but as the launch of an independent add-on; it is not included in the 1.12 Kubernetes release. Sig-architecture, at the beginning of this cycle, decided to keep the VPA API as a CRD and thus not bind it to any particular K8s release.

justaugustus (Member) commented Sep 7, 2018

Thanks for the update!

karthickrajamani commented Sep 7, 2018

@justaugustus, so can we continue to use this issue for tracking live, in-place vertical scaling (kubernetes/community#1719), which is what it was created for originally by Mike, given that VPA is not bound to a particular K8s release?

justaugustus (Member) commented Sep 7, 2018

@karthickrajamani -- yep, it's fine to keep tracking here.

zparnold (Member) commented Sep 12, 2018

@mwielgus Are we going to have some documentation for this feature before the 1.12 release date? Since it's independent, I'm willing to not count it in this release, as long as it gets some attention before it's officially released as an add-on. Does that sound good?

mwielgus (Contributor) commented Sep 12, 2018

@zparnold There will be no extra documentation to include in 1.12.

zparnold (Member) commented Sep 12, 2018

claurence commented Oct 5, 2018

Kubernetes 1.13 is going to be a 'stable' release since the cycle is only 10 weeks. We encourage no big alpha features and will only consider adding this feature if you have a high level of confidence it will make code slush by 11/09. Are there plans for this enhancement to graduate to alpha/beta/stable within the 1.13 release cycle? If not, can you please remove it from the 1.12 milestone or add it to 1.13?

We are also now encouraging that every new enhancement align with a KEP. If a KEP has been created, please link to it in the original post. Please take the opportunity to develop a KEP.

@kacole2 kacole2 added tracked/no and removed tracked/yes labels Oct 8, 2018

kacole2 (Contributor) commented Oct 8, 2018

Hi. Following up from @claurence:
This enhancement has been tracked before, so we'd like to check in and see if there are any plans for it to graduate stages in Kubernetes 1.13. This release is targeted to be more 'stable' and will have an aggressive timeline. Please only include this enhancement if there is a high level of confidence it will meet the following deadlines:

  • Docs (open placeholder PRs): 11/8
  • Code Slush: 11/9
  • Code Freeze Begins: 11/15
  • Docs Complete and Reviewed: 11/27

Please take a moment to update the milestones on your original post for future tracking, and ping @kacole2 if it needs to be included in the 1.13 Enhancements Tracking Sheet.

Thanks!

@mwielgus mwielgus removed this from the v1.12 milestone Oct 11, 2018
