Support pod resource updates #5774

Open
hurf opened this Issue Mar 23, 2015 · 36 comments


hurf commented Mar 23, 2015

Following the doc https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl-update.md,

I tried the command :

$ kubectl update pods my-pod --patch='{ "apiVersion": "v1beta1", "desiredState": { "manifest": [{ "cpu": 100 }]}}'

It says:

some fields are immutable

Is it a bug, or does the doc provide a bad example?
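
For context, here is a minimal client-go sketch of the same kind of resources-only patch against the later v1 API; the pod name, namespace, and container name are hypothetical. Without pod resource-update support, the apiserver rejects such a patch with exactly this sort of immutability error.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Strategic-merge patch that touches only one container's resources.
	// "my-pod" and "my-container" are hypothetical names.
	patch := []byte(`{"spec":{"containers":[{"name":"my-container","resources":{"limits":{"cpu":"100m"}}}]}}`)
	if _, err := clientset.CoreV1().Pods("default").Patch(
		context.TODO(), "my-pod", types.StrategicMergePatchType, patch, metav1.PatchOptions{},
	); err != nil {
		// With resources treated as immutable, this fails much like the
		// error reported above.
		fmt.Println("patch rejected:", err)
	}
}
```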

roberthbailey commented Mar 23, 2015

@bgrant0607 can you please take a look and assign appropriately for a fix (code or documentation)?

hurf commented Mar 23, 2015

I volunteer to fix this once it's confirmed which side (code or docs) needs the fix.

bgrant0607 commented Mar 23, 2015

We should allow resources to be updated.

It looks like they are included in the container hash, so the container should be deleted and re-created with the new values.

@dchen1107 @vishh Can you think of anything else we need to do to permit resource updates other than just relax validation?

https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/api/validation/validation.go#L697
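
A minimal, self-contained sketch of the relaxation described above, using simplified stand-in types rather than the real api package: overlay the new resource requirements onto a copy of the old containers, then require deep equality, so resources become the only mutable container field.

```go
package main

import (
	"fmt"
	"reflect"
)

// Simplified stand-ins for the real API types.
type ResourceRequirements struct{ CPU, Memory string }

type Container struct {
	Name      string
	Image     string
	Resources ResourceRequirements
}

type PodSpec struct{ Containers []Container }

// validatePodSpecUpdate accepts an update only if nothing but container
// resources changed: it copies the old containers, overlays the new
// resources, and then compares the specs for deep equality.
func validatePodSpecUpdate(newSpec, oldSpec PodSpec) error {
	if len(newSpec.Containers) != len(oldSpec.Containers) {
		return fmt.Errorf("may not add or remove containers")
	}
	munged := PodSpec{Containers: append([]Container(nil), oldSpec.Containers...)}
	for i := range munged.Containers {
		munged.Containers[i].Resources = newSpec.Containers[i].Resources
	}
	if !reflect.DeepEqual(newSpec, munged) {
		return fmt.Errorf("only container resources may be updated")
	}
	return nil
}

func main() {
	old := PodSpec{Containers: []Container{{Name: "web", Image: "nginx", Resources: ResourceRequirements{CPU: "100m"}}}}
	upd := PodSpec{Containers: []Container{{Name: "web", Image: "nginx", Resources: ResourceRequirements{CPU: "200m"}}}}
	fmt.Println(validatePodSpecUpdate(upd, old)) // <nil>: resource-only change passes

	bad := PodSpec{Containers: []Container{{Name: "web", Image: "nginx:1.7", Resources: ResourceRequirements{CPU: "200m"}}}}
	fmt.Println(validatePodSpecUpdate(bad, old)) // image change is still rejected
}
```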

gmarek commented Mar 23, 2015

@hurf: I'll assign myself to hold the lock on this issue for you (only owners can be assigned). If for whatever reason you want to release it, cc me in a comment.

gmarek self-assigned this Mar 23, 2015

vishh commented Mar 23, 2015

@bgrant0607: Nope. I think it's safe to turn on pod resource updates.

dchen1107 commented Mar 23, 2015

Since we all agreed that the kubelet is the last line of defense for resource quota / admission control, I am OK with pod resource updates.

hurf commented Mar 24, 2015

@gmarek - Thanks, I'm working on it.

gmarek commented Apr 27, 2015

@hurf - what's the status of this?

hurf commented Apr 27, 2015

@gmarek - My work on this was interrupted by other tasks, but I've just gotten back to it this week. I'll finish it and submit a patch ASAP.

gmarek commented May 11, 2015

@hurf - ping?

hurf commented May 11, 2015

@gmarek - Alive; the code is finished and I'll submit it soon. The change loosens the validation, as bgrant pointed out, and updates a test.

hurf commented May 13, 2015

@gmarek - Sorry for the delay; I'm also working on other issues. Please review the patch, and if it doesn't work the way you had in mind, let me know and I'll modify it.

bgrant0607 commented Aug 6, 2015

davidopp commented Oct 10, 2017

fejta-bot commented Jan 8, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

Novex commented Feb 3, 2018

I'd also love to see this progress. What's the next step and is there anything I can do to help out?

andyxning commented Feb 12, 2018

/sub

fejta-bot commented Mar 14, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

fejta-bot commented Apr 13, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

kargakis commented May 8, 2018

/remove-lifecycle rotten

fejta-bot commented Aug 6, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

nikhita commented Aug 10, 2018

/remove-lifecycle stale

vinaykul commented Sep 7, 2018

Hello,

Based on customer request, we have been looking into adding a restart-free (in-place) vertical resource scaling feature to Kubernetes for k8s Jobs. We reviewed VPA, looked at some of the earlier efforts (this thread, for example), and worked on a proof-of-concept implementation that performs vertical resource scaling.

In our quick POC implementation, with minimally invasive changes we are able to update cpu/memory requests and limits for running pods by having the scheduler update its cache, and then having the kubelet pick up the change and leverage the UpdateContainerResources API to set the updated limits without container restarts. We believe this approach fits well with the intrinsic flow of events from the apiserver to the scheduler to the kubelet. Our next step is to investigate extending VPA to leverage this feature in "Auto" mode, and to seek guidance from VPA experts.

This feels like a good point to get feedback and guidance from k8s experts here, before we get too deep into VPA. Can you please review the following design document?
Best Effort In-Place Vertical Scaling

We are looking forward to your input, and to working with the community to get this upstream if this approach looks good to you.

Thanks
Vinay

CC: @XiaoningDing @pdgetrf
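
A minimal sketch of the kubelet-side step described in this comment, using illustrative stand-in types rather than the real CRI client: translate updated milli-CPU and memory limits into cgroup values and push them through the runtime's UpdateContainerResources call, so limits change in place without a container restart.

```go
package main

import "fmt"

// LinuxResources mirrors the shape of the CRI LinuxContainerResources message.
type LinuxResources struct {
	CPUShares        int64
	CPUQuota         int64 // microseconds of CPU time per CPUPeriod
	CPUPeriod        int64
	MemoryLimitBytes int64
}

// Runtime is a stand-in for the CRI RuntimeService interface.
type Runtime interface {
	UpdateContainerResources(containerID string, r *LinuxResources) error
}

// applyInPlaceResize converts Kubernetes-style milli-CPU and memory limits
// into cgroup values and pushes them to the runtime.
func applyInPlaceResize(rt Runtime, containerID string, milliCPU, memBytes int64) error {
	const period = 100000 // standard 100ms CFS period
	res := &LinuxResources{
		CPUShares:        milliCPU * 1024 / 1000, // same conversion kubelet uses for requests
		CPUQuota:         milliCPU * period / 1000,
		CPUPeriod:        period,
		MemoryLimitBytes: memBytes,
	}
	return rt.UpdateContainerResources(containerID, res)
}

// fakeRuntime just prints the resize it receives, for demonstration.
type fakeRuntime struct{}

func (fakeRuntime) UpdateContainerResources(id string, r *LinuxResources) error {
	fmt.Printf("resize %s: shares=%d quota=%d/%d mem=%d\n",
		id, r.CPUShares, r.CPUQuota, r.CPUPeriod, r.MemoryLimitBytes)
	return nil
}

func main() {
	// Resize a (hypothetical) container to 500m CPU and 256Mi memory in place.
	_ = applyInPlaceResize(fakeRuntime{}, "abc123", 500, 256<<20)
}
```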

fejta-bot commented Dec 6, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

kisshore commented Dec 11, 2018

Hello @vinaykul,
Is there any update on this? Has the document been approved?

kisshore commented Dec 11, 2018

Docker supports the docker update command, so why not Kubernetes?

kisshore commented Dec 11, 2018

/remove-lifecycle stale

hustcat commented Dec 13, 2018

+1 for this feature

erolosty commented Dec 17, 2018

+1 here too

vinaykul commented Dec 17, 2018

Hello @vinaykul,
Is there any update on this? Has the document been approved?

@kisshore @hustcat @erolosty sig-autoscaling has taken the initiative and started a new merged KEP based on our design and the IBM proposal.

Based on my conversations with Beata / Solly during KubeCon this past week, we need the blessings of sig-node.

We are hoping to get the design finalized and then work on implementation, reviews, unit tests, VPA integration (we have a working prototype based on our current design), etc.

CC: @DirectXMan12 @bskiba @derekwaynecarr @bsalamat
