
kubectl rolling-update should not exit with code 0 when it resumes an update #9045

Closed
justinsb opened this issue May 31, 2015 · 3 comments
Labels: area/app-lifecycle, area/kubectl, priority/backlog

Comments

@justinsb
Member

Consider this:

> kubectl rolling-update A --image a:v2

Control-C halfway through the update

> kubectl rolling-update A --image a:v3
Found existing update in progress (X), resuming.
Continuing update with existing controller (X).
Update succeeded. Deleting old controller: X
Renaming X to X

So kubectl reported the update to v3 as succeeded (I think), but the rollout that actually completed was to v2.

I think it should update to v3; ideally by co-opting the existing rolling-update and repointing it to v3. If we can't do that, it should complete the existing rolling-update, then rolling-update to v3. If we can't do that either, it should exit with a non-zero code.
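To make the first option concrete, here is a minimal Go sketch (not kubectl's actual code; the ReplicationController stand-in, the resumeOrRepoint helper, and the controller name are hypothetical) of what repointing an in-progress update at the newly requested image could look like:

    // Hypothetical sketch, not kubectl's implementation: repoint an
    // in-progress rolling update at the newly requested image instead of
    // silently finishing the stale one.
    package main

    import "fmt"

    // ReplicationController is a pared-down stand-in for the real API object.
    type ReplicationController struct {
    	Name  string
    	Image string
    }

    // resumeOrRepoint resumes the found update if the image already matches,
    // and otherwise rewrites its target image before continuing.
    func resumeOrRepoint(existing *ReplicationController, requestedImage string) {
    	if existing.Image == requestedImage {
    		fmt.Printf("Found existing update in progress (%s), resuming.\n", existing.Name)
    		return
    	}
    	fmt.Printf("Existing update (%s) targets %s; repointing to %s.\n",
    		existing.Name, existing.Image, requestedImage)
    	existing.Image = requestedImage // continue the rollout with the new template
    }

    func main() {
    	rc := &ReplicationController{Name: "A-next", Image: "a:v2"} // names are illustrative
    	resumeOrRepoint(rc, "a:v3")
    }

The point is only that a resumed rollout should carry the image from the second invocation, not the one that was interrupted.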

@bgrant0607 added the area/kubectl, sig/api-machinery, and priority/backlog labels May 31, 2015
@bgrant0607
Member

This is what I call "rollover". There are actually 3 versions in play in the above scenario: v1, v2, and v3. Each requires its own template, and thus its own replication controller. If the update were killed and restarted again with v4, that would be 4 versions/RCs. We'd need to change the way we keep track of the set of RCs that need to be replaced in order to handle this. Background is in #1353.

In the meantime, we should issue an error in the case that the image doesn't match.
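As a rough illustration of that interim check, here is a hedged Go sketch (the existingUpdate type and checkResumableUpdate helper are made up for this example and are not the real rolling-updater API) that refuses to resume and exits non-zero when the requested image differs from the one the in-progress update targets:

    // Hypothetical sketch of the interim check: if the in-progress update's
    // image doesn't match the one just requested, fail with a non-zero exit
    // code instead of quietly resuming. Names here are illustrative only.
    package main

    import (
    	"fmt"
    	"os"
    )

    type existingUpdate struct {
    	controller string
    	image      string
    }

    // checkResumableUpdate returns an error when resuming would roll out a
    // different image than the user asked for.
    func checkResumableUpdate(found existingUpdate, requestedImage string) error {
    	if found.image != requestedImage {
    		return fmt.Errorf("existing rolling update %q targets image %s, not %s; complete or abort it before retrying",
    			found.controller, found.image, requestedImage)
    	}
    	return nil
    }

    func main() {
    	found := existingUpdate{controller: "A-next", image: "a:v2"}
    	if err := checkResumableUpdate(found, "a:v3"); err != nil {
    		fmt.Fprintln(os.Stderr, "error:", err)
    		os.Exit(1) // non-zero exit so scripts and callers see the failure
    	}
    }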

@erictune added this to the v1.0-post milestone Jun 1, 2015
@JasonGiedymin
Contributor

Based on the requirements and mostly the comments, I thought a cancel like this should trigger a pause and let me roll back. I'd expect a re-issued update to v3 to take me from v1 to v3 (since I did cancel the v2 update), or to force me to make a choice. That resolution is important because otherwise the state of my replicas as defined (say I had a minimum count) would be inconsistent from a configuration-management point of view. I'm trying to understand more about the issue and contribute.

@bgrant0607
Member

@bgrant0607 removed this from the v1.0-post milestone Jul 24, 2015
@bgrant0607 added the team/ux label and removed the sig/api-machinery label Aug 4, 2015