This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

Minor vs major version upgrades #395

Closed
inc0 opened this issue Jan 13, 2017 · 11 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@inc0

inc0 commented Jan 13, 2017

Hey,

So in OpenStack (as well as other complex datacenter apps), upgrades come in two kinds: minor and major version upgrades. Minor version upgrades (for example 3.0.1 -> 3.0.2) should be routine and lightweight, and should basically boil down to helm upgrade stable/openstack.

There is a second type of upgrade, the major version upgrade (3.0.2 -> 4.0.0), which is a major undertaking and cannot be done implicitly or accidentally by just calling helm upgrade. This type of upgrade should be explicit.

We would like to start a discussion on how to achieve this goal in kubernetes/charts. Distros usually host a separate repository per major version, and code is branched out whenever a stable version is released (within a major version, tags are then used to host multiple minor versions). A release upgrade therefore starts with changing the repository configuration, which prevents an accidental major upgrade.
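A rough sketch of that flow in Helm 2 syntax (the repo names and URLs are made up for illustration):

# one repo per major series; routine upgrades stay within it
helm repo add openstack-3x https://example.org/charts/openstack/3.x
helm install --name my-openstack openstack-3x/openstack
helm upgrade my-openstack openstack-3x/openstack   # only ever sees 3.x minor releases

# a major upgrade is explicit: change the repository configuration first
helm repo add openstack-4x https://example.org/charts/openstack/4.x
helm upgrade my-openstack openstack-4x/openstack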

@kfox1111
Collaborator

Yes. Our site follows this methodology for most of our major apps. For OpenStack in particular, an unintended major upgrade would be disastrous. Getting timely minor releases that fix security issues is critical, though.

@viglesiasce
Contributor

You can pin your helm upgrade command to a particular version using --version.
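For example (the release name and chart version here are illustrative):

helm upgrade my-openstack stable/openstack --version 3.0.2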

@inc0
Author

inc0 commented Jan 13, 2017

That's not exactly what we're referring to. The use case for Ubuntu would look like this:

apt-get upgrade openstack-nova => an easy upgrade between minor versions, to quickly fix something like a security issue.

If you want a major upgrade, an additional step is required: changing the repo. We should have something similar to that, imho.
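Roughly, with the Ubuntu Cloud Archive standing in as the example of a repo switch:

# minor upgrade: routine, stays within whatever release repo is configured
apt-get update && apt-get install openstack-nova
# major upgrade: the explicit extra step of changing repos first
add-apt-repository cloud-archive:newton
apt-get update && apt-get dist-upgrade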

@viglesiasce
Contributor

I'd argue you should never do apt-get upgrade openstack-nova; you should always do something like apt-get upgrade openstack-nova=0.16.1-1.

What exactly are you asking for? Or is it a Helm change?

@inc0
Author

inc0 commented Jan 13, 2017

Well, that's not true. Even if you do apt-get upgrade nova==0.16.1-1, nova can have a dependency on oslo >0.15, and the newest version of oslo in a flat repo would be 1.4: mutually incompatible and breaking. If you have a repo with only oslo 0.x.x, it will always upgrade to the newest version within the compatible bracket. You could argue nova should pin the oslo version to, say, 0.15, but then if oslo releases 0.16 (for example, a security upgrade on the stable branch), your dependencies are broken until nova, and all the other libs depending on oslo, bump their oslo version. The more libs depend on oslo, the exponentially harder upgrades become...
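A contrived illustration of that failure mode (the package names and version numbers are hypothetical):

# a flat repo publishes both oslo 0.15 and oslo 1.4
apt-get install nova=0.16.1-1    # nova declares: Depends: oslo (>= 0.15)
# the resolver can legally pick oslo 1.4, which satisfies ">= 0.15" but is
# API-incompatible with nova 0.16; a repo carrying only the 0.x series
# makes the same command resolve to the newest compatible 0.x instead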

@viglesiasce
Contributor

Right, the dependencies should be pinned across the board.

So in that case I would issue apt-get upgrade openstack==0.16.0 and let it resolve the dependencies.

I'm still confused, though, about what we should do as a project (charts) and with our repos (stable vs. incubator).

@kfox1111
Collaborator

This assumes a single package is being upgraded. The use case I'm interested in is more of a "helm upgrade" similar to a "yum upgrade": find all upgrades relevant to the releases I have and perform them. An OpenStack deployment may be made up of more than one release, and upgrading them together would be helpful.
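There's no built-in equivalent today; a rough shell sketch of the idea (it assumes Helm 2's helm list output format and that every release's chart comes from the stable repo):

helm repo update
for release in $(helm list --short); do
  # CHART is the second-to-last column of helm list, e.g. "openstack-3.0.1"
  chart=$(helm list "^${release}\$" | awk 'NR==2 {print $(NF-1)}' | sed 's/-[0-9][0-9.]*$//')
  helm upgrade "$release" "stable/$chart"
done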

@kfox1111
Collaborator

One possibility mentioned on the Slack channel was metapackages: make it easy to have an openstack-mitaka package in kubernetes/charts that, when installed, would add a repo to the list that contains the Mitaka versions of the Helm packages.
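Sketching how that might feel to use (the chart name and URL are hypothetical, and since a chart cannot add repos on its own today, some tooling around it would be needed):

# installing the metapackage would register the release-specific repo...
helm repo add openstack-mitaka https://example.org/charts/openstack/mitaka
# ...so routine upgrades can only ever track Mitaka-era charts
helm upgrade my-openstack openstack-mitaka/openstack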

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 20, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 19, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
