
Tooling/Infrastructure changes for branching at a X.Y.Z version (for zero-day and other emergency releases) #19849

Closed
david-mcmahon opened this issue Jan 20, 2016 · 14 comments


@david-mcmahon (Contributor) commented Jan 20, 2016

This is a top-level/meta issue that may spawn sub-issues.

The versioning.md doc has recently been updated to cover a "branching a branch" scenario. This is a rare or unlikely occurrence (depending on the team's velocity/needs), but important to plan for nonetheless. Refer to that doc for more details.

Any assumptions of an X.Y.Z format need to be dealt with in the k8s tooling, as well as in GKE, should we ever want/need to implement this.
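
To make the kind of assumption concrete, here is a minimal Go sketch (the regex and version strings are hypothetical, not lifted from the actual k8s or GKE tooling) of a parser that expects exactly three dot-separated components and therefore rejects a 4-part version outright:

```go
package main

import (
	"fmt"
	"regexp"
)

// threePartVersion captures the common assumption of an X.Y.Z layout
// (optionally prefixed with "v"). Anything with a fourth component,
// e.g. "v1.2.3.1", will not match. Hypothetical pattern, not the real tooling.
var threePartVersion = regexp.MustCompile(`^v?(\d+)\.(\d+)\.(\d+)$`)

func main() {
	for _, v := range []string{"v1.2.3", "v1.2.3.1"} {
		if m := threePartVersion.FindStringSubmatch(v); m != nil {
			fmt.Printf("%s parses as X=%s Y=%s Z=%s\n", v, m[1], m[2], m[3])
		} else {
			fmt.Printf("%s is rejected by the 3-part pattern\n", v)
		}
	}
}
```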

@david-mcmahon (Contributor, Author) commented Jan 20, 2016

It should also be noted that if we need to go this route, we will want to switch to a 4-part version number to avoid the ambiguity (and support hassles) of mixing 3- and 4-part versions.

david-mcmahon changed the title from "Tooling/Infrastructure changes for k8s and GKE to handle X.Y.Z.W versions" to "Tooling/Infrastructure changes for branching at a X.Y.Z version (for zero-day and other emergency releases" on Mar 15, 2016
@david-mcmahon (Contributor, Author) commented Mar 15, 2016

The best route here, to avoid modifying the semantic versioning scheme we currently use for K8s, is to update the tooling to handle branching from an X.Y.Z tag while maintaining the 3-part version scheme. The versioning.md doc will be updated with details.
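
As a rough illustration of what branching from an X.Y.Z tag while keeping 3-part versions could look like, here is a small Go sketch; the release-X.Y.Z branch name and the example tag are assumptions for illustration, and versioning.md remains the source of truth:

```go
package main

import (
	"fmt"
	"strings"
)

// emergencyBranchFor derives a branch name from an existing release tag,
// e.g. "v1.2.3" -> "release-1.2.3". The "release-X.Y.Z" naming is
// illustrative here; versioning.md is the authoritative reference.
func emergencyBranchFor(tag string) string {
	return "release-" + strings.TrimPrefix(tag, "v")
}

func main() {
	tag := "v1.2.3" // hypothetical zero-day base release
	fmt.Printf("branch from tag %s -> %s\n", tag, emergencyBranchFor(tag))
	// Releases cut from that branch keep the 3-part scheme (no X.Y.Z.W),
	// which is the point of this approach.
}
```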

david-mcmahon added a commit to david-mcmahon/kubernetes that referenced this issue Mar 15, 2016
…s, X.Y.Z.W.

Some minor formatting changes.
Add semver.org reference to file.

Ref kubernetes#19849
david-mcmahon added this to the v1.3 milestone on Apr 6, 2016
david-mcmahon changed the title from "Tooling/Infrastructure changes for branching at a X.Y.Z version (for zero-day and other emergency releases" to "Tooling/Infrastructure changes for branching at a X.Y.Z version (for zero-day and other emergency releases)" on May 5, 2016
@david-mcmahon (Contributor, Author) commented May 9, 2016

Release tooling for branching and building is complete in the pending kubernetes/release#1.

TODO list for a zero-day (a rough end-to-end estimate is sketched after the list):

  1. Initial branch of X.Y.Z (45 minutes)
  2. Testing of new branch @alex-mohr @roberthbailey @ixdy @fejta
    • Do we need a complete set of test infrastructure here or can we define some subset for this purpose?
    • Do we have any mechanisms for easily/quickly spinning up a set of testing infrastructure?
      Whether we do or not, what is the required setup time for something like this?
  3. Cherrypick and submit queue wait (up to N hours)
  4. Actual test time for new branch + patch (N hours)
  5. Cutting the final release from new branch (45 minutes)
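
As a back-of-envelope total for the list above, here is a small Go sketch; the 2-hour figures are placeholders for the "N hours" unknowns, not measurements:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Fixed costs come from the list above; the submit-queue and testing
	// figures stand in for the "N hours" unknowns and are assumptions.
	initialBranch := 45 * time.Minute
	submitQueueWait := 2 * time.Hour // assumed N
	branchTesting := 2 * time.Hour   // assumed N
	finalRelease := 45 * time.Minute

	total := initialBranch + submitQueueWait + branchTesting + finalRelease
	fmt.Println("estimated zero-day turnaround:", total)
	// With these placeholders the mechanical steps take ~1.5h and the total
	// is dominated by testing and submit-queue time.
}
```
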
@alex-mohr (Member) commented May 9, 2016

Thanks @david-mcmahon!

Internally at Google, the time from a PR to a built binary that could potentially be rolled out is, I think, a bit shorter than the TODO list you mentioned above. Would it make sense to start from, e.g., a 1-hour PR-to-rollout target and see what we'd have to do to get there?

@david-mcmahon (Contributor, Author) commented May 17, 2016

@alex-mohr yes. For the sake of consistency, the initial branch follows the current model of creating the tags and initial builds (and pushes). We could short-circuit that process to just branch and tag, which would reduce the time significantly, to closer to a few minutes.
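
For illustration only, the short-circuited branch-and-tag step could be as small as the following Go sketch; the tag, branch name, and initial-tag choice are hypothetical, and the real implementation is the kubernetes/release tooling:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// git echoes and runs a git command. Purely illustrative; the real branch
// and tag work is done by the tooling in kubernetes/release.
func git(args ...string) {
	fmt.Println("+ git " + strings.Join(args, " "))
	cmd := exec.Command("git", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "git failed:", err)
	}
}

func main() {
	// Hypothetical names for a zero-day scenario.
	baseTag := "v1.2.3"
	branch := "release-1.2.3"
	initialTag := "v1.2.4-beta.0" // placeholder; the real initial tag is up to the tooling

	git("branch", branch, baseTag)                                   // cut the branch at the existing release tag
	git("tag", "-a", initialTag, "-m", "branch at "+baseTag, branch) // lay down the first tag on it
	// No builds, no pushes: that is what brings this step down from ~45
	// minutes to a few minutes.
}
```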

Then the actual release with a build and push will take around 45 minutes.

The more variable pieces are how much testing we want to do and, if we want infrastructure set up, how long it will take to spin it up and push a PR through it (submit queue -> Jenkins test suites).

This is all potentially variable depending on what the fix itself is, though I suspect we probably don't want to be figuring that out on the fly during the event itself and would rather just have our usual, complete infrastructure ready to go.

We can hammer this out in more detail in a PRD but I'd like to get a sense of a realistic target and skeleton plan here.

Testing is the big question mark here. Thoughts @fejta @ixdy @roberthbailey ?

@roberthbailey (Member) commented May 26, 2016

@david-mcmahon it looks like kubernetes/release#1 doesn't have a reviewer. Do we expect to get it submitted soon? Since it's in a different repository, this issue doesn't seem intrinsically tied to the 1.3 milestone, other than that we wanted visibility around getting a fix for this issue. Does it still belong in the 1.3 milestone?

david-mcmahon removed this from the v1.3 milestone on May 26, 2016
@david-mcmahon (Contributor, Author) commented May 26, 2016

Correct, it is not directly tied to 1.3. Removed.

@philips (Contributor) commented Nov 29, 2016

@david-mcmahon I don't understand the difference between these releases and normal patch releases. It sounds like these are for emergency releases but I don't understand why they don't get released like normal x.y.z releases.

xref #35462

@david-mcmahon (Contributor, Author) commented Nov 29, 2016

@philips Standard x.y.z releases contain cumulative patches on that branch.
The emergency branches are branched from a specific x.y.z release/tag.

@philips (Contributor) commented Nov 29, 2016

@david-mcmahon Understood, got it, that makes sense. It just gets a little head-spinning trying to read versioning.md.

I think we need to create better naming and categories for versioning.md overall. See my kubernetes-dev post.

xingzhou pushed a commit to xingzhou/kubernetes that referenced this issue Dec 15, 2016
Some minor formatting changes.
Add semver.org reference to file.

Ref kubernetes#19849
@kargakis (Member) commented Jun 11, 2017

/sig release

@fejta-bot commented Dec 27, 2017

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@fejta-bot commented Jan 26, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@fejta-bot commented Feb 25, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
