Strengthen Cherry Pick Guidance #7634
Comments
I spent some time looking to see if there were industry-standard defect classes like data loss, security, and performance, hoping we could put together a bit of a taxonomy and decide which classes are valid for cherry picks. However, it seems like most alignment is around severity (critical, major, minor, etc.), where the highest-severity issues generally include security, data loss, and often performance regressions. I think the recommendations from the first bullet point are a great place to start. Assuming we want labels for these, does it mean that unless labels (like
I'm not sure it becomes a requirement initially, but I believe having the labels available to track the criteria that factor into cherry-pick approval would be helpful. As written, the description just says that the author would self-affirm that the change is one of the approved fix classes, or provide other reasoning for why it needs to be backported, so that the release team can make more informed decisions.
Note that sometimes a fix for a user-blocking bug could be eligible for a backport, but today that is just a
Instead of working with the current set of labels, we could create a new family of labels at the same time, ETOOMANYLABELS.
One other category of changes that seems acceptable for backports came up: 1.29.0 accidentally enabled an alpha capability by not applying the feature gate correctly. The fix to apply the gate correctly is in kubernetes/kubernetes#122343. That's sort of the opposite of a regression: it's an accidental progression of functionality that was intended to be gated so it could roll out in a controlled way.
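To make the shape of that bug concrete, here is a minimal, hypothetical sketch of the feature-gate pattern involved (the type and gate name here are invented for illustration and are not the actual Kubernetes `featuregate` API). The bug class described above is forgetting the `Enabled` check, which silently turns the alpha behavior on for everyone:

```go
package main

import "fmt"

// FeatureGates is a simplified stand-in for a real feature-gate registry.
type FeatureGates map[string]bool

// Enabled reports whether a named gate is on; unknown gates default to off.
func (g FeatureGates) Enabled(name string) bool { return g[name] }

// newBehaviorActive shows the corrected pattern: the alpha code path runs
// only when the gate is explicitly enabled. The buggy version omitted this
// check, so the alpha capability was always active regardless of the gate.
func newBehaviorActive(gates FeatureGates) bool {
	return gates.Enabled("MyAlphaFeature") // hypothetical gate name
}

func main() {
	defaults := FeatureGates{"MyAlphaFeature": false} // alpha gates default off
	fmt.Println(newBehaviorActive(defaults))          // gated off by default
}
```

The interesting property for backports is that the fix restores the intended default-off behavior, so it reduces rather than expands what users see in a patch release.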
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
/remove-lifecycle rotten
Describe the issue
The current cherry-pick guidance has fairly clear criteria for which kinds of PRs are good candidates for cherry picks, but we could enhance our guidance for release managers reviewing cherry picks, for contributors opening cherry picks, and for SIG leads reviewing cherry picks.
At the wg-lts meeting on November 21st, @liggitt presented a pretty thorough analysis of regressions introduced into patch releases through cherry picks, which is available here:
Kubernetes patch release regression/bugfix rate
Analysis of Kubernetes regression rates, patterns, examples
There was one particularly important takeaway: every single minor version has had a backport cause a regression.
In the wg-lts meeting, we discussed a few concrete things:
The first bullet point is basically already expressed in our existing guidelines, so perhaps we should investigate a new PR template for cherry picks that includes a self-attestation from the person opening the cherry pick. This template could also include a section indicating when the bug was introduced, to help determine whether a fix should actually be cherry-picked back to a release if the bug already existed prior to the .0 of that minor release.
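As a rough illustration only (none of this wording is decided; the fix classes are pulled from the severity discussion in this thread), such a cherry-pick PR template might look something like:

```markdown
<!-- Hypothetical cherry-pick PR template sketch; all wording illustrative -->
#### What class of fix is this? (check one)
- [ ] Security fix
- [ ] Data-loss fix
- [ ] Fix for a regression introduced in this minor release
- [ ] Other (explain below why this change needs to be backported)

#### When was the bug introduced?
<!-- Commit, PR, or release, if known. If the bug predates the .0 of this
     minor release, explain why it should still be backported. -->

#### Self-attestation
- [ ] I affirm this change fits one of the approved fix classes above,
      or I have provided other reasoning for the release team to consider.
```

A template like this would give release managers the "when was the bug introduced" signal up front instead of requiring them to dig for it during review.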
/sig release
/kind documentation