Automate cherry-pick batching process #23347
An additional thought/idea on how to manage patches in batches. #23345 (comment)
I think the existing script can handle multiple PRs, but there are some technical issues with how it cherry-picks.
The script works OK. It gets the .patch files from GitHub and applies them. This means that the actual merge commit on 'master' is not reflected in the 'release' branch. The shoddy tools I hacked together around cherry-picking looked for the merge commit on the 'release' branch.

I'm not sure if I should comment on @roberthbailey / @zmerlynn's thoughts here or in #23345, but I'm lazy so I'll do it here. Even during the 'slush' I probably had 25% of PRs which would not apply. EVERY case was a conflict in some generated file or a conflict involving docs. While the batch picker can certainly be smart enough to handle those cases, I think they should be solved elsewhere. I believe that automated tooling will have a << 75% success rate with our current setup.

Let me pontificate a moment on what the RHEL kernel team does. We have similar issues and needs. The team constantly 'backports' patches that landed in the 'devel' branch to some 'older' branch. Our internal process is that developers need only commit to the 'devel' branch and verify that the appropriate hoops have been jumped through to indicate the patch should also be applied to the older version. In actuality those hoops aren't the developer's problem, but that's not important. We then have a team of people who try to do the backport. If the backport is not easy and obvious, or if the result fails testing, they pretty quickly give up. They inform the original developer that THEY must do the backport themselves. This has the result that most of the time key people are able to completely ignore the backport process, but if things are hard they must do it themselves.

So while I'm being selfish and don't want to do the batch updates myself, I think having a junior person attempt the batch updates will be beneficial for the project. Only if the backport is difficult or beyond the skills of the backporter should the original developer be required to do it themselves.
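To make the merge-commit point concrete, here is a minimal, self-contained sketch (throwaway repo, simulated PR merge; all names are illustrative, not project tooling) showing that cherry-picking the merge commit with `-m 1 -x` records the master SHA in the release-branch commit, which is exactly the linkage that applying a raw `.patch` file loses:

```shell
# Sketch only: a throwaway repo simulating a PR merged into master,
# then picked onto a release branch. All names are illustrative.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git symbolic-ref HEAD refs/heads/master
git config user.email demo@example.com
git config user.name demo
git config commit.gpgsign false

# master with an initial commit; cut the release branch from it
echo base > file.txt
git add file.txt
git commit -qm "initial"
git branch release-1.2

# simulate a merged PR: a feature branch merged into master with --no-ff,
# producing the merge commit that the .patch-based script never references
git checkout -qb feature
echo change >> file.txt
git commit -qam "feature work"
git checkout -q master
git merge -q --no-ff -m "Merge pull request (simulated)" feature
merge_sha=$(git rev-parse HEAD)

# cherry-pick the merge commit onto the release branch:
#   -m 1  take the diff against the first parent (master)
#   -x    append "cherry picked from commit <sha>" to the message,
#         so release-branch tooling can find the master merge commit
git checkout -q release-1.2
git cherry-pick -m 1 -x "$merge_sha"
git log -1 --format=%B | grep "cherry picked from commit $merge_sha"
```

The `-x` trailer is what lets tooling walk from a release-branch commit back to the merge commit on master; a commit created by `git am` from a downloaded `.patch` carries no such reference.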
I think 1.1 had some good things (the people who knew the patch got it working in both places when needed) and 1.2 had some good things (most of you were able to just keep working and not worry about it). But I'd suggest a hybrid approach...
The process I described in #23345 (comment) I implemented here at Google for our kernel team many years ago, and it was a huge success. Developers/contributors were very happy with the notification mechanism and with a workflow that let them target changes to multiple branches and have the resolver figure out when ordering or refactoring was an issue, asking them to intervene only when necessary. I don't have specific stats, but certainly fewer than 20% of targeted changes required notifying a developer to resolve an ordering or refactoring issue.
Why do you think batching is a reasonable process after the release has shipped? I think it made sense prior to 1.2.0, but it makes less sense now, as the branch ages and conflicts become inevitable. Are you going to push conflict resolution onto the batcher? That seems wrong.
Ah, I see this thread is split all over the place. Maybe we need a face to face. |
@david-mcmahon The batched cherry picks were great during the time between the 1.2 branch cut and the 1.2 release. I expect we will do the same for the upcoming 1.3 release. Do we want/need to implement automation in the next 2 weeks to make it easier? |
@roberthbailey @eparis if we can capture the basic skeleton procedures here, I can product-ize a script that could be run more easily by more people and get @eparis out of the critical path for this. |
Is there still work planned here, or should this be closed? |
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with a `/lifecycle frozen` comment. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
bump |
Since 1.2 we have moved away from individual cherrypicks and instead batch them up. This should be turned into a button/lever/whatever. Some kind of automated (I don't care about the details) mechanism that any branch 'owner' can make use of easily.
cc @bgrant0607 @eparis
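A sketch of what that button/lever might run, assuming the batch is expressed as a file of merge-commit SHAs queued for the release branch. The function name, arguments, and the skip-on-conflict policy are all illustrative (not existing Kubernetes tooling); the policy follows the hand-hard-cases-back-to-the-author process discussed above:

```shell
# Hypothetical helper, not existing project tooling: given a release branch
# and a file listing merge-commit SHAs queued for the batch, pick each one
# and skip (rather than resolve) anything that conflicts, so conflict
# resolution goes back to the original developer instead of the batcher.
batch_pick() {
    branch=$1
    sha_file=$2
    git checkout -q "$branch"
    while read -r sha; do
        # -m 1: diff against the first parent (master); -x: record provenance
        if git cherry-pick -m 1 -x "$sha"; then
            echo "PICKED $sha"
        else
            git cherry-pick --abort
            echo "SKIPPED $sha: conflicts; needs the original author" >&2
        fi
    done < "$sha_file"
}
```

A real version would also notify the author of each skipped SHA, but the point stands that the batcher itself never resolves conflicts.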