Consider doing release candidates #10882
You might be interested in some of the discussion here: https://discuss.python.org/t/community-testing-of-packaging-tools-against-non-warehouse-indexes
I think the success of this approach depends on the group of users willing to test such a released RC. I am already subscribed to the project notifications, but this might slip through the cracks of the few hundred notifications I get a day. We could, however, build a list of users who get notified by creating an issue on GitHub and mentioning them, similarly to what we do in Airflow when we prepare releases (in our case we automate the generation of that list). Example here: apache/airflow#20208. Happy to help with automating such list preparation if needed.
Maybe we could gather here the list of people who would be willing to help? Commenting or a "hands up" could serve as a statement of intent.
I'm still not clear why anyone interested couldn't simply install from master to do the testing. At least as an initial approach, that would give us a chance to get a sense of how much use this sort of pre-release testing would be. (And frankly, it would help the pip maintainers to convince ourselves that pre-releases aren't simply wasted effort, which is the lesson we learned from our previous ventures down this road...)
I think it's because it's much easier, and you want to reduce friction (so that people do it more willingly). In our case the reason is rather simple: running a full suite of tests involving many test cases is as simple as changing the version here: https://github.com/apache/airflow/blob/main/Dockerfile#L51 and making a PR. For others there might be other reasons: people in corporate environments might simply have git access blocked or not allowed by their corporate rules/firewalls. Also, by organising people around a "planned" release, publishing an RC, and reaching out to them with "hey, I need your help", you can build stronger social bonding and engagement around doing something "together". This is IMHO the most important reason: you stop treating everyone as an individual and instead organise them around a common goal. This is the basic principle of why communities exist in the first place. So I think there are very good reasons, both technical and social, why "hey you random people, test whenever you want from main" is much worse than "hey, I need your help to test this release candidate in pip that we worked on for the last few months".
BTW, I know it might sound strange, but I am pretty good at this kind of community organisation, and if I can be part of it, I am happy to volunteer some of my time to help with engaging people, once you decide to try it. I am really the last person to just say "do this, because you should".
People shouldn't just pin and leave it at that. They should pin and enable something like Dependabot, which gives the best of both worlds. Perhaps documenting these best practices more clearly would also help?
I think the willingness to do this is related to how easy it is to use the preview version. If there's an RC released:

Versus "I've been developing in Python for 15 years, but I have no idea how to even start installing pip from main". Also, just using main feels too fragile even for development environments: there's no guarantee that even the automatic tests were run. I believe that making an RC release does not have to be time-consuming. Using https://docs.github.com/en/rest/reference/releases, I think it is possible to create one with a simple script taking just two parameters: version_number and commit_number.
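The two-parameter script idea above could be sketched roughly like this. The payload shape targets GitHub's "create a release" REST endpoint (`POST /repos/{owner}/{repo}/releases`); the function name is an illustrative assumption, and actually sending the request (authentication, HTTP client) is omitted:

```python
import json

# Illustrative sketch of the "simple script with 2 parameters" idea:
# build the JSON body for GitHub's "create a release" endpoint.
# Sending it would require an authenticated HTTP client, omitted here.

def build_release_payload(version_number: str, commit_number: str) -> dict:
    """Build the request body for creating a pre-release tag."""
    return {
        "tag_name": version_number,          # e.g. "22.1.0rc1"
        "target_commitish": commit_number,   # commit SHA (or branch) to tag
        "name": f"pip {version_number}",
        "prerelease": True,                  # mark it as an RC, not stable
    }

if __name__ == "__main__":
    print(json.dumps(build_release_payload("22.1.0rc1", "0123abc"), indent=2))
```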
Really? Our policy is that main should always be in a releasable state, and we keep our CI green basically all the time (barring breakages that we've not had time to investigate). Anyway, I'm fine with this as an idea. I think there's value here, but someone has to do the work of communicating about this and actually setting up the broader culture/group of users who'll actually test things.
I'm pretty sure that it's possible. It's also documented under "VCS support" in our documentation.
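For reference, installing the in-development pip via the VCS support mentioned above is a one-liner (assuming git is installed and network access to GitHub is allowed):

```shell
# Install pip straight from the main branch, using pip's documented
# VCS support (requires git and access to github.com).
python -m pip install --upgrade git+https://github.com/pypa/pip.git@main
```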
If you are OK with that, I am happy to help with the next bigger release you plan. I can also add some automation around it, gather the list of people/issues, and prepare some communication templates.
Given that we're currently working almost entirely on assumptions and instinct, could you add into that some work to collect metrics, to allow us to evaluate the benefits more objectively? Things like: how many people actually tested, what type of users they were (for example, did we get any testers from the sorts of corporate infrastructure that were hit by the HTML5 issue), and whether any bugs were filed/fixed as a result of the RC. Otherwise we could end up with extra process but no certainty that we're gaining anything from it. I'm still not convinced RCs will help, but I'm willing to be persuaded.
I think some of that can be done (numbers of people and the types of issues they tested, for example), which might let us extrapolate the types of users that did test. However, if we want to keep the effort low, some metrics cannot easily be collected: basically, anything that requires testing users to provide structured information beyond "tested, works" defeats the purpose of low friction for users and low effort for maintainers. I am sure we can run it as an experiment and come up with more metrics as we go, but treating it as an experiment, we should start with something very simple and add more as we progress (possibly working out more metrics as we learn). We have no baseline currently, so we will never know "for sure" whether things improved. But I am sure that if we do a few releases as an experiment we will be able to get some numbers (and better ones every time we do it). That's the usual way I work, scientifically: an Improve -> Observe -> Measure loop repeated many times is far more efficient than trying to work everything out upfront.
If there is a general consensus, I am happy to spend some of my time producing an example issue/approach that could be used for the next release, based on past ones. It could be a good way to see whether this is a workable first step.
Spending a few minutes to note thoughts pertaining to this topic: overall, I feel that the solution here is not adding release-time processes (or automation; we have enough of that), but rather making changes via the more gradual change-management mechanisms we already have. We didn't use them for one change in this release, and as far as I'm concerned, that was the issue with 22.0. That isn't to say that I think pre-releases are a bad idea; there's some version of this where a pre-release would've caught the main issues with 22.0. Rather, I don't think we'd get to that level of testing prevalence without significant effort, and I'm pretty sure we'd be slightly worse off if the effort to set this up were abandoned halfway. If someone's willing to put down effort toward that, then they're welcome to! I'm wary of asking someone to pick this up though. :)
Notably, our experience with pip 10.0 is part of the hesitation here: we had multiple betas, a bunch of announcements, and a decent amount of communication around a release that we knew had a fairly significant change. It still hit a lot of issues, because multiple projects did not care about that change until we cut a stable release. It was still painful overall.
I understand the hesitation and the bad experiences, but I think if we try it as an experiment, as you mentioned, there is not a lot of overhead in preparing betas (happy with that name, no problem). I think a lot has changed since version 10 (2018):
I do not know exactly what you did in 2018, but I think there is also a difference in the way I propose to reach out. I do not want to reach out "in general" via mailing lists etc. I want to reach out to specific people, individually: those who were involved in issues and PRs of pip. What works in this case is building "mutual expectations" on an "individual", not a "corporate" level. This is a purely psychological effect: if someone wanted something from pip, they are far more likely to respond when pip personally asks for their help in return. This also has a much more important effect: it reinforces exactly the behaviours you were complaining are lacking. I believe you complained that "users" only complain but do not actually help or pay money to keep things running. By specifically reaching out to those people individually, pip gives them a concrete, low-cost way to help. And even the people who say "I do not want to be on that list any more" will have no grounds to complain: they voluntarily declared that they have no interest in pip's pre-releases. This is just psychology: building the right "mutual" expectations and a process with a psychological "self-reinforcing" effect. With every iteration there will be more people who are not only more engaged but also painfully aware of the consequences of their own inaction.
In the end, it's pip's maintainers who decide.
I calculated some stats based on what we have done in Airflow:
Some summary: the stats cover the period June 2021 to Feb 2022. In total we had 15 releases of 350 provider packages. Those provider packages resolved 612 issues. Of those 612 issues, 264 were tested, which means our community tested 43% of all issues (features and bugfixes). The per-provider test ratio was much higher (hard to say exactly without more complex logic). There were 196 people involved, of which 88 actively took part and commented on their issues (either "yes, it works" or "no, it does not work"), which is a 44% response rate. There were 4 people who commented without being directly involved (this basically means that across these 15 releases only 4(!) people commented on an issue when they were not directly mentioned in it). I leave it here for consideration: these are the stats we have in Airflow, so they might not be directly relevant for pip.
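As a quick sanity check on the arithmetic above (using only the numbers quoted in the comment; note that 88/196 actually rounds to 45% rather than 44%):

```python
# Recompute the headline ratios from the Airflow stats quoted above.
tested, total_issues = 264, 612
responded, involved = 88, 196

print(f"issues tested by the community: {tested / total_issues:.1%}")
print(f"response rate of mentioned users: {responded / involved:.1%}")
```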
We have only supported installing `pip` from released packages with a version number, but since `pip` does not publish release candidates (as extensively discussed in pypa/pip#10882) we cannot use released versions for that. We still want to help the `pip` maintainers and test the versions they release as early as possible, so we add support for installing `pip` in our toolchain from a GitHub URL. That will allow us to test a new `pip` version as soon as a designated branch of `pip` contains something resembling a release candidate ready for testing.
What's the problem this feature will solve?
This is a very popular project, used by millions. Still, it is possible that some major version might break functionality or interoperability with other software. This is normal and expected, but when it happens it is stressful for the project's users and also for its developers. It also adds hours of work for the many people analyzing and fixing their broken builds, and for project owners developing fixes at inconvenient times.
Describe the solution you'd like
Please consider releasing a -rc1 version a few days before releasing a new minor or major version.
For example, a few days before releasing 22.1.0, please release 22.1.0rc1, so that `pip install --upgrade --pre pip` will install it. Then some users might use `--pre` for their development builds, notice any errors, report them, and allow them to be fixed in 22.1.0rc2, and so on. By the time 22.1.0 is released, there is a higher probability that breaking changes will already have been caught.
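The `--pre` behaviour described above relies on PEP 440 pre-release ordering. A minimal sketch using the `packaging` library (the same version model pip implements) shows why an rc is invisible to regular users and never overrides the final release:

```python
# PEP 440 pre-release semantics, via the `packaging` library.
from packaging.version import Version

rc = Version("22.1.0rc1")
final = Version("22.1.0")

# RCs are pre-releases: pip skips them unless --pre is passed.
print(rc.is_prerelease)
# The RC sorts strictly before the final release, so once 22.1.0
# is out, a plain upgrade always prefers it over 22.1.0rc1.
print(rc < final)
```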
Alternative Solutions
Some might prefer to pin pip to a known working version instead. But this is not a good solution: pip interacts with an online service and might break simply because the service changes. It also leaves people on an unmaintained version after months or years, and the passage of time makes it harder and harder to upgrade across many major releases, further pushing people toward ever more outdated versions.
Additional context
I think that if a pip-22.0.0rc1 had been released a few days before pip-22.0.0, with a documented recommendation to use `--pre` in development builds, the errors related to #10825 might have been avoided. And it was stressful for many, at least judging by the tone of the comments there.