In v1, our workflows automatically publish a new PyPI release when a commit that bumps the version is pushed to master, but only after the build / test step completes (see here).
In v2, we currently have two separate workflows (test and publish), and the publish step appears to run when we add a new tag (i.e. when we make a release through GitHub), regardless of whether tests have passed.
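For reference, a tag-triggered publish workflow looks roughly like the hypothetical sketch below (this is not our actual file; the action versions and secret name are assumptions). Because the tag push is its only trigger, GitHub starts it with no knowledge of whether the separate test workflow has passed:

```yaml
# Hypothetical sketch, not the real workflow file: the tag push is the only
# trigger, so this runs regardless of the status of the separate test workflow.
name: publish
on:
  push:
    tags:
      - "v*"

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Build distributions
        run: |
          python -m pip install build
          python -m build
      - name: Upload to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          password: ${{ secrets.PYPI_API_TOKEN }}  # assumed secret name
```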
I noticed this while releasing both 1.7.0 and 2.0.0-rc.1 today. For both versions, we have tests failing for some OS or Python versions (for reasons we plan to fix later - I know this is not ideal). In the case of 2.0.0-rc.1, the PyPI release was uploaded anyway as soon as I made the GitHub release / new tag. The PyPI release of 1.7.0 has not appeared yet because the tests are still running, but I anticipate that it will not be uploaded because the macOS test will likely time out after a few more hours.
Is one of these approaches more correct or safer than the other? Does it matter? Any thoughts @JacksonBurns, @davidegraff, or others?
I'd prefer we move over to a model like this rather than manually copying linting / syntax-checking / testing code between actions. That is, our CI/CD pipeline should contain several workflows whose dependencies are defined modularly, rather than implicitly by copy-pasting workflow definitions. We should open up a larger discussion, but right now I can think of the following workflows with corresponding triggers (see the sketch after this list):
- build: check that our code is free of syntactical errors
- publish: push a new version to PyPI (and potentially Anaconda)
  - trigger: on every new tag (or however we define a release)
  - main will be a living branch that actively receives new PRs and should reflect the cutting edge of features, but not every push should correspond to a new PyPI release. I personally use tags to logically define the concept of a "release," but I'm open to other ideas here (e.g., pushes to releases/** or something like that)
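A minimal sketch of what that modular setup could look like, assuming we adopt reusable workflows (the file names, dev extra, and secret name below are placeholders, not our actual configuration): the test logic lives in one workflow that exposes `workflow_call`, and the publish workflow calls it as a job instead of copying it.

```yaml
# .github/workflows/test.yml (hypothetical)
name: test
on:
  pull_request:
  workflow_call:        # lets other workflows reuse this workflow as a job

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: python -m pip install -e ".[dev]"   # assumes a dev extra with pytest
      - run: python -m pytest
```

```yaml
# .github/workflows/publish.yml (hypothetical)
name: publish
on:
  push:
    tags:
      - "v*"

jobs:
  test:
    uses: ./.github/workflows/test.yml   # reuse the test workflow, don't copy it
  publish:
    needs: test                          # runs only if the reused tests succeed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install build && python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1
        with:
          password: ${{ secrets.PYPI_API_TOKEN }}
```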
The dependency DAG should look like this (sketched after this list):
- build -> lint: code can't be linted if it's not valid code
- build -> test: code can't be tested if it's not valid code
- test -> publish: code should not be published if it's not passing tests
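As a rough illustration only (every step body below is a placeholder, not our real commands), that DAG maps directly onto `needs:` when the jobs live in one workflow, with a ref check so publish only ever fires for tags:

```yaml
# Hypothetical single-workflow sketch of the DAG above; the step bodies are
# placeholders and only the `needs:` / `if:` wiring is the point.
name: ci
on:
  push:
    branches: [main]
    tags: ["v*"]
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install build && python -m build        # placeholder build check

  lint:
    needs: build          # build -> lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install flake8 && flake8 .               # placeholder linter

  test:
    needs: build          # build -> test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install -e . pytest && python -m pytest  # placeholder tests

  publish:
    needs: test           # test -> publish
    if: startsWith(github.ref, 'refs/tags/')                        # only publish tagged refs
    runs-on: ubuntu-latest
    steps:
      - run: echo "upload to PyPI here"                             # placeholder publish step
```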
Seconded that the automated builds should not run if the tests/formatting/etc. do not pass. I'm thinking "if we are pushing code to PyPI that does not pass our own tests, there had better be a very good reason, and that should require manual intervention".
I have something similar to what is described above implemented here, which gives you a nice job summary like this.
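For the "manual intervention" and job-summary pieces, one option (the environment name and job wiring below are made up, and the approval rule itself lives in the repo's environment settings rather than in YAML) is to publish through a protected environment and append Markdown to the step summary:

```yaml
# Hypothetical fragment: assumes a `test` job exists elsewhere in the workflow.
jobs:
  publish:
    needs: test
    environment: pypi     # protection rules on this environment can require a
                          # manual approval before the job is allowed to start
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install build && python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1
        with:
          password: ${{ secrets.PYPI_API_TOKEN }}
      - name: Write job summary
        run: echo "### Published ${GITHUB_REF_NAME} to PyPI" >> "$GITHUB_STEP_SUMMARY"
```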