Try out PyPI's trusted publishers #2980
base: master
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@           Coverage Diff           @@
##           master    #2980   +/-   ##
=======================================
  Coverage   99.64%   99.64%
=======================================
  Files         117      117
  Lines       17594    17594
  Branches     3171     3171
=======================================
  Hits        17532    17532
  Misses         43       43
  Partials       19       19
```
Windows 3.8 CI failed: https://github.com/A5rocks/trio/actions/runs/8489525533/job/23259717631#step:4:2288 Not going to debug it here, just keeping track.
I have this setup on my own project; it's really neat. Once everything is built and tested, the run pauses, you get an email, and you can then press the approve button. The security settings relate to the "environment": in the repo settings you can configure who's allowed to authorise the deploy.
Actually, I don't have the settings tab at all; that's fine though. The important transparency bit is that everyone can see the code that's being run, and nobody's able to fiddle with the archive. Regarding the CI config, I think it's better to split it into two jobs: one to build the archive, one to do the upload. You'd use artefact upload/download to communicate between them. This way the sdist gets fully built (and can be inspected) before you grant permission to publish. It also lets you partition the permissions, so minimal code has to execute with access to the PyPI token. See the cibuildwheel setup as an example of that approach.
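The two-job split described above could look roughly like this; job names, artifact names, and the tag trigger are illustrative, not taken from the actual PR:

```yaml
name: Release

on:
  push:
    tags: ['v*']

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python3 -m pip install build
      - run: python3 -m build            # produces sdist + wheel in dist/
      - uses: actions/upload-artifact@v4
        with:
          name: dists
          path: dist/

  publish:
    needs: [build]
    runs-on: ubuntu-latest
    environment: release                 # approval gate configured in repo settings
    permissions:
      id-token: write                    # only this job gets OIDC access
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dists
          path: dist/
      - uses: pypa/gh-action-pypi-publish@release/v1
```

The archives are fully built (and inspectable as uploaded artifacts) before anyone approves the deploy, and only the small publish job ever runs with publishing credentials.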
Alright, I separated out the build step. We'll need to test this a bit but hopefully it's fine -- after all, most of it is copied from that example! I'm not sure how much more secure this is, but might as well do it while I'm here. (After all, the only dependencies we ever even touch should be
Indeed, it's probably not too critical, but it would protect against us adding something else in the future. We could also, say, add a test step in between to verify the dists install and import. That's probably more relevant on a non-pure-Python project, though.
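Such an install-and-import check could sit between build and publish as its own job; this is a sketch that assumes the build job uploaded an artifact named `dists` (both the artifact name and the version-print check are illustrative):

```yaml
  test-dists:
    needs: [build]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dists
          path: dist/
      # install the built wheel rather than the source tree
      - run: python3 -m pip install dist/*.whl
      # smoke test: the package imports and reports a version
      - run: python3 -c "import trio; print(trio.__version__)"
```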
@A5rocks check out my PyPUG guide @ https://packaging.python.org/en/latest/guides/publishing-package-distribution-releases-using-github-actions-ci-cd-workflows/ — it also shows how to publish Sigstore signatures and GitHub Releases that are useful for provenance.
I usually prefer the release workflow to be combined with testing. I put the dist creation job before testing, and then I can test those dists specifically and release exactly the same thing that was tested, as opposed to releasing artifacts built separately in the hope that nothing broke since the test run and that the artifacts are similar enough...
My earlier effort in #2541 aligns perfectly with such an approach. But I understand that you want something simple for now so the inline comments I made are what I recommend focusing on and adjusting.
> Indeed it's probably not too critical, but it would protect against us adding something else in the future.
The attack surface in this case is not what you think. I was the one who insisted that all the guides and examples tell people to separate the jobs. It's exploitable through build-dependency poisoning: if an attacker is able to sneak some code in, they'd be able to access OIDC and whatever it can reach. And that's not only PyPI. Imagine that at some point in the future you set up OIDC for AWS or something else; those would be the likely targets in such a case. This is even less obvious, which is why I always point out to people that separation of concerns and the principle of least privilege are important.
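The least-privilege point can be made concrete in the workflow file itself. A sketch (not the PR's actual config): deny all token permissions at the workflow level, then elevate only where a job truly needs it.

```yaml
permissions: {}              # deny everything by default at the workflow level

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read         # just enough to check out and build the code
    steps:
      - uses: actions/checkout@v4
      - run: python3 -m pip install build
      - run: python3 -m build

  publish:
    needs: [build]
    runs-on: ubuntu-latest
    permissions:
      id-token: write        # only this job can mint an OIDC token
    steps:
      - uses: pypa/gh-action-pypi-publish@release/v1
```

A poisoned build dependency running in `build` then has no way to request an OIDC token, for PyPI or for any other trust relationship added later.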
- uses: actions/upload-artifact@v4
  with:
    name: trio-sdist
    path: dist/*.tar.gz

- uses: actions/upload-artifact@v4
  with:
    name: trio-wheel
    path: dist/*.whl
FWIW I don't think there's a case for putting the dists of a pure-Python project into two separate GHA artifacts. Perhaps you could combine them in a single step?
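A combined step might look like this; the artifact name `trio-dists` is illustrative:

```yaml
- uses: actions/upload-artifact@v4
  with:
    name: trio-dists
    path: |
      dist/*.tar.gz
      dist/*.whl
```

The downstream publish job then only needs a single `actions/download-artifact` step.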
label:
  types: [created]
Label? Looks like this is going to be triggered on any label creation? Perhaps there's a case for sticking some filtering on the job level?
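One way to do such job-level filtering, as a sketch; the label name `release` is hypothetical:

```yaml
on:
  label:
    types: [created]

jobs:
  release:
    # guard: only proceed when the specific label was created
    if: github.event.label.name == 'release'
    runs-on: ubuntu-latest
    steps:
      - run: echo "Triggered by label ${{ github.event.label.name }}"
```

Without the `if:` condition, creating any label in the repo would start the job.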
needs: [build]
name: upload release to PyPI
runs-on: ubuntu-latest
environment: release
- This refers to a GitHub Deployment Environment, a concept corresponding to some deployment target, like PyPI or TestPyPI. So I always recommend naming it `pypi` in all my guides. Plus, William and I are trying to correct this in other places, like GitHub's own docs.
- It's possible to show nice URLs in the places where GH displays deployment-related information. I like using full versioned URLs when there's an easy way to retrieve the version, but at least something like this would do:
```suggestion
environment:
  name: pypi
  url: https://pypi.org/p/trio
```
git clean -xdf  # maybe run 'git clean -xdn' first to see what it will delete
python3 -m build
twine upload dist/*

* approve the release workflow
Nit: it's not the entire workflow but the publish job of the release workflow that will be paused until approved.
Re: your PR about testing the sdist/wheel: I have a few concerns, but I didn't want to think too hard about it yet.
I had some other stuff but I forgot. And now that I think about it, I think we could use that.

As for threat models for building dependencies: I was the one to bring that up, because I don't quite see the point of separating out building (for trio, at the moment, specifically). I don't see why not, though, so might as well do it if others think it's good! Your PR feedback makes sense, I'll do that.
There are a few ways this works. First of all, you can "chain" workflows so one runs after another; in that case, you cannot share state/data between them. A nice way I found to apply this in practice is having a "top-level" workflow that lists the event triggers and the dependencies between different processes, while the processes themselves are declared in reusable workflows. You can pass inputs into such workflows and they can set outputs. These "modules" don't list normal events as triggers.
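A rough sketch of that top-level-plus-modules layout; filenames, job names, and the `artifact-name` input are all illustrative:

```yaml
# .github/workflows/release.yml -- the "top-level" workflow:
# it owns the event triggers and wires the modules together.
name: Release
on:
  push:
    tags: ['v*']

jobs:
  build:
    uses: ./.github/workflows/build.yml       # reusable "module"
  publish:
    needs: [build]
    uses: ./.github/workflows/publish.yml
    with:
      artifact-name: ${{ needs.build.outputs.artifact-name }}

# .github/workflows/publish.yml -- a reusable module: no normal event
# triggers, only workflow_call with declared inputs.
#
# name: Publish
# on:
#   workflow_call:
#     inputs:
#       artifact-name:
#         required: true
#         type: string
```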
Yeah, but the case I describe targets things beyond the Python ecosystem and is therefore more dangerous. Another reason is that you need to have the publishing part protected and conditional. But it's a good idea to have a smoke test building and testing the artifacts in PRs and on pushes. You can also check the metadata with |
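Such a metadata check could be a small CI step; this sketch assumes `twine` is the tool meant (the comment's reference is truncated in the thread, so this is an assumption):

```yaml
- name: Check distribution metadata
  run: |
    python3 -m pip install twine
    # --strict turns metadata warnings into failures
    python3 -m twine check --strict dist/*
```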
This probably won't work first try, but I think this is a good improvement! Since I used to think this was neutral or outright bad, here's a few reasons: `build`, etc. It's not hard, just annoying.

Also, I don't think automating other parts of the release process is worth the effort. Just this.