v1.12 still failing with error similar to #291

Description
Dear developer,
Since the release of 1.12, our (self-hosted, rootless docker) workflow started failing with:
/opt/conda/bin/python3: can't open file '/home/githubrunner/actions-runner/_work/_actions/pypa/gh-action-pypi-publish/release/v1.12/create-docker-action.py': [Errno 2] No such file or directory
This seems similar to #291 (the error), but it wasn't fixed by the release of 1.12.2.
Here's the failing workflow run: https://github.com/SBC-Utrecht/pytom-match-pick/actions/runs/11778522196/job/32805113391#step:7:58
Please let me know if we can help track down this issue.
rjdbcm commented on Nov 14, 2024
Confirming that 1.12.2 still fails for my nested composite action.
webknjaz commented on Nov 15, 2024
@sroet have you tried checking the contents of /home/githubrunner/actions-runner/_work/_actions/? That path is constructed using ${{ github.action_path }}, which is supposed to contain a checked-out copy of the action, which in turn would have create-docker-action.py in it. And since it's not there, I'd assume a bug in the GitHub Runner software.

Also, /opt/conda/bin/python3 is untypical for usual GH Runners too.
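For context, an action typically references files it ships with roughly like the sketch below. This is an illustrative action.yml fragment only, not the actual metadata of gh-action-pypi-publish:

```yaml
# Illustrative sketch, not the real action.yml of this project: a composite
# action invoking a script that is checked out together with the action.
name: example-action
description: Shows how an action can reference a file it ships with
runs:
  using: composite
  steps:
    - name: Run a script bundled with the action
      shell: bash
      # github.action_path should point at the directory where this action
      # was checked out, which is where create-docker-action.py would live.
      run: python '${{ github.action_path }}/create-docker-action.py'
```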
ferhatys commented on Nov 15, 2024
I'm getting the same issue for some reason.
ferhatys commented on Nov 15, 2024
Confirmed: release/v1.11 is working for me as a workaround.

sroet commented on Nov 15, 2024
Hey @webknjaz, thanks for looking into this!
To repeat: we are running this on self-hosted runners inside rootless Docker, using the continuumio/miniconda3 image.
That image has the default conda installation in /opt/conda, which is also where /opt/conda/bin/python3 comes from.

Now for /home/githubrunner/actions-runner/_work/_actions/: that directory exists on the host system of the runner, but looking at this line of the setup, specifically:
it seems to get mounted at /__w/_actions instead.

I don't know enough about creating actions to know whether this is a bug in the Runner software, or whether these are variables that are not meant to be used inside actions after the container creation.
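For background, when a job declares a container, the runner mounts its host work directory into that container, which is why the host path and the in-container path differ. A minimal sketch of such a job, assuming the continuumio/miniconda3 image mentioned above:

```yaml
# Sketch of a job running inside a container on a self-hosted runner.
# The runner mounts its host _work directory into the container, so host
# paths like .../actions-runner/_work/... show up under /__w inside the job.
jobs:
  publish:
    runs-on: [self-hosted]
    container:
      image: continuumio/miniconda3   # the image mentioned above
    steps:
      - run: echo "workspace inside the container is $GITHUB_WORKSPACE"
```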
webknjaz commented on Nov 15, 2024
Yeah, I understand, but can you check what's actually on disk on the runner host and within the container? (Just add a recursive ls or something at the beginning of the job.)
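A step along these lines could be dropped in at the beginning of the failing job; a minimal debugging sketch using the paths that appear earlier in this thread:

```yaml
# Minimal debugging sketch: list what is actually on disk, both at the host
# path from the error message and at the in-container work directory.
- name: Inspect action checkout locations
  shell: bash
  run: |
    ls -laR /home/githubrunner/actions-runner/_work/_actions/ || true
    ls -laR /__w/_actions/ || true
```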
sroet commented on Nov 15, 2024
@webknjaz, I started a debug workflow in SBC-Utrecht/pytom-match-pick#241. Should we move any back-and-forth discussion about which folders / variables you want to check over there?
nikaro commented on Nov 18, 2024
@webknjaz this is a known issue, the workaround seems to be using $GITHUB_ACTION_PATH instead of ${{ github.action_path }}.
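In practice, the difference between the two forms looks roughly like the sketch below inside an action's step list. This is illustrative only, not the actual patch from #304:

```yaml
# Illustrative comparison of the two forms; not the actual patch in #304.
steps:
  # Expression form: expanded by the runner before the step runs. In a job
  # container this has been reported to resolve to the host-side path.
  - shell: bash
    run: python '${{ github.action_path }}/create-docker-action.py'

  # Environment-variable form: resolved by the shell at run time, using the
  # path as seen from inside the container.
  - shell: bash
    run: python "${GITHUB_ACTION_PATH}/create-docker-action.py"
```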
sroet commented on Nov 19, 2024
To report back here: after some debugging, we indeed ran into the issue mentioned by @nikaro, and I opened #304 to implement the proposed solution.
For my project, however, I decided to restructure my release workflow to follow current best practices around not having id-token: write at build time, and to also sidestep the container issue. In case anyone else is interested, this is the current workflow file.

In general, the workflow now has 4 jobs, each depending on the previous one to succeed, with the type of runner in []:
twine check, upload artifact

I am fine closing this issue, but maybe it is nice to keep it open until a decision has been made on #304.
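For readers who don't want to open the linked file, here is a trimmed-down sketch of such a split: only two of the jobs, with placeholder names, trigger and pins, not the actual pytom-match-pick workflow:

```yaml
# Trimmed-down sketch of a build/publish split; names, trigger and pins are
# placeholders. Only the publish job gets the OIDC token.
name: release
on:
  release:
    types: [published]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install build twine
      - run: python -m build
      - run: twine check dist/*
      - uses: actions/upload-artifact@v4
        with:
          name: dists
          path: dist/

  publish:
    needs: build
    runs-on: ubuntu-latest        # GitHub-hosted runner, no job container
    environment: pypi
    permissions:
      id-token: write             # only granted to the publish job
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dists
          path: dist/
      - uses: pypa/gh-action-pypi-publish@release/v1.12
```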
virtuald commented on Nov 22, 2024
This is broken on GitHub Actions runners that use containers. Using release/v1.11 as a workaround.
webknjaz commented on Dec 7, 2024
@sroet I wanted to comment on this point. This is not directly related to the action, but I have an opinion to share. Relying on TestPyPI may be dangerous since the project owners are not the same as on PyPI: it's easy to fall victim to dependency confusion / poisoning when somebody registers a project with the name of one of your transitive deps. Another bit is that, due to caching in PyPI's CDN, the newly published dist might not be available immediately, and it may take 10 minutes for the resolver to start "seeing" it.
For these reasons, I recommend just downloading the same dists from the GHA artifact and performing the tests with that. You don't really need to test that pip is able to talk to some index other than PyPI, which is mostly what this workflow implies.

In my most sophisticated workflows, I have this structure where the very first job builds the dists, they are stored, the tests then run against them, and only after that there are conditional bits for publishing and signing. There's really no need to test how well an installer talks to an index. The thing to focus on is testing your project itself.
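A minimal sketch of that test-the-artifact idea, assuming the build job uploaded its dists as an artifact named dists (the artifact name and the smoke test are placeholders):

```yaml
# Sketch: test the exact dists produced by the build job, instead of
# publishing to TestPyPI and installing from there.
test-dists:
  needs: build
  runs-on: ubuntu-latest
  steps:
    - uses: actions/download-artifact@v4
      with:
        name: dists
        path: dist/
    - name: Install the built wheel
      run: python -m pip install dist/*.whl
    - name: Smoke-test the installed package
      # 'your-package' is a placeholder for the real distribution name.
      run: python -c "import importlib.metadata as m; print(m.version('your-package'))"
```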
Here's an example: https://github.com/ansible/awx-plugins/blob/e7d5733/.github/workflows/ci-cd.yml. I even made an action to substitute git clone with testing from an sdist that I've been using in a few places, in almost all the jobs that follow the build one: https://github.com/marketplace/actions/checkout-python-sdist

webknjaz commented on Dec 10, 2024
So, I've recorded this as unsupported in the README: https://github.com/marketplace/actions/pypi-publish#Non-goals. I'm going to close this issue with the understanding that the PR remains open in case @sroet gets to it, and we'll decide whether to merge it separately.