Closed
Labels
bug (Something isn't working), gha-runner-scale-set (Related to the gha-runner-scale-set mode), needs triage (Requires review from the maintainers)
Description
Checks
- I've already read https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/troubleshooting-actions-runner-controller-errors and I'm sure my issue is not covered in the troubleshooting guide.
- I am using charts that are officially provided
Controller Version
0.10.1
Deployment Method
Helm
Checks
- This isn't a question or user support case (For Q&A and community support, go to Discussions).
- I've read the Changelog before submitting this issue and I'm sure it's not due to any recently-introduced backward-incompatible changes
To Reproduce
- Deploy a k3s cluster as described here, configuring:
  containerMode:
    type: "dind"
- Upload a container image to ghcr.io and create a workflow using it.
- Start the workflow.
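For reference, a minimal sketch of an installation matching the steps above, following the official gha-runner-scale-set Helm charts. The release names, namespaces, and the `githubConfigUrl`/token placeholders are assumptions, not taken from this report:

```shell
# Install the ARC controller (chart locations per the official ARC quickstart).
helm install arc \
  --namespace arc-systems --create-namespace \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller

# Install a runner scale set in dind container mode (placeholders must be filled in).
helm install arc-runner-set \
  --namespace arc-runners --create-namespace \
  --set githubConfigUrl="https://github.com/<org>/<repo>" \
  --set githubConfigSecret.github_token="<PAT>" \
  --set containerMode.type="dind" \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
```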
Describe the bug
The workflow gets to the "Initialize containers" stage and logs:
latest: Pulling from synthara/csdk-base
7478e0ac0f23: Pulling fs layer
...
f956ae08c01e: Verifying Checksum
f956ae08c01e: Download complete
It is then stuck there indefinitely.
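When a job hangs at "Initialize containers" like this, it can help to confirm whether the dind sidecar ever finished loading the image. A diagnostic sketch, assuming the default chart layout (the `arc-runners` namespace, the `dind` container name, and the pod-name placeholder are assumptions):

```shell
# List runner pods created by the scale set (namespace is an assumption).
kubectl get pods -n arc-runners

# Tail the dind sidecar of a stuck runner pod to see where the pull stalls.
kubectl logs -n arc-runners <runner-pod-name> -c dind --tail=50

# Check which images the embedded Docker daemon actually holds.
kubectl exec -n arc-runners <runner-pod-name> -c dind -- docker images
```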
Describe the expected behavior
The workflow runs through.
Additional Context
minRunners: 1
maxRunners: 5
containerMode:
  type: "dind"
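For context, the snippet above would sit in the scale set's Helm values file; a fuller sketch is below. The `githubConfigUrl` and secret values are placeholders, not from the report:

```yaml
githubConfigUrl: "https://github.com/<org>/<repo>"
githubConfigSecret:
  github_token: "<PAT>"
minRunners: 1
maxRunners: 5
containerMode:
  type: "dind"
```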
Controller Logs
https://gist.github.com/Time0o/eb8df78ff17d9126939140750a99ff15
Runner Pod Logs
Here are the logs of the runner in question, the last three lines repeat ad infinitum:
https://gist.github.com/Time0o/97cb65fd92fb267d291ee62d2e2bb51e