Actions-Runner-Controller support for Gitea Actions #29567
Comments
k8s hooks are technically already usable with Gitea Actions (meaning there is no documentation; the docker compose examples use dind + docker hooks). See this third-party runner adapter: https://gitea.com/gitea/awesome-gitea/pulls/149. Actions-Runner-Controller would require emulating a bigger set of the internal GitHub Actions API. I actually find it interesting to reverse engineer that product too, but I have never dealt with k8s myself.
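For reference, a rough sketch of the runner-side environment that switches the actions/runner container hooks into k8s mode; the values simply mirror the Deployment posted further down in this thread, so treat it as an illustration rather than a verified minimal setup.

```yaml
# Sketch: env vars that enable the k8s container hooks on the runner
# (values mirror the Deployment shared later in this thread).
env:
  - name: ACTIONS_RUNNER_CONTAINER_HOOKS
    value: /home/runner/k8s/index.js      # use the k8s hooks instead of the docker hooks
  - name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
    value: "true"                         # every job must run in a job container pod
  - name: ACTIONS_RUNNER_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name          # the hooks need to know the runner's own pod
```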
Interesting. I wasn't aware you could change the runner implementation just like that; I will definitely look into it. However, given what you said about DinD still being a requirement, I don't think it will change much (we already have our runners on K8s with DinD using an adapted version of gitea/act-runner for K8s, but as mentioned, this comes with many headaches). The goal IMHO would be to be able to start workflows on K8s directly. Possible implementations:
Option one (every job is its own pod) seems like the most promising option in my opinion.
I meant that I haven't created any k8s mode examples or actually tried it yet, sorry for the confusion here. The docker container hooks only allow dind for k8s, while the k8s hooks should use the kubernetes api for container management. I still need to look into getting a test setup running. I can imagine
Well, not using act_runner has limitations when you try to use Gitea Actions Extensions (features not present in GitHub Actions). I think option 1 is more likely to happen than option 2; job scheduling is based on jobs, not on workflows.
k8s hooks work for me using these files on minikube (arm64): actions-runner-k8s-gitea-sample-files.zip
With clever sharing of the runner credentials volume, you could start a lot of replicas for more parallel runners. This works without dind.

Test workflow:

```yaml
on: push
jobs:
  _:
    runs-on: k8s # <-- Used runner label
    container: ubuntu:latest # <-- Required, maybe the Gitea Actions adapter could insert a default
    steps:
      # Git is needed for actions/checkout to work for Gitea, the rest api is not compatible
      - run: apt update && apt install -y git
      - uses: https://github.com/actions/checkout@v3 # <-- Almost the only Gitea Extension supported
      - run: ls -la
      - run: ls -la .github/workflows
```

The runner-pod-workflow is the job container pod, running directly via k8s.
Looks promising. I'll give it a shot and share my findings.
Okay, so... there seem to be some issues with the current setup. Let me share my findings:
Instead of putting the registration token into the manifest in plain text, I reference it from a Secret:

```yaml
- name: GITEA_RUNNER_REGISTRATION_TOKEN
  valueFrom:
    secretKeyRef:
      name: secret_name
      key: secret_key
```

and creating your secret with (take care: K8s is case sensitive):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret_name
type: Opaque
stringData:
  secret_key: "s3cr3t"
```

You shouldn't start pods in K8s directly but rather wrap them in a higher-level resource such as a Deployment, which lets them benefit from the (deployment) controller logic for updates and self-healing. I did that, so the result looks something like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: runner
  name: runner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: runner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: runner
    spec:
      restartPolicy: Always
      serviceAccountName: ci-builder
      #securityContext:
      #  runAsNonRoot: true
      #  runAsUser: 1000
      #  runAsGroup: 1000
      #  seccompProfile:
      #    type: RuntimeDefault
      volumes:
        - name: workspace
          emptyDir:
            sizeLimit: 5Gi
      containers:
        - name: runner
          image: ghcr.io/christopherhx/gitea-actions-runner:v0.0.11
          #securityContext:
          #  readOnlyRootFilesystem: true
          #  allowPrivilegeEscalation: false
          #  capabilities:
          #    drop:
          #      - ALL
          volumeMounts:
            - mountPath: /home/runner/_work
              name: workspace
          env:
            - name: ACTIONS_RUNNER_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
              value: "true"
            - name: ACTIONS_RUNNER_CONTAINER_HOOKS
              value: /home/runner/k8s/index.js
            - name: GITEA_INSTANCE_URL
              value: https://foo.bar
            - name: GITEA_RUNNER_REGISTRATION_TOKEN
              valueFrom:
                secretKeyRef:
                  name: gitea
                  key: token
            - name: GITEA_RUNNER_LABELS
              value: k8s
          resources:
            requests:
              cpu: 500m
              memory: 2Gi
            limits:
              cpu: 1000m
              memory: 8Gi
```

A few changes I made here:
So, those are simply improvement suggestions for the future. For now, as you can see, I've been trying to keep it as simple as possible, but I still run into an issue. The runner starts and registers, but when using the job you provided I run into the following error returned by the job:
/Edit: the root cause seems to be somewhere here: https://github.com/actions/runner/blob/v2.314.0/src/Runner.Worker/Program.cs#L20. In addition, I found that providing a runner config by mounting one and setting the
I haven't gotten this kind of error before (at least not for a year).
Sounds like the message inside the container got trimmed before it reached the actions/runner; based on the error, the beginning was sent to the actions/runner successfully. Maybe some data specific to your test setup might cause this (even parts not in the repo are stored in the message). I would need to add more debug logging to diagnose this.
If you add the logging, I can reproduce the issue if you like. My guess is that it's maybe proxy related, but I can't tell from the error logs.
@omniproc you made changes via the Deployment file that are not compatible with the actions/runner k8s container hooks, and I have no idea if using a Deployment is possible.
The workspace cannot be an emptyDir volume; as in my example files, it is required to be a PersistentVolumeClaim. You can technically change the name of the PVC via
This led to mkdir failing. It would require an emptyDir mount:

```yaml
- mountPath: /data
  name: data
```

Maybe if I create that dir in the Dockerfile it would work without that, as long as your fs is read-write. The nightly doesn't have sudo anymore in the start.sh file, but it can still certainly break existing non-k8s setups as of now.
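In Deployment terms, that mount would presumably look something like the fragment below; the volume name `data` is illustrative, not taken from the thread.

```yaml
# Sketch: fragment of the runner pod spec with an emptyDir backing /data
# (volume name is a placeholder).
volumes:
  - name: data
    emptyDir: {}
containers:
  - name: runner
    volumeMounts:
      - mountPath: /data
        name: data
```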
I found a mistake in the python wrapper file: probably due to RAM resource constraints, os.read read less than expected and shortened the message. I also added some asserts on the return values of pipe communication + env. Please try that nightly image; it should get you to the point where, because you omitted the persistentvolumeclaims from my example, kubernetes cannot start the job pod (also make sure to create an emptyDir mount at /data/).
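For completeness, a minimal sketch of a PersistentVolumeClaim that could back the runner's `_work` volume instead of the emptyDir used in the Deployment above. The claim name and size are assumptions (the name the hooks expect isn't given in this thread); the `workspace` volume would then reference it via `persistentVolumeClaim.claimName` instead of `emptyDir`.

```yaml
# Sketch: PVC backing /home/runner/_work (name and size are placeholders)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: runner-work
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```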
Feature Description
The Gitea Actions release was a great first step, but currently it's missing many features of a more mature solution based on K8s runners rather than single nodes. While it's possible to have runners on K8s, this currently requires DinD, which has its whole set of problems: security issues (privileged exec required as of today) and feature limitations (you can't use DinD to start another container to build a container image (DinDinD)). I know workarounds exist with buildx, but those are just that: workarounds.
I think the next step could be something like what actions-runner-controller is doing for GitHub Actions: basically an operator that is deployed on K8s and registers as a runner. Every job it starts is then run in its own pod rather than in the runner itself; the runner coordinates the pods.
Related docs:
Screenshots
No response