Summary
I'm using the ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE in ARC and trying to mount a ConfigMap with specific file modes. Providing the file mode in octal results in a pipeline error during the Initialize Containers part of my job. Switching to decimal resolves the issue, but is significantly more confusing.
BAD
---
apiVersion: v1
kind: PodTemplate
metadata:
  name: runner-pod-template
  labels:
    app: github-actions-runner
    runnnerName: my-github-runner-test
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec:
  imagePullPolicy: Always
  containers:
    - name: $job
      securityContext:
        privileged: true
      env:
        - name: NODE_EXTRA_CA_CERTS
          value: /usr/local/share/ca-certificates/ca.crt
        - name: AWS_CONFIG_FILE
          value: /.aws/config
        - name: AWS_PROFILE
          value: target_role
      volumeMounts:
        - name: github-server-tls-cert
          mountPath: /usr/local/share/ca-certificates/ca.crt
          subPath: ca.crt
          readOnly: true
        - mountPath: /.aws
          name: aws-dir
        - name: iam-config
          mountPath: /.aws/config
          subPath: config
          readOnly: true
        - name: iam-config
          mountPath: /.aws/eks-credential-processrole.sh
          subPath: eks-credential-processrole.sh
      resources:
        requests:
          cpu: 100m
          memory: "4Gi"
  serviceAccountName: my-github-runner-test
  securityContext:
    fsGroup: 1001 # provides access to /home/runner/_work directory in ephemeral volume
  tolerations:
    - effect: NoSchedule
      key: dedicated
      operator: Equal
      value: gitlab
  volumes:
    - name: github-server-tls-cert
      configMap:
        name: my-cacert
        items:
          - key: ca.crt
            path: ca.crt
    - name: aws-dir
      emptyDir: {}
    - name: iam-config
      configMap:
        name: iam-config
        items:
          - key: config
            path: config
            # THIS DOES NOT WORK
            mode: 0777
          - key: eks-credential-processrole.sh
            path: eks-credential-processrole.sh
            # THIS DOES NOT WORK
            mode: 0555
GOOD
---
apiVersion: v1
kind: PodTemplate
metadata:
  name: runner-pod-template
  labels:
    app: github-actions-runner
    runnnerName: my-github-runner-test
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec:
  imagePullPolicy: Always
  containers:
    - name: $job
      securityContext:
        privileged: true
      env:
        - name: NODE_EXTRA_CA_CERTS
          value: /usr/local/share/ca-certificates/ca.crt
        - name: AWS_CONFIG_FILE
          value: /.aws/config
        - name: AWS_PROFILE
          value: target_role
      volumeMounts:
        - name: github-server-tls-cert
          mountPath: /usr/local/share/ca-certificates/ca.crt
          subPath: ca.crt
          readOnly: true
        - mountPath: /.aws
          name: aws-dir
        - name: iam-config
          mountPath: /.aws/config
          subPath: config
          readOnly: true
        - name: iam-config
          mountPath: /.aws/eks-credential-processrole.sh
          subPath: eks-credential-processrole.sh
      resources:
        requests:
          cpu: 100m
          memory: "4Gi"
  serviceAccountName: my-github-runner-test
  securityContext:
    fsGroup: 1001 # provides access to /home/runner/_work directory in ephemeral volume
  tolerations:
    - effect: NoSchedule
      key: dedicated
      operator: Equal
      value: gitlab
  volumes:
    - name: github-server-tls-cert
      configMap:
        name: my-cacert
        items:
          - key: ca.crt
            path: ca.crt
    - name: aws-dir
      emptyDir: {}
    - name: iam-config
      configMap:
        name: iam-config
        items:
          - key: config
            path: config
            # THIS WORKS
            mode: 292
          - key: eks-credential-processrole.sh
            path: eks-credential-processrole.sh
            # THIS WORKS
            mode: 365
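As a workaround, the decimal values for any intended octal mode can be computed directly; a quick sketch (plain Python, not part of the hook or the template):

```python
# Convert intended octal file modes to the decimal form that works
# in the pod template above, e.g. 0o444 -> 292, 0o555 -> 365.
for octal_mode in ("444", "555", "777"):
    decimal_mode = int(octal_mode, 8)
    print(f"octal 0{octal_mode} -> decimal {decimal_mode}")
```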
Error Message
On the Initialize Containers step, I get the following error:
Error: Error: failed to create job pod: Pod "my-github-runner-test-s9klc-runner-96622-workflow" is invalid: [spec.volumes[4].configMap.items[1].mode: Invalid value: 555: must be a number between 0 and 0777 (octal), both inclusive, spec.containers[0].volumeMounts[6].name: Not found: "iam-config", spec.containers[0].volumeMounts[7].name: Not found: "iam-config"]
Error: Process completed with exit code 1.
Error: Executing the custom container implementation failed. Please contact your self hosted runner administrator.
I suspect the hook removes the 0 prefix from the octal notation, causing the pod YAML that the controller actually creates to contain an invalid value.
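One possible explanation (an assumption, not verified against the hook's source): if the hook's YAML parser follows the YAML 1.2 core schema, a bare `0555` is no longer octal and is read as the decimal number 555, which is exactly the invalid value reported in the error above. A minimal illustration of the two interpretations in plain Python:

```python
# How the literal "0555" can be read two different ways:
as_decimal = int("0555")    # YAML 1.2 core schema style: plain decimal
as_octal = int("0555", 8)   # YAML 1.1 style: leading 0 means octal
print(as_decimal)  # 555 -> exceeds 0o777 (511), hence the API rejection
print(as_octal)    # 365 -> the permission bits actually intended (0o555)
```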