
feat: allow for modifying var-run mount maximum size limit #2624

Merged: 1 commit into actions:master on May 27, 2023

Conversation

@opdude (Contributor) commented May 25, 2023

When running a large number of containers in a single workflow job (via `docker-compose`, for example), the current 1M size of the `var-run` volume is too small. This commit adds a new `dockerVarRunVolumeSizeLimit` parameter that lets users customize the memory limit of the `var-run` volume, allowing more containers to run than the default permits.

fixes #2621
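
For illustration, a minimal sketch of how the new parameter could be set on a RunnerDeployment; the metadata name and repository below are hypothetical, and the field placement follows the full example later in this thread:

apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runners              # hypothetical name
spec:
  template:
    spec:
      repository: your-org/your-repo # hypothetical repository
      # raise the var-run emptyDir size limit from the 1M default
      dockerVarRunVolumeSizeLimit: 50M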
@mumoshu force-pushed the fix-var-run-no-space-left branch from 49845b3 to 00cafe7 on May 27, 2023 02:34

@mumoshu (Collaborator) left a comment

Thank you very much for submitting a detailed report and the fix @opdude - LGTM!

@mumoshu mumoshu merged commit 90ea691 into actions:master May 27, 2023
15 checks passed
@opdude opdude deleted the fix-var-run-no-space-left branch May 29, 2023 22:45

@newman-dani commented Jun 4, 2023

I'm not sure the override is working. I tried deploying a stack of test runners with the parameter added ("50M"), and I'm still getting a 1M volume size.
@opdude @mumoshu

@mumoshu (Collaborator) commented Jun 5, 2023

@newman-dani Hey! Which version of ARC are you using? 😺

@newman-dani commented

Hey @mumoshu, running the latest and greatest :)

@mumoshu (Collaborator) commented Jun 5, 2023

@newman-dani Thanks! We have not cut any ARC release since this was merged (90ea691), so the latest version doesn't include it yet.

You'd need to build your own version of ARC and use the chart from the HEAD of our main branch to give this a shot now!
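
A hedged sketch of what that could look like, assuming the repository's root Dockerfile, its charts/ layout, and the chart's standard image.repository/image.tag values; the registry name is a placeholder:

# build and push a controller image from the main branch
git clone https://github.com/actions/actions-runner-controller.git
cd actions-runner-controller
docker build -t your-registry/actions-runner-controller:main .
docker push your-registry/actions-runner-controller:main

# install the chart from the same checkout, pointing at the custom image
helm upgrade --install actions-runner-controller charts/actions-runner-controller \
  --namespace actions-runner-system --create-namespace \
  --set image.repository=your-registry/actions-runner-controller \
  --set image.tag=main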

@newman-dani commented

@mumoshu, do you have any estimate of when it will be released?

Thanks

@newman-dani commented Jul 2, 2023

@mumoshu
I have rebuilt actions-runner-controller from main and dockerVarRunVolumeSizeLimit still has no effect.

@Langleu (Contributor) commented Jul 4, 2023

@newman-dani, just to add my two cents: using a custom image built from master plus the Helm chart from master results in it working properly.

  - emptyDir:
      medium: Memory
      sizeLimit: 50M
    name: var-run
Filesystem      Size  Used Avail Use% Mounted on
overlay          95G  4.3G   90G   5% /
tmpfs            64M     0   64M   0% /dev
tmpfs            15G     0   15G   0% /sys/fs/cgroup
/dev/sda1        95G  4.3G   90G   5% /runner
tmpfs            48M   12K   48M   1% /run
shm              64M  8.0K   64M   1% /dev/shm
tmpfs            26G   12K   26G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs            15G     0   15G   0% /proc/acpi
tmpfs            15G     0   15G   0% /proc/scsi
tmpfs            15G     0   15G   0% /sys/firmware

Example RunnerDeployment:

apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: agents-n1-standard-8-netssd-preempt
spec:
  # replicas: 0  # (disabled for auto-scaling)
  template:
    spec:
      # increasing /run partition required for docker
      dockerVarRunVolumeSizeLimit: 50M

      labels:
      - n1-standard-8-netssd-preempt-quick

      nodeSelector:
        cloud.google.com/gke-nodepool: agents-n1-standard-8-netssd-preempt
      tolerations:
      - key: agents-n1-standard-8-netssd-preempt
        operator: Exists
        effect: NoSchedule

      serviceAccountName: gha-runner

      # https://hub.docker.com/r/summerwind/actions-runner/tags
      # renovate: datasource=docker versioning=docker
      image: summerwind/actions-runner:v2.305.0-ubuntu-20.04

      resources:
        requests:
          cpu: '7'
          memory: 25Gi
        limits:
          cpu: '7'
          memory: 25Gi

      ephemeral: true

      env:
      - name: DISABLE_RUNNER_UPDATE
        value: "true"

Nevertheless, I agree a proper release would make things a lot easier!

@meirwah commented Jul 10, 2023

I have rebuilt actions-runner-controller from main and dockerVarRunVolumeSizeLimit still has no effect.

Any update on the release?

@newman-dani commented

@Langleu oh, you used a custom image for the runners themselves and not the controller. Will try, thanks!

@Langleu (Contributor) commented Jul 12, 2023

@newman-dani, I used a custom controller image based on the master branch plus the Helm chart changes from master.
For the runner, we're using summerwind/actions-runner, which I think is the default.

We've been running this setup for a week now and it has fixed all of our /run Docker-related issues.

@newman-dani commented

@Langleu, have you installed the controller using a local copy of the Helm chart from main?

@Langleu (Contributor) commented Jul 12, 2023

@newman-dani, essentially yes. We templated it, threw kustomize on top, and then applied it with our changes.
Nevertheless, the important part is solely the CRD changes plus using a custom image built from the main branch.
The diff between 0.27.4 and main is just the CRD changes that introduce dockerVarRunVolumeSizeLimit.

@newman-dani commented

@Langleu, thanks for pointing to the CRDs. I was able to make it work by changing the image in the deployment and simply replacing the CRDs with `kubectl replace -f crds/`. There's no need to use a local Helm chart for it :)
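
A sketch of those two steps, assuming a checkout of main and the default actions-runner-system namespace; the image and container names are illustrative and may differ per install:

# replace the CRDs with the versions from a checkout of main
kubectl replace -f charts/actions-runner-controller/crds/

# point the controller Deployment at a custom image built from main
kubectl -n actions-runner-system set image deployment/actions-runner-controller \
  manager=your-registry/actions-runner-controller:main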

irq0 pushed a commit to irq0/ceph that referenced this pull request Aug 24, 2023
42 containers exhausts the action runners /run directory. See
actions/actions-runner-controller#2624

Signed-off-by: Marcel Lauhoff <marcel.lauhoff@suse.com>
irq0 pushed a commit to irq0/ceph that referenced this pull request Aug 24, 2023
42 containers exhausts the action runners /run directory. See
actions/actions-runner-controller#2624

Remove once runner is updated and has enough space in /run.

Signed-off-by: Marcel Lauhoff <marcel.lauhoff@suse.com>
0xavi0 pushed commits to aquarist-labs/ceph and 0xavi0/ceph that referenced this pull request on Oct 5 and Oct 18, 2023, each with the same message:

42 containers exhausts the action runners /run directory. See
actions/actions-runner-controller#2624

Remove once runner is updated and has enough space in /run.

Signed-off-by: Marcel Lauhoff <marcel.lauhoff@suse.com>

Successfully merging this pull request may close these issues.

Github Actions that run many containers using dockerd sidecar run out of space on /run volume mount