Provides an Arch Linux package for the GitHub Actions remote runner.
I wanted to use GitHub Actions remote runner to access the GPU on my server. Because of the security risks of having a public runner instance, I put it inside a
systemd-nspawn container. I wanted a convenient way to install the GitHub Actions runner, so I decided to package it up.
First, clone the repository:
```
git clone https://aur.archlinux.org/github-actions.git
```
Then build and install it:
```
cd github-actions
makepkg -fi
```
Configure the daemon:
```
cd /opt/github-actions
./config.sh --token ...
```
Then start it up:
```
sudo systemctl enable --now github-actions
```
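The package ships its own service unit, so there is nothing to write yourself. Purely for orientation, a unit of roughly this shape would run the configured runner — the paths and options below are illustrative assumptions, not the package's actual contents:

```
# Illustrative sketch only - the AUR package installs the real unit file.
[Unit]
Description=GitHub Actions runner
After=network-online.target

[Service]
WorkingDirectory=/opt/github-actions
ExecStart=/opt/github-actions/run.sh
Restart=always

[Install]
WantedBy=multi-user.target
```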
You might want to combine this with a container. To create a container (including some suggested packages):
```
pacstrap -c github-actions base base-devel ruby clang vim sudo
```

(`base-devel` is needed so that `makepkg` works inside the container.)
Then, import it:
```
machinectl import-fs github-actions
```
Create an nspawn configuration file for it:

```
$ cat /etc/systemd/nspawn/github-actions.nspawn
[Network]
Private=no
VirtualEthernet=no

[Files]
TemporaryFileSystem=/tmp
```
Enable and start the container:

```
sudo systemctl enable --now systemd-nspawn@github-actions
```
Attach to the container and install the github-actions package:
```
machinectl shell github-actions
cd /tmp
sudo -u nobody git clone https://aur.archlinux.org/github-actions.git
cd github-actions
sudo -u nobody makepkg -f
pacman -U github-actions*.xz
```
Limit the memory consumption of your container to 2 GiB:
```
systemctl set-property systemd-nspawn@github-actions.service MemoryMax=2G
```
Limit the CPU time usage to roughly the equivalent of 2 cores:
```
systemctl set-property systemd-nspawn@github-actions.service CPUQuota=200%
```
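`systemctl set-property` persists these limits as drop-in files. If you prefer to manage them declaratively, an equivalent drop-in (the file name is arbitrary) would be:

```
# /etc/systemd/system/systemd-nspawn@github-actions.service.d/limits.conf
[Service]
MemoryMax=2G
CPUQuota=200%
```

After creating the file, run `systemctl daemon-reload` for it to take effect.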
To ensure that your disk is not consumed by badly behaved tests or malicious code, apply a filesystem quota (here using ZFS):
```
zfs create -o mountpoint=/var/lib/machines/github-actions -o quota=8G system/machines/github-actions
```
You can attach to the container at any time:

```
sudo machinectl shell github-actions
```
Exposing Nvidia GPU
Add the following to the container's nspawn file (`/etc/systemd/nspawn/github-actions.nspawn`):

```
[Files]
# Expose GPU:
Bind=/dev/nvidia0
Bind=/dev/nvidiactl
```
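Putting this together with the earlier network and `/tmp` settings, the complete `/etc/systemd/nspawn/github-actions.nspawn` would then look like:

```
# /etc/systemd/nspawn/github-actions.nspawn
[Network]
Private=no
VirtualEthernet=no

[Files]
TemporaryFileSystem=/tmp

# Expose GPU:
Bind=/dev/nvidia0
Bind=/dev/nvidiactl
```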
Then, ensure the container unit can see the required devices:
```
sudo systemctl set-property systemd-nspawn@github-actions.service "DeviceAllow=/dev/nvidia0 rwm" "DeviceAllow=/dev/nvidiactl rwm"
```
Finally, ensure the driver is a requirement of the container:
```
sudo systemctl add-requires systemd-nspawn@github-actions.service nvidia-persistenced.service
```
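`systemctl add-requires` records this dependency as a drop-in under `/etc/systemd/system/`. If you prefer to manage it as a file yourself, an equivalent drop-in (the file name is arbitrary) would be:

```
# /etc/systemd/system/systemd-nspawn@github-actions.service.d/nvidia.conf
[Unit]
Requires=nvidia-persistenced.service
# After= is an extra suggestion here: Requires= alone does not order startup.
After=nvidia-persistenced.service
```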
You may also need the container to be running the same driver version as the host: the container shares the host's kernel, so the user-space driver libraries inside the container must match the host's kernel module. For this, I use the `nvidia-dkms` package on both the host and in the container.
You can test this setup using `nvidia-smi`, which should show the same output on both the host and inside the container.
- Fork it
- Create your feature branch (`git checkout -b my-new-feature`)
- Commit your changes (`git commit -am 'Add some feature'`)
- Push to the branch (`git push origin my-new-feature`)
- Create new Pull Request
Released under the MIT license.
Copyright, 2019, by Samuel Williams.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.