As discussed in today's Wasmtime meeting, I'd like to open this issue on the topic of improving the reliability of `apt-get install` in CI. The background here is:
- We build release binaries for Wasmtime in older docker images for increased glibc compatibility
- Release binaries start from "blank" older docker images and must, for example, `apt-get install` packages like `gcc`
- This installation process can fail, as seen in "Tracking issue for failures to install packages on Ubuntu 16.04" (#10891), and can sometimes fail quite a lot
- There's no concrete understanding of what's happening here, but the hunch is that GitHub Actions runners got block-listed for a while, which meant that nothing worked.
We discussed three major ways of improving reliability here:
### Use static linking for binaries
This would involve switching to `*-musl` targets in Rust instead of the `*-gnu` targets. While I personally know how to build static binaries on x86_64, I'm less certain whether other architectures like s390x/aarch64/riscv64 are supported on the Rust side of things. This also comes with the downside that musl generally has poorer allocator performance than glibc. I'll note that I'm not too worried about binary size with static linking: the `wasmtime` binary is already quite large due to including all the features, so adding in musl wouldn't add too much on top.
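For the x86_64 case at least, a minimal sketch of what such a build might look like (assuming the standard `rustup`/`cargo` flow; untested against Wasmtime's actual release build flags):

```sh
# Fetch the musl target; Rust's musl targets link statically by default.
rustup target add x86_64-unknown-linux-musl

# Build the release binary against musl instead of glibc.
cargo build --release --target x86_64-unknown-linux-musl

# Sanity-check that the result is statically linked.
file target/x86_64-unknown-linux-musl/release/wasmtime
```

Whether equivalent musl targets exist, and at what support tier, for the other architectures is exactly the open question above.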
### Use pre-built docker images instead of blank ones
If we were, for example, to use ghcr.io, then we could "simply" download the image and then run within that image, with no package installation necessary. The assumption here is that downloading the image is probably more reliable than installing packages (especially if it's GitHub infrastructure already). The main downside with this (in my opinion) is orchestrating these images. We have a nice property today where, if you want to change a docker image, it can be done in the same PR to this repository. Preserving this property, i.e. keeping it easy to change the images, can be somewhat difficult depending on the solution here.
For example, one option would be to move all our docker images to a separate repository. That repository would presumably publish new docker images on each push to `main`. This wouldn't happen too often, but it would mean that changing something in this repository would require changes in two repositories and orchestrating that.
Another possible option would be to build/publish images from this repository, but that would require some degree of finesse and fanciness in the `*.yml` CI configurations which at least I'm not personally certain how to do. (Anyone else have experience with this?)
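To make that option concrete, here's a hypothetical sketch of what a publish step might run, assuming the Actions-provided `GITHUB_TOKEN` has `packages: write` permission; the image name and Dockerfile path are made up for illustration:

```sh
# Hypothetical image name and in-repo Dockerfile path, for illustration only.
IMAGE=ghcr.io/bytecodealliance/wasmtime-ci-x86_64-linux:latest

# Authenticate to GitHub Container Registry with the workflow's token.
echo "$GITHUB_TOKEN" | docker login ghcr.io -u "$GITHUB_ACTOR" --password-stdin

# Build the image from the in-repo Dockerfile and publish it.
docker build -t "$IMAGE" -f ci/docker/x86_64-linux.dockerfile .
docker push "$IMAGE"
```

The finesse would be in deciding when this runs (e.g. only on pushes to `main` that touch the Dockerfiles) and how a PR that changes an image gets tested against a freshly-built image rather than the last published one.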
### Use alternative package mirrors
Another option would be to expand the set of mirrors that are used by default for package installation. I believe that `archive.ubuntu.com` is used by default, and I don't think any other mirrors are enabled out of the box. Personally I don't know how to concretely configure this beyond knowing that it should be possible. So, much like with the pre-built images: does anyone have experience with this? Should we drop some "one liners" in our images to, by default, pull in more mirrors?
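For reference, a sketch of what such a one-liner might look like in a Dockerfile `RUN` step, under the assumption that apt's `mirror://` method is available in the base image's apt version (this would need verifying on the older 16.04 images):

```sh
# Retry transient fetch failures a few times instead of failing immediately.
echo 'Acquire::Retries "3";' > /etc/apt/apt.conf.d/80-retries

# Rewrite the default source to apt's "mirror" method so it can pick any
# mirror from Ubuntu's dynamically-generated mirror list as a fallback.
sed -i 's|http://archive.ubuntu.com/ubuntu/|mirror://mirrors.ubuntu.com/mirrors.txt|g' \
    /etc/apt/sources.list

apt-get update
```

Even just the `Acquire::Retries` setting on its own might paper over a good chunk of transient failures.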