Use vcpkg for Linux CI #7316

Open · wants to merge 57 commits into master
Conversation

@messmerd (Member) commented Jun 13, 2024

This PR moves the Linux CI builds away from our custom Ubuntu 20.04 images to a plain Ubuntu 20.04 runner with vcpkg.

  • Now using vcpkg for multiple Linux dependencies
    • Important dependencies can now stay up-to-date rather than several years old
    • There were issues with a few vcpkg packages, so I had to use system packages (for ALSA, JACK, portaudio, fltk, libgig, lv2, lilv, ...) or other solutions (for Qt) instead. Vcpkg could not build Qt without the runner running out of memory; for the other dependencies, the problem was that CMake could not find them or they caused linker errors.
  • Removes dependence on custom Docker images for Linux CI build
    • Uses plain Ubuntu 20.04 runner
    • Provides deps-ubuntu-20.04-gcc.txt as a living document of (almost) all required system dependencies
    • Makes getting started with LMMS development easier for new devs
  • Updates multiple dependencies including:
    • Qt: 5.12.8 --> 5.15.2 (all CI builds use 5.15.x now)
    • libsndfile: 1.0.28 --> 1.2.2 (enables opus and mp3 support)
    • fluidsynth: 2.1.1 --> 2.3.5
    • libsamplerate: 0.1.9 --> 0.2.2
    • ...
  • Updates qt5-x11embed submodule to fix deprecation warning which was treated as an error (Avoid deprecated QFlags constructor lukas-w/qt5-x11embed#6)
  • Fixes unused variable warnings in SampleDecoder.cpp which were treated as errors

I will try to do MinGW builds with vcpkg next.
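For illustration, a minimal vcpkg manifest covering the dependencies named above might look like the sketch below. This is an assumption, not the actual vcpkg.json from this PR, which may list more ports and version constraints; the port names shown are the standard vcpkg port names for the libraries mentioned.

```json
{
  "name": "lmms",
  "dependencies": [
    "fluidsynth",
    "libsamplerate",
    "libsndfile"
  ]
}
```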

@messmerd (Member Author)
@DomClark Do you have any ideas why the Linux build is failing on the Configure step now?
It probably has to do with caching but I'm unfamiliar with how that works or how to even debug it.

Run ccache --zero-stats
Statistics zeroed
Error: Process completed with exit code 1.

@DomClark (Member) commented Jun 13, 2024

Restoring the cache will create the build directory, so mkdir will fail. 2> /dev/null will discard any error message, but the exit code will still be non-zero. Try using mkdir -p instead.

Edit: according to the documentation, CMake will create the build directory if it doesn't exist, so you should be able to omit mkdir altogether.
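The failure mode and both fixes can be demonstrated directly (a sketch using a local `build` directory; the cmake line is shown as a comment since it needs a CMakeLists.txt to run):

```shell
# Restoring the CI cache already created the build directory, so a
# plain mkdir exits non-zero; "2> /dev/null" only hides the message,
# not the failing exit code that stops the workflow step.
mkdir build                # succeeds on a clean runner
mkdir build 2> /dev/null || echo "second mkdir failed: directory exists"

# Option 1: make the step idempotent with -p
mkdir -p build             # exit code 0 whether or not build exists

# Option 2: drop mkdir entirely; cmake creates a missing build dir:
#   cmake -S . -B build
```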

@messmerd (Member Author)

@DomClark Thanks, that did the trick


@JohannesLorenz (Contributor)
Two general thoughts about this:

  1. Someone recently said they like that we use containers in the CI because it makes it easier for them to just pull the container and directly compile LMMS - compared to an approach where they first need to install all the deps. However, I think this is not a problem - even if we change the CI build script here, the container still remains and can be used for debugging.
  2. If someone asks "How can I install LMMS on Ubuntu", a container could not answer that question easily. Now that the explicit dependencies are named, we can say: "Install these dependencies from vcpkg.json". The list is, of course, only valid for vcpkg, so users either have to use vcpkg as well, or find the corresponding packages for their package manager. However, I think such problems can be solved differently - e.g. on Arch Linux, the packages themselves name their dependencies.

@messmerd messmerd mentioned this pull request Jun 22, 2024
@tresf (Member) commented Jun 23, 2024

Someone recently said they like that we use containers in the CI because it makes it easier for them to just pull the container and directly compile LMMS - compared to an approach where they first need to install all the deps. However, I think this is not a problem - even if we change the CI build script here, the container still remains and can be used for debugging.

I will always advocate against a container-first build in the LMMS project; the additional complexity it adds to the environment has not been well-justified.

Lukas set this up and when I reached out to him privately about this in 2020, this is what he said:

BTW, I'm really unhappy myself with how the LMMS CircleCI Docker setup you linked to turned out. I mainly wrote it to offer low execution times while working around limitations within CircleCI. I was also hoping that the Docker images would help troubleshoot CI issues locally because it could remove the need to replicate the CI environment when you can just use the Docker image. The result is a pain to maintain though and I'd happily help move away from it if I had the time.

In my experience, troubleshooting build issues in a container is about twice as hard as troubleshooting them in an interactive VM or on a native machine, so that cost must be weighed against the complexity of setting up a build environment. In the case of LMMS, our AppImage requirement is an LTS Linux version, so this makes compiling these dependencies by hand -- e.g. with vcpkg -- more attractive for features, libraries or versions of libraries that aren't available through the LTS Linux package manager. I don't believe I would recommend this technique to newcomers, but if we DO eventually start to recommend that new developers use vcpkg, it may make sense to find a way to re-use the ccache or a container for this to speed up build times.

If someone asks "How can I install LMMS on Ubuntu", with a container, this question could not have been answered easily.

I think you mean "How can I ~~install~~ *build* LMMS on Ubuntu", which is very well defined here: https://github.com/LMMS/lmms/wiki/dependencies-ubuntu. When packages change, we should update this. It provides a quick, reproducible environment for aspiring developers without having to troubleshoot container-related, or container-specific, build issues.

"Install these dependencies from vcpkg.json"

The dependencies are listed here: https://github.com/LMMS/lmms/wiki/Compiling#libraries. If this isn't being maintained, we should probably replace it with a link to the appropriate build file.

Whether we keep them around should depend on those that prefer them to interactive VMs or native builds and how willing they are to continue helping maintain them.

@messmerd (Member Author)

Someone recently said they like that we use containers in the CI because it makes it easier for them to just pull the container and directly compile LMMS

I could update our Docker image so that it mirrors the same vcpkg + apt setup from this PR. That way people who use Docker and want it for their local builds can continue using it and have all of vcpkg's up-to-date dependencies.

@tresf One big concern I have about this PR is the cache size. We have to install a lot of APT packages, and some of them take up a lot of disk space. The total cache size for the APT packages is 2.3 GB, which is huge, and the cache for the vcpkg packages is ~193 MB. We are already running low on cache space, and this would only make it worse.

One potential solution is to update our Linux Docker image to use vcpkg and all of the same dependencies as this PR. This would require a copy of deps-ubuntu-20.04-gcc.txt and vcpkg.json in the lmms-ci-docker repo. Then I would simply change the container CI uses from ubuntu-20.04 back to ghcr.io/lmms/linux.gcc:20.04 keeping everything else the same. A single line change. This would effectively cache all the dependencies in the Docker image so we avoid bloating our GitHub Actions cache usage. The scripts for downloading APT and vcpkg dependencies would run as usual, but they'd see that the dependencies are already installed, and ccache should avoid caching them since they existed from the start.

Yes, this would mean going back to using a custom Docker image, BUT we should be able to switch back to a plain ubuntu-20.04 runner any time we want with a single-line change - just at the expense of a larger cache size (and needing to download/build all the dependencies the first time due to the cache miss). Docker would simply be our way of caching dependencies.

What do you think?
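The idea above could be sketched as a Dockerfile along these lines. This is purely illustrative: the base image and the two copied files come from this PR's description, while the paths, the vcpkg checkout location, and the use of vcpkg's experimental `--x-manifest-root` flag are assumptions about how the image might be wired up.

```dockerfile
# Hypothetical sketch: bake the apt + vcpkg dependencies into the CI
# image so the install scripts find them already present and the
# GitHub Actions cache stays small.
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive

# System packages, driven by the same list the CI uses
COPY deps-ubuntu-20.04-gcc.txt /tmp/
RUN apt-get update \
 && xargs -a /tmp/deps-ubuntu-20.04-gcc.txt apt-get install -y --no-install-recommends \
 && rm -rf /var/lib/apt/lists/*

# vcpkg packages, resolved from the same manifest as the CI build
COPY vcpkg.json /opt/lmms-deps/
RUN git clone https://github.com/microsoft/vcpkg /opt/vcpkg \
 && /opt/vcpkg/bootstrap-vcpkg.sh -disableMetrics \
 && /opt/vcpkg/vcpkg install --x-manifest-root=/opt/lmms-deps
```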

@messmerd (Member Author) commented Jul 3, 2024

I removed caching of the system packages since they were contributing the most to the high cache usage.
Fortunately this didn't negatively affect the build times much and the cache size is much more reasonable now.

6 participants