Custom CI build machines for "specialized build configurations" #202

Open · boris-kolpackov opened this issue Aug 1, 2022 · 0 comments
Labels: enhancement (New feature or request or improvements over current feature)

@boris-kolpackov (Member):

Some packages may have additional "system requirements" (such as third-party packages, services, etc.) that may not be possible to provide as part of the standard CI service. Some specific examples that have popped up:

  1. Vulkan SDK that is required in order to build/test certain imgui backends. This SDK comes from a third party and committing to installing and keeping it up to date on all the CI machines does not seem feasible. For background, see: Missing dependencies on bdep ci build2-packaging/imgui#10

  2. Database access that is required in order to run ODB (C++ ORM) tests.

The question is how to handle this.

One suggestion by @Klaim (copied from the above-mentioned imgui issue):

I was thinking that maybe a kind of submission system allowing requesting to check the package owner's CI instead of relying completely on the community CI would help? Something like: I've made a lib requiring something like fmod (the audio library), can't run it on the build2 CI so when I send my package for publishing I deactivate almost all CI checks but I set a flag or command that requests a package reviewer to come check my own public CI (probably set with github-actions), see if it's good enough. I'm not sure if that would be sufficient for publishing with reasonable security?

While this would probably be the easiest solution, I think it has serious drawbacks. The build2 CI service performs really elaborate testing that looks into all kinds of corner cases, and I doubt anyone would be able to approximate it using a generic CI service. In a sense, it's the "common reference point": if someone says they have an issue with package X on platform Y, the first thing we do is look up the CI log for X on Y and try to understand the difference between the way the CI builds and the way the user builds. And because the build2 CI exercises all the common scenarios, in most cases we have something to compare to. It would be really unfortunate if we lost this ability. So I think this option should basically be the last resort, and such packages should probably never be migrated to the stable section of the repository (or maybe we should invent a separate section, like untested).

So I think we should try hard to handle this as part of the build2 CI service. For some background (see The build2 Build Bot manual for details), the CI service consists of the controller that hands out the build tasks (currently the only controller implementation is brep) and a number of build bot agents (bbot) that periodically contact the controller offering to perform a build for the build configurations that they support. Currently agents are running on physical machines and the build configurations that they support correspond to the QEMU/KVM virtual machines (VMs) that are present on each physical machine.
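To make the pull model above concrete, here is a toy simulation in shell: the "controller" holds queued builds as files named after the configuration they require, and each "agent" polls for tasks matching the configurations it offers. This illustrates only the scheme, not brep/bbot's actual protocol (which is described in the Build Bot manual).

```shell
#!/bin/sh
# Toy simulation of the controller/agent pull model. Not the real
# brep/bbot protocol -- an illustration of the scheme only.
set -eu

queue=$(mktemp -d)

# Controller side: enqueue builds for two configurations.
echo "build hello/1.0"  > "$queue/linux_gcc"
echo "build libfoo/2.1" > "$queue/windows_msvc"

# Agent side: one poll cycle for an agent offering the listed
# configurations; it only takes tasks it is able to build.
agent_poll() {
  for config in "$@"; do
    if [ -f "$queue/$config" ]; then
      echo "agent takes $config task: $(cat "$queue/$config")"
      rm "$queue/$config"   # each task is handed out exactly once
    fi
  done
}

agent_poll linux_gcc   # matches a queued task
agent_poll linux_gcc   # queue for this configuration is now empty
```

The point of the file-per-configuration naming is that an agent offering a "specialized" configuration (say, one with the Vulkan SDK pre-installed) would simply poll for that configuration name, and the controller never needs to push anything to it.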

So I think the first option would be for someone to run an agent on some hardware that offers "specialized" build configurations. For example, someone could create VMs for all the major platforms and compilers with the Vulkan SDK pre-installed. We would then add this agent as trusted to cppget.org and the specialized build configurations that it offers as available. The result would be indistinguishable from the standard build configurations. While this is probably the cleanest way to do it, it is quite involved and would probably only be acceptable to organizations that can dedicate some hardware and people to the effort (for example, LunarG could provide this service for their Vulkan SDK if they were interested).

The next level would be for us (as in, the build2 project) to run the VMs on our hardware but for someone else to set up and maintain the VMs with the necessary stuff pre-installed. We would definitely be open to this for widely-used configurations, such as the Vulkan SDK. This is still quite a bit of effort; we could probably provide the VM templates to help with that, though there are licensing issues for Windows and macOS.

A variant of the previous approach would be for someone to provide fully automated setup scripts and for us to prepare the VMs ourselves. While this would sidestep the licensing issues, it would also place a higher burden on us, especially if the scripts have bugs and require back-and-forth debugging. But we are prepared to give this a try.
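One possible shape for such a setup script, sketched here for a Debian/Ubuntu VM. The repository URL, key path, and package name below are placeholders, not the real LunarG (or any vendor's) endpoints; the version pinning reflects the "deliberate upgrades" point discussed further down.

```shell
#!/bin/sh
# Hypothetical bootstrap-time install script for a specialized build
# configuration (here: the Vulkan SDK). All URLs and package names are
# placeholders, not real vendor endpoints.
set -eu

SDK_VERSION="${SDK_VERSION:-1.3.250}"  # pinned deliberately; bumped on purpose
DRY_RUN="${DRY_RUN:-1}"                # default to dry-run for illustration

run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

install_vulkan_sdk() {
  # Add the (placeholder) third-party apt repository and its signing key.
  run wget -qO /tmp/vendor-key.asc https://example.com/vulkan-sdk/key.asc
  run cp /tmp/vendor-key.asc /etc/apt/trusted.gpg.d/vulkan-sdk.asc
  run sh -c "echo 'deb https://example.com/vulkan-sdk stable main' \
    > /etc/apt/sources.list.d/vulkan-sdk.list"
  # Install the pinned version so the configuration only changes when we
  # decide to change it, not when the vendor publishes a new release.
  run apt-get update
  run apt-get install -y "vulkan-sdk=$SDK_VERSION"
}

install_vulkan_sdk
```

A script in this style is easy for us to review before running it on our VMs, and the dry-run mode makes the back-and-forth debugging mentioned above cheaper, since the author can see exactly what would be executed.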

I guess the natural question to ask, given that we have such scripts, is why not execute them on the fly during each build (similar to how it's done on generic CI services such as GitHub Actions)? We could do this, and even go one better: the VMs go through a once-off bootstrap process where we build the build2 toolchain, etc., so we could also run these extra install scripts as part of that process. We've tried that in the past with automatically installing Clang from LLVM's own apt repositories. However, the experience was quite flaky. Also, if we install the latest versions of such extra components, things become too "fluid", with stuff changing in subtle ways and at unpredictable points. So I think if we go this route, upgrading to newer versions of these components should be a deliberate (and hopefully infrequent) action.

Perhaps we could have our own (git) repository of "actions" (extra install scripts) that build bots would automatically check out and execute for certain build configurations during this bootstrap process.
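A minimal sketch of how a build bot might consume such an "actions" repository during bootstrap, assuming a hypothetical one-directory-per-configuration layout (the repository structure and names here are invented, and the demonstration uses a throwaway local repository standing in for the central one):

```shell
#!/bin/sh
# Sketch of the proposed "actions" repository: one install script per
# specialized build configuration, checked out and executed by the build
# bot during the once-off VM bootstrap. Layout and names are hypothetical.
set -eu

run_config_actions() {
  repo="$1"; config="$2"; dir=$(mktemp -d)
  git clone -q "$repo" "$dir"
  # Run the script for this configuration, if the repository has one.
  if [ -x "$dir/$config/install.sh" ]; then
    "$dir/$config/install.sh"
  else
    echo "no actions for $config"
  fi
}

# Demonstration against a throwaway local repository.
demo=$(mktemp -d)
mkdir -p "$demo/linux_debian_12-gcc-vulkan"
printf '#!/bin/sh\necho installing Vulkan SDK\n' \
  > "$demo/linux_debian_12-gcc-vulkan/install.sh"
chmod +x "$demo/linux_debian_12-gcc-vulkan/install.sh"
git -C "$demo" init -q && git -C "$demo" add -A
git -C "$demo" -c user.email=ci@example.com -c user.name=ci commit -qm init

run_config_actions "$demo" linux_debian_12-gcc-vulkan
run_config_actions "$demo" windows_msvc-vulkan
```

Keeping the scripts in a git repository would also give us the deliberate-upgrade property mentioned above: the bots could check out a pinned commit, so a configuration only changes when someone advances that pin.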

@Klaim Klaim added the enhancement New feature or request or improvements over current feature label Aug 1, 2022