
Multiarch docker build, outsource most docker commands to docker-helper script #26

Merged
merged 3 commits into master from illwieckz/multiarch on May 18, 2023

Conversation

@illwieckz (Member) commented Nov 26, 2022

Multiarch docker build, outsource most docker commands to some scripts in docker/ folder.

Docker file now builds:

  • linux-amd64
  • linux-i686
  • linux-arm64
  • linux-armhf
  • windows-amd64
  • windows-i686
  • macos-amd64
  • vm

@illwieckz force-pushed the illwieckz/multiarch branch 4 times, most recently from ec6c75d to e76695c on November 26, 2022 04:31
@illwieckz (Member, Author) commented Nov 26, 2022

After some thought I also enabled linux-armhf by default: if even @IngarKCT uses 32-bit ARM on devices that can run the Dæmon engine, it's a good sign that running 32-bit on 64-bit-capable ARM devices is still not rare.

Also, some of those ARM devices only have proprietary OpenGL drivers for the 32-bit variant, so anyone who wants to run the proprietary driver instead of the open source one, for whatever reason, is expected to run 32-bit ARM.

@illwieckz force-pushed the illwieckz/multiarch branch 3 times, most recently from 3b1df19 to 3a31d6a on November 27, 2022 00:47
@slipher (Contributor) left a comment

The approach of putting several helper scripts in one file is unfortunate, as it breaks caching. For example, if you want to change something about build-external-dependencies, then you have to re-download everything from the package manager again, because every step depends on the script. It would be better if each step were a different script.

If you implement the comment to put all APT steps together and put the not-very-complex git clone step back in the Dockerfile, then you would need just two helper scripts.
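
To illustrate the caching point with a rough sketch (the script names below are made up, not the actual files in this PR): if each Dockerfile step COPYs and RUNs only its own script, say install-packages.sh for the APT step and build-deps.sh for the dependency build, then editing only build-deps.sh keeps the APT layer cached:

docker build -t builder .    # first build: downloads packages, builds dependencies
# ...edit only build-deps.sh...
docker build -t builder .    # APT layer reused from cache; only the later layers are rebuilt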

@illwieckz force-pushed the illwieckz/multiarch branch 3 times, most recently from 1eca52a to 3943d87 on November 29, 2022 11:09
@illwieckz (Member, Author) commented Nov 30, 2022

Ok, so I did a huuuge rewrite.

Now every binary is built in docker, including the macOS ones. The macOS build is done using Darling, which is a kind of Wine, but implementing a macOS compatibility layer on top of Linux.

I was hoping to do everything in a single simple Docker file like before, but right now that doesn't look easily possible.

Darling provides ready-to-use binaries… but only for Ubuntu focal and later, while we build everything else on Debian buster.
At first I hoped to be able to run an Ubuntu focal chroot in the Debian buster docker, but Darling doesn't work in a chroot.
So I had to make one Debian docker and one Ubuntu docker.

Then, running Darling doesn't work in docker without giving docker some privileges that are usually not given. Unfortunately, the options to grant those privileges at docker build time are not yet available in stable Docker releases, so I had to split the making of the Darling docker into one step building the Ubuntu base layer, then another step editing a container based on it using docker run and saving it as an image again.
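
For reference, the workaround amounts to something like this with plain docker commands (the image and script names here are only illustrative, not the actual ones in this PR):

# Build the Ubuntu base image the normal way.
docker build -t darling-base -f ubuntu.Dockerfile .
# Run a container with the privileges Darling needs (not grantable at
# build time with stable Docker) and do the Darling setup inside it...
docker run --privileged --name darling-setup darling-base /setup-darling.sh
# ...then freeze the modified container as a new image and clean up.
docker commit darling-setup darling-ready
docker rm darling-setup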

After that, the tasks building the release binaries are simply run (no caching), as I don't really see the need for it. Everything before building the release targets is cached, including installing Xcode and Homebrew in Darling, and building the Linux external dependencies (which are built right after the repositories are cloned).

A single folder from the host is shared across the various docker images, meaning they all write the release targets into the same folder. Then a simple docker task can optionally be called to merge those build targets into the unizip.
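
Schematically, the shared folder is just the same bind mount passed to every per-target container, something like this (paths and image names are illustrative):

mkdir -p build/release
# Each target writes its zip into the same host directory.
docker run --rm -v "$PWD/build/release:/output" builder-linux /build-release.sh linux-amd64
docker run --rm -v "$PWD/build/release:/output" darling-ready /build-release.sh macos-amd64
# An optional final task can then merge the per-target zips into the unizip.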

Unfortunately I haven't managed to get Darling-in-docker running as non-root (yet?): just setting docker run to use a non-root user is not enough for Darling, and I haven't found a way to run docker build without root either (and that would bring some conflicts anyway), so I had to manually chown the macOS build in the docker, since the output folder is now shared with the host.
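
The chown amounts to something like this (a sketch: the HOST_UID/HOST_GID variables and the image/script names are made up for illustration):

# Pass the host user's ids into the container and give the shared output
# folder back to that user at the end of the macOS build.
docker run --rm -v "$PWD/build/release:/output" \
    -e HOST_UID="$(id -u)" -e HOST_GID="$(id -g)" \
    darling-ready sh -c '/build-release.sh macos-amd64 && chown -R "$HOST_UID:$HOST_GID" /output'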

Due to the extra complexity, Docker isn't meant to be called directly by users anymore; I added the docker-build script to drive everything.

I updated the README with some documentation; basically one can just do this:

./docker-build --reference=af7e8ccf --targets=all --unizip

And get an unvanquished_0.zip unizip with all binaries we know how to build: linux, windows, macos, vm…

Built files are written in build/release/.

@illwieckz force-pushed the illwieckz/multiarch branch 4 times, most recently from 581d41e to acfe650 on November 30, 2022 14:37
@slipher (Contributor) commented Nov 30, 2022

I'm concerned by the Linux supremacy here, so to speak. Before, I could run the Docker build on my Windows machine (with the "Docker Desktop" distribution; it uses the WSL infrastructure but you invoke it from the Windows command shells). Now the only permissible way to invoke it is through a driver script which assumes a Linux host. And regarding the Mac build, I can no longer produce and test a realistic release on a Mac machine, because MacOS is no longer the build environment used for the release.

P.S. does anyone else remember Unvanquished/Unvanquished#2200? 😆

@illwieckz (Member, Author) commented Dec 1, 2022

I'm concerned by the Linux supremacy here, so to speak.

I can rewrite the docker-build script in Python if that's better. The other scripts are meant to be called within Docker, so we don't need to make them compatible with non-Linux systems.

On the specific topic of Windows compatibility for running the docker files, this raises some questions for me:

  • does the docker bind mount (from host to docker) work on Windows? Even if that works, there may be problems coming from the fact that docker builds files with docker's user permissions, and Windows file-system permissions are a lot different from Linux.
  • would the privileged docker work the same on Windows? I assume those privileges give more access to host things, probably Linux kernel features… Darling currently requires privileged docker. Though Darling is known to work on WSL2, it would be interesting to know if that works in docker in WSL2, but that would require more extra layers on Windows…

@illwieckz (Member, Author) commented

Wait, doesn't running Linux containers in Docker on Windows require WSL2 anyway?

@slipher (Contributor) commented Dec 5, 2022

  • does the docker bind mount (from host to docker) work on Windows?

I don't think so

  • would the privileged docker work the same on Windows?

No idea

Wait, doesn't running Linux containers in Docker on Windows require WSL2 anyway?

I described the situation in #26 (comment)

@illwieckz (Member, Author) commented

And regarding the Mac build, I can no longer produce and test a realistic release on a Mac machine, because MacOS is no longer the build environment used for the release.

I don't get this. It is still possible to build the mac release on macOS using the build-release script. And the fact that the release Windows builds are done on Linux/MinGW and not on Windows doesn't seem to be a problem. In fact, a macOS build done on Darling is closer to a build done on macOS than a Windows build done on Linux is to a Windows build done on Windows.

@slipher (Contributor) commented Dec 11, 2022

If you want to run build-release on Windows, it is indeed a significantly different environment from building on Linux. Getting it to work with the MSYS bash requires a lot of testing and tweaking. Similarly, the C compiler stuff may well work fine in Darling; the other parts of the release scripts are where I'd expect the most problems.

@illwieckz (Member, Author) commented Dec 29, 2022

I rewrote the docker-build script using Python.

I also had to force Homebrew to use an older version of itself so it stays compatible with the Xcode CLI tools installable with xcode-select --install, to avoid requiring users to register an Apple ID to manually download Xcode. Homebrew requiring a newer Xcode CLI is something that happened since I opened this PR, because newer versions of cmake provided by Homebrew don't ship Catalina builds anymore, and that would require rebuilding cmake itself.

@illwieckz force-pushed the illwieckz/multiarch branch 2 times, most recently from 2f5a039 to 74dbfd0 on December 29, 2022 05:51
@illwieckz (Member, Author) commented Jan 3, 2023

@slipher would you be able to test this on Windows?

This would be a good start:

python3 docker-build --reference 85dee939 --targets linux-amd64

It should produce build/release/linux-amd64.zip

Edit: Building everything can be done this way:

python3 docker-build --reference 85dee939 --targets all --unizip

If docker is not in the PATH, one can do:

python3 docker-build --docker <docker path>

@slipher (Contributor) commented Jan 4, 2023

Honestly maybe it doesn't matter whether it runs on Windows because if I wanted to test something e.g. with a Linux static build, I would dig out the old version of the Docker script rather than trying to use this.

I could still try it on Windows if you really want.

@illwieckz (Member, Author) commented

I kept the old Docker script so this PR only adds an alternative way of doing things without breaking the old way.

@illwieckz merged commit 4b035aa into master on May 18, 2023