
[Discussion] Heads buildstack directions, past and future #927

Closed
tlaurion opened this issue Dec 12, 2020 · 14 comments

@tlaurion
Collaborator

tlaurion commented Dec 12, 2020

@daym @Thrilleratplay @osresearch @flammit @MrChromebox

Historically, Heads was built with musl.
It then moved from musl to musl-cross-make for better portability of the produced build toolchain.

Then, when coreboot 4.12 was integrated for newer boards, CI broke for coreboot 4.8.1 boards.
Patches were integrated into each coreboot version so that coreboot is built against musl-cross-make, neglecting coreboot's own musl-cross, which is well tested for each release. That was good, until boards tried to use coreboot's integrated measured boot patches and vboot+measured boot: the musl-cross-make toolchain doesn't build IASL nor gnat by default, and those boards failed compilation with alignment errors.

I think it was an error to move away from building coreboot with its own maintained musl-cross stack and the patches from each release.

I still think it's not a bad idea to have all the other modules besides coreboot built with musl-cross-make.
And I think whatever path we take, be it NixOS, a Guix buildstack, or coreboot-sdk (really not convinced), should try to replace the musl-cross-make toolchain (versions selectable inside the Docker image if needed), but not necessarily replace coreboot's musl-cross. That base Docker image could be reused, with statements specifying changes of buildstack requirements if needed (those statements being declared in CI where/when needed). Alternatively, the NixOS/Guix buildstack layer could be deployed on top of any Linux OS chosen by the end user, where the CI could spin up different base Docker images, retrieve the NixOS/Guix buildstack layer, and the CI statements would clearly declare what is needed. That way, Heads could finally leave behind host tool changes, which is the neglected part of reproducible builds, where the motto is "if we can build the same outcome, we don't care about replicating the buildstack in a reproducible way". Here again under Heads, we see that this comes with a lot of historical problems: added checks required to have the right make version and the right gawk version, and a growing number of questions and issues opened by users thinking they can build Heads on top of their favorite OS and failing.

I'm attempting to revert those side changes, made at the time of coreboot 4.12 integration under Heads, that attempted to build the whole Heads ROM on top of musl-cross-make. The goal was to have vboot boards, vboot+measured boot boards, and measured boot without VBOOT all building, and to be able to move on.

Local individual builds worked (functionality untested) and I'm now building for all current and to-be-included boards under https://app.circleci.com/pipelines/github/tlaurion/heads/652/workflows/624bc858-1296-425a-82bd-9e875b3236e6/jobs/700 with my associated testing branch https://github.com/osresearch/heads/compare/master...tlaurion:9element_vboot_rebased?expand=1

Thoughts welcome.

@Thrilleratplay
Contributor

@tlaurion The first thing that should be nailed down is the minimal set of requirements for reproducibility:

  • Limit only to building on x64 and x86 architectures?
  • What should be compiled from scratch vs assumed to be static and safe?
  • Which projects/organizations are to be trusted implicitly?
  • Can Docker containers be required to build Heads if it relies on Guix?
  • Should the focus be on build reproducibility or build compatibility?
  • Etc. (will add more as I think of them)

This may seem like overkill, but given the number of times I have made incorrect assumptions, I think they should be stated and defined somewhere. Talking with @daym yesterday, I realized we were approaching a Dockerized Guix in two different ways: I was using the Guix package manager on top of Alpine, while he used Alpine as a base Docker image and then replaced it with Guix. The direct quote from @daym was:

The problem is if an adversary compromises the build servers of alpine and replaces their gcc bootstrap binaries. Then the shell that will be built by these compromised gcc bootstrap binaries for alpine and used in the guix docker container can just replace guix and/or patch whatever resulting files that come out of guix-daemon, and you won't be able to tell--not even by reading all of Alpine's source code (!)

Which is possible but unlikely. The same can be said of any library used to compile Heads, and it reminds me of the Ken Thompson Hack. Where should the bounds of trust be?

While Guix can be applied on top of other distros, it is not pretty. Even if the user installs GuixSD or the Guix package manager, it would not be reproducible as variables like username and timezone will alter a build. This is fine if the user is not looking for reproducibility or cannot reproduce the hashes of the CI builds due to modifications they have made.

At this point, I am sold on using Guix, and my suggestion would be to ultimately require Guix to build Heads. The details of how can be debated. Perhaps use the official Guix VM running on QEMU to generate Docker environments. In Docker, only Guix apps and utilities pinned to commit hashes would be used to build Heads. Any module that can be reproducibly built independently can be published in Guix or a Heads/coreboot-specific repo, to be pulled instead of being rebuilt from scratch each time or relying on the CircleCI cache. Even if musl-cross is still used, this would not require the CI or end user to build it from scratch, but would allow them to if they want. If all works as expected, every individual part of the build environment and modules will be reproducible, as well as the final build, but only the final build would need to be compiled.
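For illustration, one way to pin an entire Guix package set to a commit hash is guix time-machine with a channels file; a minimal sketch, assuming a working Guix install (the commit hash and the hello package are placeholders):

# channels.scm pins the whole package graph to one Guix commit.
cat > channels.scm <<'EOF'
(list (channel
       (name 'guix)
       (url "https://git.savannah.gnu.org/git/guix.git")
       (commit "0123456789abcdef0123456789abcdef01234567")))
EOF

# Everything built through the time machine derives from that exact
# commit, regardless of the host's current Guix version.
guix time-machine -C channels.scm -- build hello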

@tlaurion
Collaborator Author

tlaurion commented Dec 12, 2020

@Thrilleratplay : absolutely, and thanks for taking that step back.

On the more practical level (needed now), my PoC was able to build all modules with musl-cross-make except coreboot, and to build coreboot with coreboot's own per-version maintained musl-cross toolchain: https://app.circleci.com/pipelines/github/tlaurion/heads/656/workflows/79a8d811-f72f-4a91-bf4f-1499539587f4/jobs/707. I will test builds on the x230 personally, while I invite others to report problems.

@MrChromebox @Tonux599 : do you see any problem reverting to 4012895#diff-18936189b28399cf48703d0c1ec1df33e57c559de2a12f4438be00e6813bdb68 ?

@Tonux599
Contributor

[...] do you see any problem reverting to 4012895#diff-18936189b28399cf48703d0c1ec1df33e57c559de2a12f4438be00e6813bdb68 ?

@tlaurion As long as coreboot's own toolchain still produces reproducible builds, I think this is fine. The only question is whether musl-cross-make starts throwing errors with other modules in the future. But I guess that's a bridge to cross when we get there.

@tlaurion
Collaborator Author

Small script to find non-reproducibility between local and remote build outputs:

user@x230-master:~/heads$ egrep '^[0-9a-f]{64}' build/x230-hotp-maximized/hashes.txt | while read line; do HASH_REF=$(echo $line|awk -F " " {'print $1'}); FILE_REF=$(echo $line|awk -F "/" {'print $NF'}); if ! grep -q "$HASH_REF" ~/QubesIncoming/Insurgo/hashes.txt; then echo "$FILE_REF doesn't match";fi; done
tools.cpio doesn't match
veritysetup doesn't match
cryptsetup doesn't match
cryptsetup-reencrypt doesn't match
busybox doesn't match
flashrom doesn't match
libcryptsetup.so.4 doesn't match
heads.cpio doesn't match
mount-external-storage doesn't match
initrd.cpio.xz doesn't match
gawk doesn't match
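
For readability, the same comparison can be unrolled into a commented script; a minimal sketch (file locations are examples taken from the run above):

#!/bin/bash
# Compare a local hashes.txt against a remote one, reporting files whose
# SHA-256 differs. Each hashes.txt line looks like: <sha256>  <path>
LOCAL_HASHES="build/x230-hotp-maximized/hashes.txt"
REMOTE_HASHES="$HOME/QubesIncoming/Insurgo/hashes.txt"

grep -E '^[0-9a-f]{64}' "$LOCAL_HASHES" | while read -r hash path; do
  file="$(basename "$path")"
  # Flag any local hash that appears nowhere in the remote list.
  if ! grep -q "$hash" "$REMOTE_HASHES"; then
    echo "$file doesn't match"
  fi
done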

@Thrilleratplay
Contributor

Small script to find non-reproducibility between local and remote build outputs

@tlaurion
Does ~/QubesIncoming/Insurgo/hashes.txt contain the remote hashes?

This brings up a good point. How should reproducible hashes be tracked and stored? Once there is a database of Heads build hashes, should a similar script be added to automatically verify a local build against the remote hash, or should the user verify it themselves?

@tlaurion
Collaborator Author

tlaurion commented Dec 13, 2020

Does ~/QubesIncoming/Insurgo/hashes.txt contain the remote hashes?
@Thrilleratplay : build/x230-hotp-maximized/hashes.txt was local, while ~/QubesIncoming/Insurgo/hashes.txt was the remote one from CI.

CI builds (whose reproducibility is required for LVFS upload and fwupd upgrades) should upload hashes.txt along with the firmware and confirm reproducibility. Users are supposed to be able to obtain the same ROM hash, and hashes.txt shows where the differences are.

Above was an unclean build. My surprise was that my local kernel and the remote kernel matched.
@Thrilleratplay : Don't take my results seriously, but feel free to reuse/adapt the script to compare hashes! (This is my testing VM, where local scripts are also present.)

@tlaurion
Collaborator Author

tlaurion commented Dec 14, 2020

Normally I do something along the lines of:
user@x230-master:~/heads$ tar zcvf archives.tar.gz $(find ./packages/ build/coreboot-4.8.1/util/crossgcc/tarballs/ | grep -v verify) && make BOARD=x230-hotp-maximized CPUS=2 real.clean && rm -rf build/* install/* crossgcc/* && tar zxvf archives.tar.gz && make BOARD=x230-hotp-maximized CPUS=2

but for the sake of retesting everything... redoing cleanly, which will take a while.

user@x230-master:~$ git clone https://github.com/osresearch/heads heads2
user@x230-master:~$ cd heads2
user@x230-master:~/heads2$ tar xzvf ../heads/archives.tar.gz
user@x230-master:~/heads2$ make BOARD=x230-hotp-maximized CPUS=2

No idea how to play nicely with the CircleCI API to get a board's hashes.txt programmatically.

But let's suppose we can; comparing local hashes against CircleCI's for the current a81ae6e would then look like the following:

wget https://277-64810881-gh.circle-artifacts.com/0/build/x230-hotp-maximized/hashes.txt -O remote_hashes.txt
egrep '^[0-9a-f]{64}' build/x230-hotp-maximized/hashes.txt | while read line; do HASH_REF=$(echo $line|awk -F " " {'print $1'}); FILE_REF=$(echo $line|awk -F "/" {'print $NF'}); if ! grep -q "$HASH_REF" ./remote_hashes.txt; then echo "$FILE_REF doesn't match";fi; done
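
If CircleCI's v2 REST API is an option, fetching the artifact URL programmatically might look like this; a hedged sketch (the job number is an example, the jq filter is an assumption, and public projects should not need an API token):

# List a job's artifacts and download the board's hashes.txt.
JOB=277   # CircleCI job number (example)
curl -s "https://circleci.com/api/v2/project/gh/osresearch/heads/$JOB/artifacts" \
  | jq -r '.items[] | select(.path | endswith("x230-hotp-maximized/hashes.txt")) | .url' \
  | xargs wget -O remote_hashes.txt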

Results to come, and follow-up issues to be opened; my intuition is that #892 will show its consequences...

@tlaurion
Collaborator Author

tlaurion commented Dec 14, 2020

@Tonux599 @Thrilleratplay @MrChromebox : comparing master build vs local build:

user@x230-master:~$ git clone https://github.com/osresearch/heads heads2
user@x230-master:~$ cd heads2
user@x230-master:~/heads2$ make BOARD=x230-hotp-maximized CPUS=2
user@x230-master:~/heads2$ wget https://277-64810881-gh.circle-artifacts.com/0/build/x230-hotp-maximized/hashes.txt -O remote_hashes.txt
user@x230-master:~/heads2$ egrep '^[0-9a-f]{64}' build/x230-hotp-maximized/hashes.txt | while read line; do HASH_REF=$(echo $line|awk -F " " {'print $1'}); FILE_REF=$(echo $line|awk -F "/" {'print $NF'}); if ! grep -q "$HASH_REF" ./remote_hashes.txt; then echo "$FILE_REF doesn't match";fi; done
tools.cpio doesn't match
flashrom doesn't match
gawk doesn't match

Here, gawk should not be measured (it is a host tool); flashrom is not reproducible (it is contained inside tools.cpio, which is consequently non-reproducible, making the whole ROM non-reproducible).

@tlaurion
Collaborator Author

Please read #571 (comment)

@Thrilleratplay
Contributor

Thrilleratplay commented Dec 28, 2020

As of right now, I am waiting to find the cause of, and solution to, a build error in Guix (Guix issue 45165). As it seems to be related to a recent change in the Linux kernel, this may lead to a larger debate about expectations for reproducibility.

Assuming this is just a standard bug, the current plan is as follows:

  • Based on a standard common Docker base image, probably Alpine, the container will set up a version of the Guix package manager. Reproducibility will be tied to the checksum of the version of Guix being downloaded; as Guix runs in its own ecosystem, the underlying platform should not matter* (still to be verified). EDIT Jan-04-21: guix pull seems to break on different packages on different hosts. While Guix is not intended to be used in this fashion, builds being this fragile does not help with reproducibility. Additionally, it occurred to me that Guix defaults to the architecture of the host, and cross-architecture compiling is not supported by all of the packages required for the 32-bit x86 coreboot environment. Switching back to the original idea of using libvirt and QEMU, but I need to create a VM from scratch; only the x64 QEMU image is available from Guix.
  • With the ability to execute the Guix package manager through Docker, guix pull --commit=<SOME_COMMIT_ID> will be called to pin Guix to a specific package version definition. This will be followed by generating another Docker image by calling guix pack (docs) with a Guix manifest file listing all of the required packages with specific version numbers (see the sketch after this list). The combination of pulling a specific commit and packing specific versions of applications should make the output reproducible at any time in the future* (still to be verified).
  • The resulting Docker image is to be similar to coreboot-sdk or the Docker image created in the CircleCI build. This image can, and should, be stored in a Docker registry like GitLab's or hub.docker.com to limit the calls to GNU's servers and reduce build time. If anyone wants to recreate this image, they can, but it will not be necessary* (still to be verified).
  • This image is then used in CI builds to build Heads and/or coreboot* (still to be accepted by coreboot, as it is just a plan at the moment).
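
As a rough illustration of the pull-then-pack step, a minimal sketch (the commit hash and the package list are placeholders, not the actual Heads requirements):

# Pin Guix itself to a specific commit (placeholder hash).
guix pull --commit=0123456789abcdef0123456789abcdef01234567

# manifest.scm lists the packages the build environment needs (example set).
cat > manifest.scm <<'EOF'
(specifications->manifest
 '("gcc-toolchain" "make" "coreutils" "bash" "git"))
EOF

# Produce a Docker image tarball containing exactly those packages;
# the result can be imported with 'docker load'.
guix pack -f docker -m manifest.scm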

Once the concept has been proven, it can then be made more efficient. Any library, such as musl-cross-make, can be packaged and published as a Guix package. Doing so creates a prebuilt version, like any other included Guix package, but allows the user to rebuild from source if desired* (still to be verified). A separate build process using the same build environment would be needed to handle the building and packaging of Heads/coreboot-specific packages, but this can be done ad hoc during the development and testing phase. The published package would be added to the pack phase and included in the resulting Docker image.

If all works as planned, the build environment will be bit-for-bit reproducible at any given time based on a pull commit id and a specific version-controlled manifest. This should be a solid base for creating deterministic Heads builds going forward* (still to be verified)

@tlaurion
Collaborator Author

For posterity and history, here is the PR that set the basis of where we are now, removing the Docker image we relied on and replacing it with a debian-10 one, on which we build and reuse caches:
#457

@tlaurion
Collaborator Author

tlaurion commented Jan 17, 2021

@Thrilleratplay @daym

  • Based on a standard common Docker base image, probably Alpine, the container will set up a version of the Guix package manager. Reproducibility will be tied to the checksum of the version of Guix being downloaded; as Guix runs in its own ecosystem, the underlying platform should not matter* (still to be verified). EDIT Jan-04-21: guix pull seems to break on different packages on different hosts. While Guix is not intended to be used in this fashion, builds being this fragile does not help with reproducibility. Additionally, it occurred to me that Guix defaults to the architecture of the host, and cross-architecture compiling is not supported by all of the packages required for the 32-bit x86 coreboot environment. Switching back to the original idea of using libvirt and QEMU, but I need to create a VM from scratch; only the x64 QEMU image is available from Guix.

The libvirt + QEMU path won't be helpful for CI builds. Any explanation for the fragility of deploying Guix on top of other Docker images?

Small steps I see:

  • Deploy debian-10 docker
  • Deploy guix on top
  • Configure the desired Guix build toolchain and know the paths
  • Cache the Guix-related paths to be deployed on top of the debian-10 Docker image
  • Change the global Heads Makefile statements if paths need to be explicit (they should be the same on each Docker/Linux user's host system, right? The goal here would be to have a Building Heads wiki page that reflects the CI building steps, so users can reproduce them, or even have a script that does it all.)
  • Build all modules but coreboot against Guix (as of now). Build coreboot with coreboot's musl-cross per coreboot board's version (as of now, with ANY_TOOLCHAIN removed from coreboot configs since we no longer rely on musl-cross-make)
  • Build caches (as of now).

Any problem with that smaller-steps approach? A rough sketch of the first two steps follows.
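
A minimal sketch of steps 1 and 2, assuming Guix's official binary installer script and a debian:10 container (the non-interactive invocation of the installer is an approximation):

# (run inside a debian:10 container)
apt-get update && apt-get install -y wget gnupg xz-utils
wget https://git.savannah.gnu.org/cgit/guix.git/plain/etc/guix-install.sh
chmod +x guix-install.sh
yes '' | ./guix-install.sh        # installer normally prompts interactively

# The daemon must be running for any Guix build to happen.
guix-daemon --build-users-group=guixbuild &
guix pull                         # later pinned with --commit=<known-good>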

@Thrilleratplay
Contributor

@tlaurion The first step in this plan would likely be manual and outside of the CI environment. The output of this first step would be a Docker container that would be cached in a registry. The idea is to have this step reproducible for those who want/need to reproduce it, but not for every build, just like the debian-10 Docker image. This predefined image with all of the build dependencies could be a replacement for the coreboot-sdk image.

In theory, there is nothing wrong with the steps you have laid out. One possible issue is that a similar Debian library/command and a Guix library/command may both be installed but referenced in different scopes in makefiles. Thus far, my experience is that Guix is very finicky. I had an issue building one package, and the suspected cause may be a change in the kernel of the HOST machine. A few days later, I realized I should be using the 32-bit libraries, i.e. the i686-linux platform. I quickly found out that Meson-built packages do not allow cross-building, so everything has to be built as 32-bit, starting by downloading 32-bit Guix. I am still working through other build issues, and that is just to complete the first step of my original plan. @daym is still working on building a Heads package but has run into yet more build issues.

@tlaurion
Collaborator Author

#1661 was merged; the next step is to have Heads depend on the musl and coreboot buildstack provided by Nix, and to go forward implementing things the right way.
