
Is there a better way to do this? #1

osresearch opened this issue Mar 21, 2022 · 12 comments

@osresearch

nix? linuxkit? dracut?

@CRTified

Just stumbled across this repo in my GH feed. There is a nix-based example for doing the "build kernel+initramfs", although it targets the Nintendo 3DS: https://github.com/samueldr/linux-3ds.nix

Maybe that helps you as one more data point :)

@tlaurion

tlaurion commented Apr 3, 2022

Guix!

An old dream was to have Guix on top of any OS, to handle the declaration of dependencies and reproducibility:
https://guix.gnu.org/manual/en/html_node/Installation.html

Things got confused a bit and stalled upstream for Heads there: https://issues.guix.gnu.org/37466

But the idea was this: linuxboot/heads#927 (comment)

Since then, gnat is now part of nix. But not guix.
musl-cross is part of nix. But not guix.
@daym: any reason why?

To be frank, the whole OS bleeding into builds (as you noted here: linuxboot/heads#936) should not be a problem for build hosts if Guix were used. If we know what bleeds in, we could simply declare all of those tools (have them installed by guix/nix, then picked up by the modules' configure scripts and used) between releases.

Having musl-cross under guix at a specific version would make it reproducible there, and we would depend on it just like any other package: on CI with a prepared docker image, on CI on top of another OS, or locally.

By pinning the whole implied toolchain, nothing linked to the OS could bleed into builds if properly declared. And that is what guix is all about.

We could then focus only on build scripts, timestamps, relative path issues and libtool denial, like this project is doing. This project is going in the right direction and that is more than welcome, but the declarative parts and fixing the host bleeding in the builds (you struggled with differences between Debian and Ubuntu in your process!!) remain to be addressed, while others will continue to want OS xyz...

Can we get out of this cycle and have a docker image at the end of this adventure, one that users could reproduce (minus timestamps) simply because nix/guix already do it?


Some historical background from the past 2 years maintaining Heads:

Heads once was building coreboot with ANY_TOOLCHAIN, as all the other packages.
It seemed like a good idea at first, until the host tools got updated, and then Heads required more and more hacking to build things that had worked before. New releases of the same OS change the build behavior: some OSes can no longer build, some still can, with no change in the code, builds failing. It has reached the point where it is impossible, today, to rebuild a commit that was supposed to be reproducible back then, even when launching that old debian-8 docker image, because apt/dnf and all the others need to download the latest version of a needed tool before it can be called.

On the packages downloaded, zlib is a recent example: it disappeared from its origin in the last few days! That required changing the URL in the module definition yesterday. Someone wanting to build the codebase at a commit from one week ago would fail to do so.

Unless we cache those packages in the repo (a big no), or depend on a project that also caches them and refer to its copies for future-proof reproducibility (better!), this is a dead end in the long term, unless we move all those packages to be downloaded from a commit archive on GitHub or another repository (assuming the project doesn't move).

The same problem arose on debian-9 and fedora-30: at one point in time, it became necessary to start building local host tools for gawk and make to build what had worked before. Tools evolve, everything changes. We need to start changing how we do things as well.

In my opinion, guix would be the declarative solution that would fix most if not all of these problems. If, at a point in time, we can declare exactly the build stack that was necessary to create an outcome (and guix can be installed on top of any Linux system and packed into a docker image), then deploying the same docker image between releases, as referenced in the last release notes, would resolve all the problems above.

That reproducible docker image, thanks to guix, could also contain a deployed cache of that buildstack, while also including a cache of the downloaded packages, to be able to reproduce that exact release at each audit point. With nix, we could use musl-cross directly (NixOS/nixpkgs#182000 opened to have a ppc64le target, to be able to build for POWER without having to package musl-cross-make anymore).

I think a lot less tinkering would be needed to "maintain" reproducibility in time and over time.

I like where the scripts of this project are going.

The main issue that comes to mind, right now, is that boards are also not maintained forever.

So for example, the kgpe-d16, or again the librem mini, are still bound to coreboot 4.11.

Building coreboot 4.11 on newer OSes needed additional patches to keep those boards building, including patching coreboot's own buildstack, which also required updating some of its package dependencies while fixing some error handling.

Again today, someone asked why builds were not working on arch....


I really think guix, combined with boards' (declarative) pinning to kernel versions, coreboot versions and buildstack versions, would permit properly declaring the prerequisites needed for reproducibility across OSes, and make it possible to build the same ROM/kernel/initrd/tools 10 years from now, if we do things properly this time. With the Heads builder here, with the missing modules and the packing of the Heads initrd directory, on top of guix, I think this is really going somewhere.

The whole problem of reproducibility could vanish once and for all, while not having to dig that much to have reproducible ROMs.
Of course, projects not respecting cross compiling would still need patching/upstream fixing. But for the rest...

@Thrilleratplay @daym @osresearch: thoughts?

@daym

daym commented Apr 3, 2022

Yeah, this is the main thing Guix is known for: reproducible builds.

Full disclosure: I've been using Guix since 2017, and have been working on GNU Mes for Guix for several years.

Since then, gnat is now part of nix. But not guix.

Gnat is an Ada compiler that is written in Ada and C. Bootstrapping problem.

I made this problem known to the Ada community at FOSDEM a couple of years back and talked to AdaCore, the company, which then published some Ada parsers for Python for us to use (libadalang and langkit). Using those, I could write an Ada interpreter in two months or so and interpret the source code of Gnat-the-compiler with it. But it's probably using a lot of parts written in C.
There's also Ada/Ed. Maybe that could be used to bootstrap gnat--that would be less work.

Back then, I got side-tracked (bootstrapped Rust instead; now working with Rust for several years). There is no use case for me for gnat (or Ada) at this time.

musl-cross is part of nix. But not guix.

musl-cross has been pushed to guix master (but an old version now).
I just tried it, it works.

The WIP patch I posted (see your link above) does build all of Heads--but an old version now.

That said, Guix master prefers the GNU libc where possible. I doubt that they would do the work of having existing Guix packages build on musl themselves. That means you'd have to use musl-cross to build stuff. Otherwise, if Guix supported musl natively, you could do guix build heads --with-input=glibc=musl@1.2.2 or guix build heads --with-c-toolchain=heads=gcc-musl-toolchain@9.4.0 instead and it would magically do the right thing.

Guix basically is a declarative package manager.

It always irked me that in Debian for example it's normal for you as a packager to install random packages into your host Debian OS (see the instructions in this linux-builder repo here), and then use dpkg-buildpackage -rfakeroot to build a binary package, which links to those random packages--and others that you might have installed before--to build the latter artifact. Not to mention the compiler :(

Obviously, that's weird (and I'm sure by now Debian has containerization tools for package builds too--I'm just not using it anymore, so I wouldn't know).

Instead, Guix makes a container per package build automatically, and then provides (only) the declared package dependencies in there. Bleeding of your environment filesystem is impossible. Bleeding of random numbers and timestamps of the environment is patched out of the source code in the build recipes on a case-by-case basis (there are automated tools that let you build a package however many times and autocompare the results). These rules are stored in the guix repository--after being manually found out by someone once.
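As an aside, a hedged sketch of how that build-several-times-and-autocompare tooling is driven in practice (the stock "hello" package is used here as a stand-in for any Guix package; these are real Guix flags, but treat this as an illustration, not a Heads recipe):

```shell
# Build the same package several times in a row and fail if the
# outputs ever differ ("hello" is just a stand-in package name).
guix build --rounds=3 hello

# Rebuild something already in the store and compare against the stored result.
guix build --check hello

# Compare a local build against what the substitute servers publish.
guix challenge hello
```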

The world, for some reason, uses manual solutions like Docker more--where you could make some shell scripts to simulate small parts of Guix (with some problems). But Guix is just better than a reasonable-effort manual solution--and most GNU packages are there already. Furthermore, a lot of people keep updating it (it's rolling release with pinning ability), so even if you had your own shellscript-and-string solution, it would quickly get outdated in comparison. And the tooling in guix (to find the dependency graph of anything, for example) is wonderful.

That said, there are limitations; @Thrilleratplay worked with it within Heads more than me and can tell you about some problems he encountered.

Having musl-cross under guix at a specific version would make it reproducible there, and we would depend on it just like any other package: on CI with a prepared docker image, on CI on top of another OS, or locally.

Indeed, but you could (and guix users would) go one step further and even pin guix itself to a specific version for a while (that includes all available packages).

fixing the host bleeding in the builds (you struggled with differences between Debian and Ubuntu in your process!!)

Yeah, I was working around similar problems so often and I'm so glad this will never happen again for my stuff.

On the packages downloaded, zlib is a recent example: it disappeared from its origin in the last few days! That required changing the URL in the module definition yesterday. Someone wanting to build the codebase at a commit from one week ago would fail to do so.

Guix uses Software Heritage, an automated artifact archiving and retrieval system, to make this less of a problem. (Upload of the source to Software Heritage is done on guix lint, as a side effect.)

In my opinion, guix would be the declarative solution that would fix most if not all of these problems. If, at a point in time, we can declare exactly the build stack that was necessary to create an outcome (and guix can be installed on top of any Linux system and packed into a docker image), then deploying the same docker image between releases, as referenced in the last release notes, would resolve all the problems above.

Indeed.

Again today, someone asked why builds were not working on arch....

Yeah, OS diversity is nice, but not if it causes a maintenance nightmare. Better to standardize on something, preferably something that runs on top of whatever (fun fact: a few days ago, someone posted that he has Guix running on Windows Subsystem for Linux O_o).

I think build containerization is the way out here (I say that as someone who doesn't like the industry trend of using containerization for everything, even where it makes no sense). And then the question is: are you going to do all the package and build recipe tracking yourself, or are you going to take part in a community that already does it?

In my opinion, building self-contained things like ROMs is the use case for Guix-on-top-of-another-distro.

Some of the other stuff (GUI programs etc.), ehh, maybe you don't want 400 MiB of Gtk shared libraries once in your host Debian, and then again in Guix, to run gparted-compiled-by-Guix (which is how it would most likely be). There are downsides, and this kind of duplication of host vs build system is one of them.
But for ROMs? Yes please. This is exactly what I'd want there.

That said, Guix by default does not use the filesystem hierarchy standard (it uses hashes of all unpacked transitive dependencies as names in /gnu/store instead, so for example I have /gnu/store/afpgzln8860m6yfhxy6i8n9rywbp85cy-gcc-7.5.0/bin/gcc, /gnu/store/rn75fm7adgx3pw5j8pg3bczfqq1y17lk-gcc-7.5.0/bin/gcc and /gnu/store/z8954h4nvgxwcyy2in8c1l11g199m2yb-gcc-7.5.0/bin/gcc right now), so be aware of that.
That gnu store naming is mostly for deduplication.
It's not like the end user ever sees those (instead, there's a normal "bin" directory with automaintained symlinks somewhere, in the so-called "profile")--but packages that were built and refer to shared libraries refer to those /gnu/store/... by rpath.

Things got confused a bit and stalled upstream for Heads there: https://issues.guix.gnu.org/37466

It's a WIP patch, which means that patch will never be merged (after all, the idea is to get Heads' dependencies as actual packages into guix (mostly already there) and not to make one huge heads package magically unpacking and building everything in one build container total--the latter of which the WIP patch does :) ).

I could still argue and get a real patch included, but I wanted to wait for @Thrilleratplay 's buildstack guix channel to mature, instead of pushing an incompatible version to guix master. It could go in very fast if we wanted.

(The thing that really has to be in Guix proper, heads-dev-cpio, is in Guix)

@tlaurion

tlaurion commented Apr 3, 2022

@daym

There is no use case for me for gnat (or Ada) at this time.

Coreboot requires the host to have gcc-gnat for its own bootstrapping of its version-specific buildstack.

@osresearch @Thrilleratplay
I went on and on yesterday looking at the differences between guix and nix for the current use case.

Reading @daym's reply in this thread makes me realize that there is still some confusion about what we need to make reproducible on the host system, to be able to prepare a docker image for CI/local user builds (and for build time, not so much for reproducibility, which is the desired outcome; let me clarify).

@daym the Heads patch set proposed to guix included too much, in my opinion, though again, I may have a mixed-up or limited understanding of what that patchset was expected to accomplish. From my reading, it was a complete duplication of the Heads project, instead of confining guix's role to making Heads ROMs reproducible.

I want to focus here, once again, on what bleeds in from the OS toolstack, as opposed to including Heads completely into Guix (including the declaration of board configurations); instead, Heads should declare its guix toolstack requirements. Makes sense?

What I understand would be required from Guix is the declaration of all the tools that bleed into the ROM, not the modules built by Heads themselves?

From my understanding, we are again mixing two things here: building coreboot reproducibly from their upstream position, and the idea of building Heads (or linuxboot, or others) faster. Personally, I think that is an error.

The point I was making earlier is that if we want to collaborate with upstream projects, we should also comply with their processes. I think we would be making a mistake by trying to build coreboot with a host-built buildstack through ANY_TOOLCHAIN, since by doing so we would break things (and they broke in the past).

What I'm trying to say here is that if, for a specific coreboot version, we pin the guix host tools that would otherwise bleed into the ROM, we should be fine.

A step back here. Currently, Heads bootstraps musl-cross-make to build all Heads modules AND to bootstrap each coreboot version's specific buildstack. This SHOULD be the way to go(?), even if it takes more time for the first bootstrapping of the buildstack, with guix in docker.

Of course, building musl-cross-make takes time and space. Of course, building multiple coreboot buildstacks also takes a lot of space and time. But having CI cache those layers, or having a docker image containing pinned Guix host buildstack tools deployed, with those coreboot version buildstacks in the docker image, could be an easy and attainable goal, while the build script improvements seen here could leverage Guix pinning, per board configuration, and build all boards depending on a specific coreboot version, also in parallel.


I think the minimal missing requirements under Guix right now would be to have Ada and possibly musl-cross(-make?).

@osresearch : I'm really questioning going the way of building coreboot with ANY_TOOLCHAIN.

But having Guix pin all the tools that could bleed into the modules' configure scripts would definitely fix the current problem space.

Or the problem space needs to be clarified.

That minimally includes Ada; then we could build musl-cross(-make) ourselves, without it being part of Guix?

And then have the cross-building environment required to build Heads in docker, depending on guix. Building with the present project's scripts or Heads' (bit-rotted) make-based build scripts.

@Thrilleratplay named the problem of Guix supporting multiple architectures. Heads needs to support multiple architectures, which musl-cross(-make) would still address without much problem.

Again: my dream here would be to have a docker image containing guix-pinned host tools, required to bootstrap musl-cross, and the coreboot version-specific buildstack environments. Currently, the biggest CircleCI cache layer is 5.9 GiB, containing way too much; it is downloaded by CI, if it exists, to speed up rebuilds, keyed on checksums of module definitions and patches, and reused unless something changed.
This cache includes all the built modules, including caches of multiple coreboot versions' board build dependencies and also the kernel build directories: all boards' built artifacts. That is on top of using the debian-11 docker image and installing the needed tools there, which is not part of the CircleCI cache. In my opinion, it is the docker image we want to replace here. But I want this idea to be challenged.

If that could be switched to a docker image that is the counterpart of the debian-11 docker image + host tools (guix) + built musl-cross(-make for 3 archs) + built coreboot buildstack for 3 architectures (i386, x86, ppc64), I think the docker image could be maintained across releases, the principle could also be reused by coreboot-sdk, and CI caching layers would still facilitate fast rebuilding of project modules, consistent between upstream project commits. For coreboot, that principle could probably be used right away, permitting users to add guix on top of their OSes, or to use the docker container to build coreboot easily for a specific coreboot version.

I hope I have emphasized enough the different goals this project/Heads tries to accomplish, what Guix could resolve easily, and what CI should continue to do to speed up rebuilds (of small script changes, most of the time), while correctly separating what should be done on each side?

@daym

daym commented Apr 7, 2022

Hmm, if we only want to use guix for the toolchain, that could be accomplished by building Heads or Linux or whatever with (working directory is your source code):

guix shell --container --network gcc-toolchain@version dependency-1@version dependency-2@version dependency-3@version -- make

And that should be all. The actual releasing & packing would work the same as it always has, with no guix involved except for the toolchain (as long as you statically link). (Note: using --network is bad for reproducibility, at least if no extra steps are taken by the user--which is why thrilleratplay and I had guix download & verify stuff outside the container. But it's not like we have to do that.)

It's also possible to store a file into version control (to basically document the build dependencies above). Invoke

guix shell --export-manifest gcc-toolchain@version dependency-1 dependency-2 dependency-3

and store stdout into your repo. To use it to build then, guix shell --container --network -m yourstdout.scm.
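Putting the two commands together, the round trip might look like this (a sketch; the dependency names and versions after gcc-toolchain are placeholders, and the guix time-machine step is an optional extra for pinning Guix itself, not something the comment above prescribes):

```shell
# Record the build inputs as a manifest and check it into the repo
# (dependency names/versions here are placeholders).
guix shell --export-manifest gcc-toolchain@11 make coreutils > manifest.scm

# Optionally also pin the Guix revision itself, so the manifest resolves
# to the same derivations years later.
guix describe --format=channels > channels.scm

# Recreate the exact environment from the two pinned files and build.
guix time-machine -C channels.scm -- shell --container -m manifest.scm -- make
```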

But that would mean the maintainership (including reproducibility) of crossgcc, elfutils, mpc, gcc-8, linux, coreboot-blobs, coreboot, msrtools, busybox, zlib, mbedtls, kexec-tools, flashtools etc would still stay in your scripts (the former build dependencies here are for building the coreboot cross compiler).

But with the support of guix channels, you could also just have all your packages as guix packages in your guix channel to begin with, including the ones you maintain (the list above). That would make Heads essentially a mini-Linux-distribution (for the Linux&initrd image). In my opinion, that's what it is anyway. But I understand if that's out of scope, no need to do that then.

osresearch/linux-builder also seems to do basically that--make a tiny Linux&initrd distro (at least with shared linking, that's what it always ends up with: tiny OS in initrd).

musl-cross is in guix master. If it wasn't, you could build it yourself anyway.

The part that worries me about the gnat compiler is that it probably has a big chunk (most of GCC) written in C, so it needs C--Ada-Interop in the bootstrapper, too.

@Thrilleratplay

@tlaurion @daym @osresearch
Sorry for the late response. Without rehashing what has already been said, the main roadblock I ran into was trying to bootstrap ada/gnat. Around this time, coreboot decided to use a prebuilt toolchain built with Nix.

I think Guix would be the ideal solution. It would allow for scriptable reproducible builds from packages or from source (except for the bootstrap binaries), if desired, at any point from the initial build until the sources are no longer available. I am not sure how much things have matured since I last tried, but without an Ada compiler that can be bootstrapped from source, this will likely be challenging.

@tlaurion

Note: Trustix was referred to us by NLnet: https://github.com/tweag/trustix
Particularly this blog post: https://www.tweag.io/blog/2020-12-16-trustix-announcement/

@aesrentai

aesrentai commented Jul 19, 2022

Perhaps this is where my inexperience comes into full view, but after reading through all of the (very informative) discussion, it's difficult to see what advantage Guix offers over what countless other projects have done: just pinning specific debian package versions in docker. On https://github.com/Thrilleratplay/guix-docker I see:

building the same docker file will generate a different image based on the latest packages used in apt-get

and from Heads' #892 I see that the host pkg-config is used, causing some bleeding problems, but none of this should matter if we pin the exact same version of pkg-config (and every other important package) for everyone. We could even use the standard musl-gcc binary provided as part of the musl-tools package and not have to build a whole cross-compilation toolchain for x86_64.
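For what it's worth, pinning on plain Debian can go further than bare version pins by pointing apt at snapshot.debian.org, so the pinned versions stay downloadable; a sketch (the snapshot timestamp and version strings are illustrative assumptions, not a vetted set):

```shell
# Point apt at a frozen snapshot of the Debian archive; the same package
# set will resolve identically years later (timestamp is illustrative).
echo 'deb [check-valid-until=no] https://snapshot.debian.org/archive/debian/20220701T000000Z/ bullseye main' \
    > /etc/apt/sources.list.d/snapshot.list
apt-get update

# Pin exact versions on install (version strings are hypothetical examples).
apt-get install -y --no-install-recommends \
    pkg-config=0.29.2-1 \
    musl-tools=1.2.2-1
```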

Supporting this, I just built Heads using docker from the most recent PR in the Heads repo, on both Fedora 36 and Debian 11, and, except for differing GIT_HASH values in the initrd's /etc/config, all the hashes were the same. Notably, all the binaries in /bin, /lib, and /sbin, as well as the kernel, were identical, as expected.
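That kind of cross-host comparison is easy to script; a minimal sketch (the two trees here are created as stand-ins for two independent build outputs):

```shell
# Stand-ins for two independent build output trees; in practice these
# would be the build directories produced on two different host OSes.
mkdir -p build-a build-b
printf 'kernel-bytes\n' > build-a/bzImage
printf 'kernel-bytes\n' > build-b/bzImage

# Hash every file, sort for a stable order, and diff the two manifests.
(cd build-a && find . -type f -exec sha256sum {} + | sort -k2) > a.sums
(cd build-b && find . -type f -exec sha256sum {} + | sort -k2) > b.sums

if diff -u a.sums b.sums; then
    echo "builds are bit-for-bit identical"
else
    echo "builds differ"
fi
```

A known-variable file like /etc/config with its GIT_HASH would show up as a single differing line in the diff, which makes it easy to whitelist.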

In essence, whenever the docker image is not deterministic (which happens whenever we apt-get install without pinning specific package versions), the build will not be reproducible; but if the docker image is deterministic, it shouldn't matter.

@Thrilleratplay

@aesrentai What you are proposing solves reproducible builds "now", if all components are considered secure. A prebuilt docker image will not change and can be shared; coreboot does this with the coreboot-sdk. This is fine until you want to audit the build environment years later.

The goal (and someone please correct me if I miss or misstate something) is to create a build environment that solves the "turtles all the way down" security fallacy. Here is the Wikipedia article, if you are not familiar with the tale of infinite regress. To put it more succinctly, how do you know the dependencies are safely built and without vulnerabilities? While you are able to audit the source of Heads and deem it secure, you are then sending the code into a black box and receiving a binary file. While they have been discussed for decades (see "The Ken Thompson Hack"), supply chain vulnerabilities are just now making news with the SolarWinds hack. Additionally, over time, Debian packages become less and less available. Reproducing a version of Heads from 5 years ago with the same hash would be a headache. What platforms like Guix provide is the ability to build the entire environment from source if desired, but also the same bit-for-bit prebuilt binaries that are created by the Guix build. The balance of trust and convenience is in the control of the end user.

I am trying to write this up quickly before heading to work, so I apologize if I glossed over anything. Does this explain why a Debian Docker image is not enough?

@tlaurion

tlaurion commented Jul 19, 2022

The short version of all the above input is the following, and correct me if I'm wrong since funding is coming to fix this.

  • The current linux-builder goal is a bit broader than the current objective: it aims at producing a general tool to build a kernel and initrd. There are u-root and Heads as of now, used in different projects, and linux-builder's goal is to provide a programmatic approach that lets go of the complications of Make. Going with another build system would surface other complications later on. What we need is a way to download and patch modules, generate correctly aligned cpio archives, and pack things in a reproducible way. The reason why there are so many patches here is, again, the need for a reproducible toolstack to base ourselves on. Doing that, or taking it for granted and abstracting it away, would simplify linux-builder as well.
  • guix, as of now, cannot be used as a layer deployed on top of an OS, nor can it be used to create a guix-only docker image, because gnat is missing. This is the Ada problem above. With that problem resolved (@daym), we could take little steps: use guix first as a buildstack, and then eventually move Heads into guix completely and even get rid of linux-builder. But that seems complicated to maintain as of now.
  • NixOS can be used as a layer on top of any Linux distribution to prepare an environment before jumping into it. Nix, just like Guix, has packages that are reproducible, and as of today no tools seem to be missing for packing a docker image that pins the package list and simply installs what is needed. From a maintainership perspective, if something breaks or new dependencies are needed to build Heads, it seems as simple as what @aesrentai is doing with Debian today: change the package list needed to create the docker image, then go back to CI and point to that new docker image revision. If a newer toolstack is needed, modify the package list pointer to a newer URL, let the docker image build, and go.
  • Doing this with Debian would require pinning package versions, which is also a maintainership problem. As @Thrilleratplay said, the packages themselves will probably disappear as well. Again, the best solution would be to use guix there, because having the list of packages exported would permit anyone to rebuild the whole dependency tree needed to build Heads, and the build toolchain AND the modules to be built and packaged, including the ROMs, would be reproducible without any worry. Here again linux-builder would become obsolete, but that seems like a lot of work.

@aesrentai we build musl-cross-make as of now but could build musl-cross. We still need to deploy the libc.so that all modules depend on, both for x86_64 today and for Powerpc64le for the Talos II (and other archs soon enough).

@aesrentai

aesrentai commented Jul 20, 2022

First, thank you all for the detailed responses. The main concerns with a debian based docker image seem to boil down to:

  1. maintainability in the long to very long term
  2. auditability in the build environment

I'm somewhat confused by both of these worries. For instance, you can easily find Debian binaries from over a decade ago and their corresponding source. Here is the source for gcc 4.9.2, originally shipped in 2016, and the associated binary. These are permalinks that will work in perpetuity, solving the long-term maintainability problem.

To put it more succinctly, how do you know the dependencies are safely built and without vulnerabilities?

We can never know whether a binary is safely built in the sense that it does not have exploitable vulnerabilities, but I would argue that this doesn't matter. We can trust (because we can audit) the Heads source code and that of all modules--we know they're not going to try to take over the host machine. We do have to trust that the Debian packages aren't maliciously built--that is, that the binary will not try to backdoor whatever binary it is building--which is not a worry since the packages are signed, and trusting the Debian maintainers not to actively attack us is not a major security concession.

That said, I am not at all preferentially inclined towards Debian; it just feels like the easiest and most maintainable solution. Alternatively, we can compile Nix, put that into a docker container, and build Heads using that. In other words, we abandon the idea of guix and just use Nix + docker, which has the same reproducibility guarantees but doesn't require us to hack another build system together. I'd be willing to code this up also.

@tlaurion

I'm somewhat confused by both of these worries. For instance, you can easily find Debian binaries from over a decade ago and their corresponding source. Here is the source for gcc 4.9.2, originally shipped in 2016, and the associated binary. These are permalinks that will work in perpetuity, solving the long-term maintainability problem.

This is really pertinent and is a separate issue linuxboot/heads#1198

That said, I am not at all preferentially inclined towards Debian; it just feels like the easiest and most maintainable solution. Alternatively, we can compile Nix, put that into a docker container, and build Heads using that. In other words, we abandon the idea of guix and just use Nix + docker, which has the same reproducibility guarantees but doesn't require us to hack another build system together. I'd be willing to code this up also.

I (no expert here, though) would be more inclined to use NixOS with https://nixos.org/guides/towards-reproducibility-pinning-nixpkgs.html to create "old" NixOS docker images with an older buildstack (the debian-11 equivalent) first, with Ada (which guix lacks) in the host buildstack so it can bootstrap coreboot's own musl environment, and to use that old NixOS buildstack to bootstrap musl-cross-make to build everything outside coreboot's own buildstack, going forward with the transition to linux-builder. If that docker image contained all the applications currently installed on debian-11, from a pinned package list, we could move forward with baby steps toward the goal of replacing Heads' Makefile-based system with linux-builder. Otherwise, the reproducible buildstack will stay in the way. More advanced solutions were suggested in the past by NLnet, but they do not fit the bill, needing orchestration. As far as Heads' interests go, if two CIs were able to produce the same hashes and report those hashes as being the same, the reproducibility problem would be resolved, and we could move forward with creating releases again.
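Following the pinning guide linked above, a first step could be as small as this sketch (the nixpkgs tag and the package list are illustrative assumptions, not a vetted Heads buildstack):

```shell
# Pin nixpkgs to a fixed release tarball so the buildstack cannot drift
# (the 21.11 tag and the package list are illustrative).
pin="https://github.com/NixOS/nixpkgs/archive/refs/tags/21.11.tar.gz"

# Enter a shell containing only the pinned tools and build from there.
nix-shell -I nixpkgs="$pin" -p gnumake gcc gnat coreutils --run "make"
```

Because -I nixpkgs points at a single pinned tarball, two machines running this command resolve the exact same tool versions, which is the property we want the docker image to inherit.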

As noted under linuxboot/heads#936 (comment) yesterday, the reproducibility situation is worse than I thought: debian-11 building the same Heads commit 30 days apart produces a non-reproducible busybox binary. This is a clear invitation to build our own docker image (a reproducible one would definitely be better), and this is where lowering the maintenance cost will be important going forward.


I came across this, which seems to be a good helper to build on top of:
https://github.com/teamniteo/nix-docker-base

Again, in the Heads/coreboot context, the pinned NixOS package list should be as old as possible to start, to be able to build the oldest supported platform in the tree. As of today, that would be the kgpe-d16 and librem-mini (coreboot 4.11), but work is happening in 3mdeb's Dasharo project to have the platform supported in more recent coreboot. As of today, Heads is not building it in CI anymore, because race conditions in the builds were causing too much headache, forcing CircleCI pipeline retries to get builds of each commit for all platforms.

So the reality as of now is coreboot 4.13, which debian-12 cannot build because of Ada errors. This could easily be resolved (at least at the idea level) by bumping those boards to 4.17, if NixOS supporting an older buildstack is an issue.

That is to say, there will always be challenges in building older platforms (older coreboot bootstrapped buildstacks), but that should not be a showstopper, since patches can be applied prior to building the coreboot buildstack, by Heads or linux-builder once we are there.

Having a NixOS docker image PoC would be amazing.

From my not-so-advanced knowledge on the matter, it could even be a different docker image containing an older NixOS buildstack, to be able to build older coreboot versions, specified as such under the CircleCI configuration for a specific board, if need be.

Challenge me! This will go forward in the next months, and any insights from people who have played in these waters would be awesome.
