using bootc install-to-filesystem #18

Open
cgwalters opened this issue Nov 28, 2023 · 65 comments
Labels
area/should-be-bootc Bugs that will be fixed when we switch to using bootc

Comments

@cgwalters
Contributor

This relates to #4

  • We had some general agreement to support bootc install-to-filesystem; this will help long term with things like "Add opinionated container binding with podman" (containers/bootc#128)
  • bootc install-to-filesystem should also grow support for being provided the base container image externally (e.g. cached in osbuild); we know this is needed for offline ISO installs too. This ties in with the above for the lifecycle-bound app/infra containers
  • We can't drop the osbuild/ostree stages because not every case will use bootc in the near future
  • Agreement that for the ISO/installer case any customization (embedded kickstarts, but also which installer) would likely live external to the container (blueprint or equivalent)
@cgwalters
Contributor Author

bootc install-to-filesystem should also grow support for being provided the base container image externally

Digging in, this is messier than I thought. Still possible, but @ondrejbudai can you state more precisely the concern you had with having bootc install from the running container?

ISTM that in general going forward we'll want to support running images cached in the infrastructure, which will drive us towards using containers-storage most likely, as opposed to e.g. the dir transport. And if we do that, ISTM it's just simpler to keep bootc doing exactly what it's doing today in fetching from the underlying store as opposed to having something else push content in, right?
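
For concreteness, a hedged sketch of the two transports being contrasted here, assuming the fedora-bootc:eln image used elsewhere in this thread:

$ skopeo copy docker://quay.io/centos-bootc/fedora-bootc:eln containers-storage:quay.io/centos-bootc/fedora-bootc:eln   # cache the image in the local containers-storage
$ skopeo copy docker://quay.io/centos-bootc/fedora-bootc:eln dir:/var/tmp/fedora-bootc-eln                              # or export it via the dir transport instead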

@achilleas-k
Member

Just to clarify, because there are two ideas here that sound very similar but are probably unrelated:

  • The issue you're talking about is with the idea of having bootc run from outside the base container image when running install-to-filesystem. So the idea of having it do bootc install-to-filesystem <container ref> <filesystem path> (or bootc install-to-filesystem oci-archive:/path/to/container.tar /path/to/tree for example), from a host machine would require too much work.
    • This is as opposed to running podman run -v/path/to/tree:/target <container ref> bootc install-to-filesystem /target, from the host, which is how it currently works.
  • This issue is not related to running bootc from a container that is in an "offline" storage format like an archive, right? So we can still do podman run -v/path/to/tree:/target oci-archive:/path/to/container.tar bootc install-to-filesystem /target? Which will probably work fine in osbuild. My concern, as we discussed yesterday, is that we're putting a few too many layers of containers/namespaces here, which makes it hard to predict some details, but it might be okay. I think it's time we actually try this and see what we get.

If I'm understanding everything correctly (and if I'm remembering everything from yesterday's conversation), @ondrejbudai's idea to mount the container and run it in bwrap is the alternative to this, but like you said, bootc won't like that as it makes some container-specific assumptions.

@ondrejbudai
Member

I would actually combine #1 with mounting the container.

  1. Mount the container and chroot into it (in osbuild terms, construct a buildroot by "exploding" the container, and use this as the build pipeline for the following steps)
  2. Partition a disk file using tools from inside the container
  3. Mount the disk file to /target
  4. Somehow get the container image in the oci format to e.g. /source/container.tar
  5. Run bootc install-to-filesystem --source oci-archive:/source/container.tar --target /target

Note that I do have a slight preference for passing a whole container storage instead of an oci archive.
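
A very rough sketch of that flow, assuming the --source flag proposed in step 5 (which doesn't exist in upstream bootc at this point) and purely illustrative paths, devices, and image names:

$ truncate -s 20G /var/tmp/disk.img && losetup -P -f /var/tmp/disk.img                  # step 2: disk file attached as /dev/loop0
$ sfdisk /dev/loop0 < layout.sfdisk                                                     # step 2: partition it (layout.sfdisk is hypothetical)
$ mkfs.ext4 /dev/loop0p1 && mount /dev/loop0p1 /target                                  # step 3: mount the disk at /target
$ skopeo copy containers-storage:localhost/mybootc oci-archive:/source/container.tar    # step 4: stash the image as an OCI archive
$ bootc install-to-filesystem --source oci-archive:/source/container.tar --target /target   # step 5: run from inside the chroot built in step 1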

@cgwalters
Contributor Author

Just to level set, this today is sufficient to generate a disk image:

$ truncate -s 20G /var/tmp/foo.disk   # create a sparse 20G backing file
$ losetup -P -f /var/tmp/foo.disk     # attach it as a loop device (e.g. /dev/loop0) with partition scanning
$ podman run --rm --privileged --pid=host --security-opt label=type:unconfined_t quay.io/centos-bootc/fedora-bootc:eln bootc install --target-no-signature-verification /dev/loop0   # run bootc install from inside the image against the loop device
$ losetup -d /dev/loop0               # detach the loop device

@cgwalters
Contributor Author

Backing up to a higher level, I think there are basically two important cases:

  • Generating a disk image from a container image stored in containers-storage: notably this is the most obvious flow in podman-desktop on Mac/Windows. Copying that into a dir or oci-archive is just an unnecessary performance hit.
  • Generating a disk image from a container in a remote registry: this will happen in many production build flows. It seems simplest then if we try to unify this with the first case by always pulling into containers-storage, right?
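
For the second case, a plain pull already lands the image in containers-storage, which is what would unify it with the first:

$ podman pull quay.io/centos-bootc/fedora-bootc:eln   # remote-registry case: the image ends up in the local containers-storage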

@cgwalters
Contributor Author

Also containers/bootc#215 can't work until bootc-image-builder starts using bootc.

@achilleas-k
Member

Backing up to a higher level, I think there are basically two important cases:

* Generating a disk image from a container image stored in `containers-storage`: notably this is the most obvious flow in podman-desktop on Mac/Windows.  Copying that into a `dir` or `oci-archive` is just an unnecessary performance hit.

Which phase of the build is this referring to? If it's about having the stage in osbuild use the host containers-storage directly, I think the performance hit isn't entirely unnecessary but gives us the caching and reproducibility guarantees that we get with osbuild. These aren't directly relevant to the current use case (running it all in an ephemeral container), but I'm also thinking about the whole disk-image build use case more generally (using the same code and flow in the service).
Or is this just about having the osbuild containers cache be itself a containers-storage? That's definitely an idea I'd like to explore.
If we're talking about having a convenient way of using the host's containers-storage in the bootc-image-builder container, I think that's a lot simpler.
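
Something roughly like this, presumably (the mount path and bib invocation are assumptions, not a documented interface):

$ sudo podman run --rm --privileged \
    -v /var/lib/containers/storage:/var/lib/containers/storage \
    quay.io/centos-bootc/bootc-image-builder \
    quay.io/centos-bootc/fedora-bootc:eln   # bib sees the host's root containers-storage directly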

* Generating a disk image from a container in a remote registry: this will happen in many production build flows.  It seems simplest then if we try to unify this with the first case by always pulling into `containers-storage`, right?

Generalising any solution to both cases would be preferable, I agree.

@achilleas-k
Member

the caching and reproducibility guarantees that we get with osbuild

Thinking about this a bit more, I realise my hesitation is mostly around modifying the caching model substantially but now I'm thinking there's a good way to do this with a new, different kind of source. A containers-storage source could use the host container storage as its backend directly and pass it through to the stage.

The one "unusual" side effect would be that osbuild would then have to pull a container into the host machine's containers-storage, which I guess is fine (?). But what happens if osbuild, running as root, needs to access the user's storage? What if it writes to it?

@cgwalters
Contributor Author

But what happens if osbuild, running as root, needs to access the user's storage? What if it writes to it?

One thing that can occur here is that a user might be doing their container builds with rootless podman; so when they want to go make a disk image from it we'd need to copy it to the root storage. Things would seem to get messy having a root process with even read access to a user's storage, because there's locking involved, at least.
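
One hedged way to do that copy (image name illustrative):

$ podman save localhost/mybootc | sudo podman load   # copy from the user's rootless storage into the root storage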

@achilleas-k
Member

so when they want to go make a disk image from it we'd need to copy it to the root storage

I think this makes sense. I'd want to make it explicit somehow that osbuild is doing this. It's one thing to write stuff to a system's cache when building images with osbuild (or any of the IB-related projects); it's another thing to discover that your root container store now has a dozen images in it from a tool that some might think of as unrelated to "container stuff".

@achilleas-k
Member

Pinging @kingsleyzissou here since he's working on this.

@cgwalters
Contributor Author

Which phase of the build is this referring to? If it's about having the stage in osbuild use the host containers-storage directly, I think the performance hit isn't entirely unnecessary but gives us the caching and reproducibility guarantees that we get with osbuild.

I'm not quite parsing this (maybe we should do another realtime sync?) - are you saying using containers-storage is OK or not?

Backing up to a higher level, I think everyone understands this but I do want to state clearly the high level tension here because we're coming from a place where osbuild/IB was "The Build System" to one where it's a component of a larger system and where containers are a major source of input.

I understand the reasons why osbuild does the things it does, but at the same time if those things are a serious impediment to us operating on and executing containers (as intended via podman) then I think it's worth reconsidering the architecture.

These aren't directly relevant to the current use case (running it all in an ephemeral container), but I'm also thinking about the whole disk-image build use case more generally (using the same code and flow in the service).

It's not totally clear to me that in a service flow there'd be a significant advantage to doing something different here; I'd expect that, as far as caching goes, fetching images from the remote registry each time wouldn't be seriously problematic. For any cases where it matters one can use a "pull-through registry cache" model.
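
For instance, a mirror entry in containers-registries.conf(5) can point pulls at such a cache; the mirror host below is hypothetical:

$ sudo tee -a /etc/containers/registries.conf <<'EOF'
[[registry]]
prefix = "quay.io"
location = "quay.io"

[[registry.mirror]]
location = "registry-cache.internal.example/quay"
EOF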

Or is this just about having the osbuild containers cache be itself a containers-storage? That's definitely an idea I'd like to explore.

That seems related but I wouldn't try to scope that in as a requirement here. Tangentially related, I happened to come across https://earthly.dev/ recently, which deeply leans into that idea. At first the "Makefile and Dockerfile had a baby" idea was kind of "eek", but OTOH, digging in more, I get it.

@LorbusChris

Backing up to a higher level, I think there are basically two important cases:

* Generating a disk image from a container image stored in `containers-storage`: notably this is the most obvious flow in podman-desktop on Mac/Windows.  Copying that into a `dir` or `oci-archive` is just an unnecessary performance hit.

* Generating a disk image from a container in a remote registry: this will happen in many production build flows.  It seems simplest then if we try to unify this with the first case by always pulling into `containers-storage`, right?

Coming from the OpenShift/OKD side, I think ideally the tool for ostree container to disk image conversion can be run independently of osbuild, i.e. it can also be wrapped by other pipeline frameworks such as prow, tekton, argo workflows, and even jenkins for any kind of CI/CD or production build.

Agreeing on keeping the container images in containers-storage everywhere seems fine to me.

@LorbusChris

LorbusChris commented Dec 6, 2023

@achilleas-k it sounds like, with an alternative root for the ostree container storage (containers/bootc#215), your concerns regarding all the images getting pulled into the machine's main containers-storage might be addressed? IIUC, the ostree containers-storage could be kept completely separate and e.g. live on a volume that gets mounted during the pipelinerun.

@achilleas-k
Member

Sounds like a good solution, yes.

@achilleas-k
Member

achilleas-k commented Dec 6, 2023

Which phase of the build is this referring to? If it's about having the stage in osbuild use the host containers-storage directly, I think the performance hit isn't entirely unnecessary but gives us the caching and reproducibility guarantees that we get with osbuild.

I'm not quite parsing this (maybe we should do another realtime sync?) - are you saying using containers-storage is OK or not?

Well, at the time when I wrote this I was thinking it might be a problem but in my follow-up message (admittedly, just 5 minutes later) I thought about it a bit more and changed my mind.

Backing up to a higher level, I think everyone understands this but I do want to state clearly the high level tension here because we're coming from a place where osbuild/IB was "The Build System" to one where it's a component of a larger system and where containers are a major source of input.

I agree that this tension exists and it's definitely good to be explicit about it. I don't think the containers being a source of input is that big of an issue though. The containers-store conversation aside (which I now think is probably a non-issue), I think a lot of the tension comes from osbuild making certain decisions and assumptions about its runtime environment that are now changing. There was an explicit choice to isolate/containerise stages that are (mostly) wrappers around system utilities. Now we need to use utilities (podman, bootc) that need to do the same and it's not straightforward to just wrap one in the other. For example, right now, our tool is started from (1) podman, to call osbuild which runs (2) bwrap to run rpm-ostree container image deploy .... Replacing that with bootc requires starting from (1) podman to call osbuild which will run (2) bwrap to call (3) podman to run (4) bootc, and bootc will need to "take over" a filesystem and environment that is running outside of (3) podman.

I understand the reasons why osbuild does the things it does, but at the same time if those things are a serious impediment to us operating on and executing containers (as intended via podman) then I think it's worth reconsidering the architecture.

At the end of the day we can do whatever's necessary. The architecture is the way it is for reasons but those reasons change or get superseded. I think a big part of the tension is coming from me (personally) trying to find the balance between "change everything in osbuild" and "change everything else to fit into osbuild" (and usually leaning towards the latter because of personal experience and biases). Practically, though, the calculation I'm trying to make is which point between those two gets us to a good solution faster.

This is all to say, the source of the containers is in my mind a smaller issue compared to the (potentially necessary) rearchitecting of some of the layers I described above. We already discussed (and prototyped) part of this layer-shaving for another issue, and I think this is where we might end up going now (essentially dropping the (2) bwrap boundary).

These aren't directly relevant to the current use case (running it all in an ephemeral container), but I'm also thinking about the whole disk-image build use case more generally (using the same code and flow in the service).

It's not totally clear to me that in a service flow there'd be significant advantage to doing something different here; I'd expect as far as "cache" fetching images from the remote registry each time wouldn't be seriously problematic. For any cases where it matters one can use a "pull-through registry cache" model.

I wasn't trying to suggest we wouldn't cache in the service. I just meant to say that, if we tightly couple this particular build scenario to having a container store, we'd also have to think about how that works with our current service setup. But I might be overthinking it.

Or is this just about having the osbuild containers cache be itself a containers-storage? That's definitely an idea I'd like to explore.

That seems related but I wouldn't try to scope that in as a requirement here.

Given the comments that came later in this thread, I think I have a much clearer picture of what a good solution looks like here.

@cgwalters
Contributor Author

cgwalters commented Dec 8, 2023

I'm working on ostreedev/ostree#3114 and technically for the feature to work it requires the ostree binary performing an installation to be updated. With the current osbuild model, that requires updating the ostree inside this container image in addition to being in the target image. With bootc install-to-filesystem, it only requires updating the target container.

@achilleas-k
Member

@ondrejbudai and I (mostly Ondrej) made a lot of progress on this today. There's a lot of cleaning up needed and we need to look into some edge cases, but we should have something to show (and talk about) on Monday.

@ondrejbudai
Member

podman run --rm --privileged --pid=host --security-opt label=type:unconfined_t quay.io/centos-bootc/fedora-bootc:eln bootc install --target-no-signature-verification /dev/loop0

While running this command in osbuild should be possible, it means that we have a container inside a container, which seems needlessly complex. Thus, we tried to decouple bootc from podman. The result is in this branch: containers/bootc@main...ondrejbudai:bootc:source

I was afraid that it would be hard, but it actually ended up being quite simple and straightforward. We also have a PoC with required changes to osbuild, new stages and a manifest. Note that this also needs osbuild/osbuild#1501, otherwise bootupd fails on grub2-install.

The most important thing this branch does is add a --source CONTAINER_IMAGE_REF argument. When this argument is used, bootc no longer assumes that it runs inside a podman container. Instead, it uses the given reference to fetch the container image. It's important to note that bootc still needs to run inside a container created from the given image; however, that's super-simple to achieve in osbuild.

If we decide to go this way, using bootc install-to-filesystem in bootc-image-builder seems quite straightforward. We are happy to work on cleaning up the changes required in bootc and adding some tests to bootc's CI in order to ensure that --source doesn't break in the future.
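
As a hedged sketch, the non-podman invocation from osbuild's side could look something like this (the bwrap bind mounts and paths are illustrative; --source is the flag from the branch above):

$ bwrap --bind /path/to/exploded-container / \
        --bind /target /target --bind /source /source \
        --dev /dev --proc /proc \
        bootc install-to-filesystem --source oci-archive:/source/container.tar /target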


We think the method above is acceptable for osbuild. However, it's a bit weird, because all the existing osbuild manifests build images in these steps:

  1. Prepare the file tree
  2. Create a partitioned disk
  3. Mount it
  4. Copy the file tree into the disk
  5. Install the bootloader

Whereas with bootc install-to-filesystem --source, it becomes:

  1. Create a partitioned disk
  2. Mount it
  3. Install everything

This has pros and cons: there's less I/O involved (you don't need to do the copy step), but the copy stage isn't actually something that takes much time compared with the other steps. The disadvantage is that you cannot easily inspect the file tree, because osbuild outputs just the finished image. This hurts our developer experience, because when debugging an image, you usually want to see the file tree, which osbuild can easily output if we use the former flow.

Upon inspecting bootc, it might not be that hard to split bootc install-to-filesystem into two commands:

  1. Prepare the file tree
  2. Install the bootloader and finalize the partitions

Then the osbuild flow might just become:

  1. Call bootc prepare-tree
  2. Create a partitioned disk
  3. Mount it
  4. Copy the file tree into the disk
  5. Call bootc finish-disk

This would probably mean some extra code in bootc, but it might be worth just doing that instead of paying the price in osbuild and harming its usability. Note that nothing changes about the way bootc is currently used in the wild.
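
Purely as a sketch of that hypothetical split (prepare-tree and finish-disk are proposed names, not existing bootc commands; paths and devices are illustrative):

$ bootc prepare-tree --source oci-archive:/source/container.tar /tree                        # 1. prepare the file tree
$ sfdisk /dev/loop0 < layout.sfdisk && mkfs.ext4 /dev/loop0p1 && mount /dev/loop0p1 /target  # 2.-3. create and mount the partitioned disk
$ cp -a /tree/. /target/                                                                     # 4. copy the file tree into the disk
$ bootc finish-disk /target                                                                  # 5. install the bootloader and finalize the partitions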

@cgwalters wdyt?

@dustymabe

Note that this also needs osbuild/osbuild#1501, otherwise bootupd fails on grub2-install.

glad I could help, and at the right time too :)

@mvo5
Collaborator

mvo5 commented Dec 14, 2023

Fwiw, I am working on extracting the "container as buildroot" parts of osbuild/osbuild@main...ondrejbudai:osbuild:bootc in https://github.com/osbuild/images/compare/main...mvo5:add-container-buildroot-support?expand=1 so that it can be used in bootc-image-builder (still a bit rough in there). It would also fix the issue that we cannot build stream9 images right now (which is the main intention of this work, but it's nice to see that it seems generally useful).

@cgwalters
Contributor Author

The result is in this branch: containers/bootc@main...ondrejbudai:bootc:source

First patch is an orthogonal cleanup, mind doing a PR with just that to start?

Then another PR with the rest?

This hurts our developer experience, because when debugging an image, you usually want to see the file tree, which osbuild can easily output if we use the former flow.

But...the file tree is already a container which you can inspect with podman run etc. right?
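
e.g. something as simple as:

$ podman run --rm -it quay.io/centos-bootc/fedora-bootc:eln bash   # poke around the file tree without building a disk image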

@cgwalters
Contributor Author

bootc install-to-filesystem --source

BTW just a note, this approach will require ostreedev/ostree#3094 in the future, because we already have problems with the fact that ostree (and in the future, bootc) really want to own the real filesystem writes, and osbuild today does not propagate fsverity.

cgwalters added a commit to cgwalters/bootc that referenced this issue Dec 14, 2023
It may be that we're involved via a container flow where
e.g. `/tmp` is already "properly" set up as a tmpfs.

In that case we don't need to do a dance in retargeting.

xref osbuild/bootc-image-builder#18 (comment)

Signed-off-by: Colin Walters <walters@verbum.org>
@cgwalters
Contributor Author

I don't see how this would work any other way. What's the alternative? Again, if it's going to be creating the disk, it needs to know what to create.

AIUI today there is no "dynamism" in osbuild manifests, which is what you were getting at above.

If osbuild had a capability to pass dynamic information between stages (which AIUI doesn't exist?) then we could still skip fetching the container image on the "controller" (not sure what it's called) which generates the manifest, except that org.osbuild.fstab would have

              {
                "label": "root",
                "vfs_type": $env.container_root,
                "path": "/",
                "freq": 1,
                "passno": 1
              },

Maybe a good analogy is with GitHub Actions step contexts, where later steps can access something dynamically set by an earlier one. (Again, something similar may exist in osbuild and I don't know.)

@achilleas-k
Member

That just seems like a bad idea that violates very core principles of the osbuild manifest. But more importantly, I don't see why it's necessary. A manifest is baked, frozen. The dynamism happens in the osbuild manifest generator (like bootc-image-builder, osbuild-composer, or osbuild-mpp). The vfs_type should be known at manifest creation time. If it's not, then why not? We're designing this system and we can make these things easy on us and the system more robust.
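
For example, the generator could inspect the image up front and bake the discovered value into the frozen manifest (the label name here is purely illustrative, not an agreed-upon convention):

$ podman image inspect --format '{{ index .Labels "containers.bootc.rootfs" }}' quay.io/centos-bootc/fedora-bootc:eln   # e.g. returns "xfs", which the generator writes into org.osbuild.fstab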

@cgwalters
Contributor Author

cgwalters commented Jan 22, 2024

It is known, it's just known in the container image digest, right? Another way to say this is that the system is still reproducible; it's not random.

@achilleas-k
Member

It's not random, but it's more brittle and less debuggable. The actions of each stage become less localised and harder to predict. I mean, you're right: that stage, with a variable in it, will behave deterministically with a given container as input. But the more that's left up to the runtime of the build to discover, the less stable the build system becomes.

@cgwalters
Contributor Author

I think there's some reasonable difference of opinion here...I have quite simply not had "debug osbuild manifest" as any kind of load-bearing part of my workday until recently, and it's quite possible that if I knew more about it I could appreciate the stability you reference.

What concerns me FAR more than the nuances of this debate is that inherently with a container flow we are blasting open the doors for what can be done in "the build system" (i.e. in a Containerfile without osbuild or anything else having any say in what can be done) - and a good chunk of that stuff will only fail dynamically, long after we've made a disk image that has a perfectly fine manifest from osbuild's PoV.

#102 was quite an illuminating (to me) recent example - to me it's obvious that one can't just podman create inside a podman (container) build! I mean maybe we should try to make that work...it'd be an exciting new thing alongside quadlets and pods and hand-crafted systemd units that ExecStart=podman run etc. But just starting from the semantics of nesting container images alone, not to mention the fact that all the containers/storage bolt.db and JSON stuff is totally unprepared for "merge semantics" on upgrades...

(OK that one did happen to fail at deployment time but for a probably obscure reason, there's tons of other dynamic-only stuff)

Anyway, in the very short term, whatever gives us baseline configurability of the root filesystem type that osbuild is happy with (which I guess is just a label), then...OK. What leaves me a bit uncertain is that this just doesn't seem to work long term with having the container image drive things at a much more significant level, around things like the larger-picture partition layout. But again, we don't need that right now, so...if we end up with just a special case for the rootfs and circle back to some non-label mechanism for other things later, that's fine.

@cgwalters
Contributor Author

cgwalters commented Jan 29, 2024

I think someone said there was some WIP code for this, is that true? If so, where is it?

@mvo5
Collaborator

mvo5 commented Jan 30, 2024

I think someone said there was some WIP code for this, is that true? If so, where is it?

We are working on this in osbuild/osbuild#1547

@cgwalters
Contributor Author

Ah sorry of course, I was looking in osbuild/images.

@mvo5
Collaborator

mvo5 commented Jan 30, 2024

Ah sorry of course, I was looking in osbuild/images.

No worries - there is a draft for osbuild/images as well in osbuild/images#412

cgwalters added a commit to cgwalters/bootc-image-builder that referenced this issue Feb 14, 2024
See containers/bootc#294
This is particularly motivated by CentOS/centos-bootc-dev#27
because with that suddenly `dnf` will appear to start working
but trying to do anything involving the kernel (i.e. mutating `/boot`)
will end in sadness, and this puts a stop to that.

(This also relates of course to ye olde osbuild#18
 where we want the partitioning setup in the default case
 to come from the container)

Signed-off-by: Colin Walters <walters@verbum.org>
cgwalters added a commit to cgwalters/centos-bootc that referenced this issue Feb 14, 2024
This came out of discussion in CentOS/centos-bootc-dev#27

Basically...I think what we should emphasize in the future
is the combination of `bootc` and `dnf`.

There's no really strong reason to use `rpm-ostree` at container
build time versus `dnf`.  Now on the *client* side...well,
here's the interesting thing; with transient root enabled,
`dnf install` etc generally just works.

Of course, *persistent* changes don't.  However, anyone who
wants that can just `dnf install rpm-ostree` in their container
builds.

There is one gap that's somewhat important which is kernel arguments.
Because we haven't taught `grubby` to deal with ostree, and
we don't have containers/bootc#255
to change kargs per machine outside of install time one will
need to just hand-edit the configs in `/boot/loader`.

Another fallout from this is that `ostree container` goes away
inside the booted host...and today actually this totally
breaks bib until osbuild/bootc-image-builder#18
is fixed.

Probably bootc should grow the interception for that too optionally.
@cgwalters
Contributor Author

This one also blocks CentOS/centos-bootc#314

github-merge-queue bot pushed a commit that referenced this issue Feb 15, 2024
See containers/bootc#294
This is particularly motivated by CentOS/centos-bootc-dev#27
because with that suddenly `dnf` will appear to start working
but trying to do anything involving the kernel (i.e. mutating `/boot`)
will end in sadness, and this puts a stop to that.

(This also relates of course to ye olde #18
 where we want the partitioning setup in the default case
 to come from the container)

Signed-off-by: Colin Walters <walters@verbum.org>
@mvo5
Collaborator

mvo5 commented Feb 21, 2024

We need input on containers/bootc#357 before we can resolve this fully.

@mvo5
Collaborator

mvo5 commented Mar 22, 2024

Sorry that this keeps dragging on. A quick status update about the PRs needed to make this a reality

But with those, the final destination should be reached and we'll be "bootc install to-filesystem" all the way :)

@cgwalters
Contributor Author

One thing I very belatedly realized (and we all should have earlier) is that until we merge this, we have the same problem with bib that I papered over in containers/bootc#417.

@mvo5
Collaborator

mvo5 commented Apr 12, 2024

One thing I very belatedly realized (and we all should have earlier) is that until we merge this, we have the same problem with bib that I papered over in containers/bootc#417.

We are close (for real this time!):
osbuild/images#571
#342

both relatively small and (IMHO) nice.

@ondrejbudai
Member

FTR, this was partly resolved by #342. We are still using the legacy pipeline for cross-arch building, but the native one already uses bootc. 🥳
