qemu: implement boot-time checkin (via ovirt-guest-agent protocol) #458
[Tackling only one bit out of the larger context]
This sounds not very different from the Azure boot check-in, which we are performing in a non-homogeneous way today (RHCOS does it in the initramfs, FCOS does it afterwards). We could think about consolidating it into a "first-boot initramfs reached" check-in across the various platforms in Afterburn, to be run before Ignition.
(forwarding some real time discussion here) I think you're absolutely right - we should think of this as "add a first-boot checkin to our qemu model", since that model already exists on some clouds. And then further discussion turned up https://wiki.qemu.org/Features/GuestAgent - so we could implement the minimum there in Afterburn, and have coreos-assembler time out if the guest doesn't reply to a sync pretty early on.
We can make the afterburn checkin not require networking on qemu, but I'm not very concerned about this TBH because qemu networking is quite fast.
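The guest-sync handshake mentioned above is small enough to sketch. Per the qemu-guest-agent protocol, the host writes a `guest-sync` request on the virtio-serial channel and the agent echoes the `id` back. A minimal sketch of the host-side message handling (the helper names are mine; in coreos-assembler this would run against the chardev socket with a deadline):

```python
import json

def guest_sync_request(sync_id: int) -> bytes:
    """Build a qemu-guest-agent guest-sync request (newline-delimited JSON)."""
    msg = {"execute": "guest-sync", "arguments": {"id": sync_id}}
    return (json.dumps(msg) + "\n").encode()

def check_guest_sync_reply(line: bytes, sync_id: int) -> bool:
    """True if the agent echoed our id back, i.e. the guest is alive."""
    try:
        reply = json.loads(line)
    except ValueError:
        return False
    return reply.get("return") == sync_id
```

coreos-assembler would write the request shortly after launch and fail the run if no matching reply arrives within the timeout.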
OK, moved this issue to afterburn. I took a quick look at implementing this. I'd like to propose that we have afterburn run itself as a systemd generator early on startup, rather than shipping static units; this would give us a clean way to order our guest startup unit invocation.
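For illustration, a systemd generator is just an executable invoked very early with output directories into which it writes units, so it can consult the kernel command line and emit a check-in unit only on the relevant platform. A hedged sketch (the unit name, `ExecStart` command, and use of the `ignition.platform.id` karg are assumptions for illustration, not Afterburn's actual interface):

```python
import os

def platform_id(cmdline: str) -> str:
    """Extract ignition.platform.id=<id> from a kernel command line string."""
    for tok in cmdline.split():
        if tok.startswith("ignition.platform.id="):
            return tok.split("=", 1)[1]
    return ""

def generate(early_dir: str, cmdline: str) -> bool:
    """Write a (hypothetical) check-in unit into the generator dir on qemu."""
    if platform_id(cmdline) != "qemu":
        return False
    os.makedirs(early_dir, exist_ok=True)
    unit = os.path.join(early_dir, "afterburn-checkin.service")
    with open(unit, "w") as f:
        # ExecStart is a placeholder command, not a real afterburn verb.
        f.write("[Unit]\n"
                "Description=Afterburn boot check-in (sketch)\n\n"
                "[Service]\n"
                "Type=oneshot\n"
                "ExecStart=/usr/bin/afterburn-checkin\n")
    return True
```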
That doesn't look like a great fit. In particular, the protocol is unidirectional (host->guest), which means that we have to sit in the initramfs waiting to be polled (instead of actively signaling a guest->host event, like we do on Azure and Packet), and we can't really leave the initramfs until we have been polled. Perhaps a more suitable protocol to target would be the ovirt-guest-agent one, which seems to support sending guest->host events. Conveniently, it already defines a system-startup event.
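For reference, ovirt-guest-agent messages are newline-delimited JSON sent over a virtio-serial port, with the message type carried in a `__name__` field. A minimal sketch of the guest->host event framing (the exact event name and port path here are assumptions to be checked against the protocol spec):

```python
import json

# Conventional device path for the ovirt-guest-agent channel (an assumption).
OGA_PORT = "/dev/virtio-ports/ovirt-guest-agent.0"

def startup_event() -> bytes:
    """Serialize a guest->host startup event, newline-terminated.

    The event name follows the "system-startup" event mentioned above;
    the real protocol message may carry additional fields.
    """
    return (json.dumps({"__name__": "system-startup"}) + "\n").encode()
```

In the initramfs the guest side would simply open `OGA_PORT` for writing and emit this one line before handing off to Ignition.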
You're right, once I started working on the code I noticed the inversion of control. Discussing the oVirt protocol though gets into the much bigger topic of whether we want to try to implement more of the protocol for real as an agent on that platform, and how the platform would behave with our likely-to-be-a-subset of the functionality. I guess as a start we could just respond to the channel on |
Additional note: on Azure the firstboot check-in also ejects the Virtual CD (paging @darkmuggle for confirmation), so I fear we cannot really check in before Ignition fetching is completed.
Right, this comment proposes making our systemd units platform-dependent. (Also, we discussed a while ago changing Ignition on Azure to save the config to
We don't need to do it as a generator, though, right? We can just ship some static units with |
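One way to make statically shipped units platform-dependent without a generator is systemd's `ConditionKernelCommandLine=`, which can match an exact `key=value` token on the kernel command line. A sketch, assuming the `ignition.platform.id=qemu` karg and a hypothetical unit/binary name:

```ini
# afterburn-checkin.service (hypothetical name) -- skipped on non-qemu platforms
[Unit]
Description=Afterburn first-boot check-in (sketch)
ConditionKernelCommandLine=ignition.platform.id=qemu

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/afterburn-checkin
```

The unit ships unconditionally but the condition makes it a no-op everywhere except qemu, which avoids generator machinery at the cost of one such unit per platform-specific behavior.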
Yeah; the duplication there might get ugly, but OTOH I guess we could generate the units statically, i.e. as part of the build process.
@cgwalters pointed me to this ticket, so for the record, as he knows, in the last week I've been working on running openQA tests on Fedora CoreOS. It's not very difficult, and we have it working already; the only question mark for a 'production' deployment would be when and on what to trigger the tests. openQA definitely does do a fairly good job of letting you know if the artifact under test boots successfully in a VM. For now the work lives on a branch of the openQA test repo and is only deployed on my pet openQA instance, which is not up all the time (it heats up my office...:>); we can do a production deployment quite easily once the triggering questions are sorted out.
To be clear, kola already covers this pretty well in general - we just have a few specific gaps, such as the case when a Secure Boot signature validation fails.
for the record once more, we did deploy the openQA CoreOS testing stuff to production. The scheduling works by just checking once an hour if any of the named streams has been updated, and scheduling tests for the new build if so. Results show up at https://openqa.fedoraproject.org/group_overview/1?limit_builds=100 , e.g. here. We can write/run more tests if desired; requests can be filed at https://pagure.io/fedora-qa/os-autoinst-distri-fedora/issues .
Currently if FCOS fails in very early boot (e.g. in the bootloader or kernel, before switching to the initramfs), it's...hard to detect consistently. For example, we have a test for Secure Boot, but it turns out that if the kernel fails to verify then...coreos-assembler today hangs for a really long time.
We can scrape the serial console in most cases, but we configure the Live ISO not to log to a serial console...so we'd end up having instead to do something like openQA and do image recognition on the graphics console 😢
Now we debated this somewhat in coreos/ignition-dracut#170, and I argued strongly that the most important thing was to cover the "failure in initramfs" case, and that we could support the "stream journal in general" by injecting Ignition.
In retrospect...I think I was wrong. It would be extremely useful for us to stream the journal starting from the initramfs at least by default on qemu.
In particular, what we really want is some sort of message from the VM that it has entered the initramfs, but before we start processing Ignition. If we're doing things like reprovisioning the rootfs, it becomes difficult to define a precise "timeout bound". But I think we can e.g. reliably time out after something quite low (like 10 seconds) if we haven't seen the "entered initramfs" message.
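The timeout logic described here is simple to sketch: watch the console/journal stream for the "entered initramfs" message and give up after a short deadline. A hedged sketch (the marker string and API are hypothetical; a real implementation in coreos-assembler would wrap the qemu serial stream with non-blocking reads so the deadline fires even when no lines arrive):

```python
import time

MARKER = "entered initramfs"  # hypothetical marker emitted by the guest

def wait_for_marker(lines, timeout=10.0, clock=time.monotonic):
    """Scan an iterable of console lines for the boot marker.

    Returns True once the marker is seen, False if `timeout` seconds
    elapse or the stream ends first. Any iterable of strings works,
    which makes the policy easy to test without a VM.
    """
    deadline = clock() + timeout
    for line in lines:
        if clock() > deadline:
            return False
        if MARKER in line:
            return True
    return False
```

The point of the low bound (e.g. 10 seconds) is that it only covers the well-defined window up to the initramfs message; anything after that (Ignition fetch, rootfs reprovisioning) gets its own, looser timeouts.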
So here's my proposal: