rkt: lifecycle management #6
@vcaputo so braindumping here:
Excuse me if I'm wrong, but I think I'm missing something here.
As far as I can tell,
See exit-watcher.service for how stage1/reaper.sh gets invoked on success. Have you observed nothing being created in

We're still sorting out the status/gc/lifecycle side of rkt, the big question being: how does one distinguish active from inactive containers? An exclusive advisory lock on the container's

We've been using nspawn in stage1 out of convenience, but it's increasingly becoming less so as things mature. In the course of fleshing out the lifecycle details, nspawn may end up being replaced entirely by something specialized and minimal.
Mentioned briefly with code in #35, fwiw, if anyone wants to dig into this further :-) (currently distracted by other priorities)
Sorry, didn't catch that one.
I had the understanding that stage1 would be terminated after stage2 was launched, thereby creating something in this directory. But it appears that no status is created when stage2 starts (and the app gets executed), nor when you kill the process.
I see now why this is a problem, as I thought status would work in a different way. What about copying machinectl into stage1 as well, so it can be used to get the status of the containers?
Since rkt run is intended to function on non-systemd hosts as well, we're not relying on systemd-specific facilities in the host for general functionality. We do have systemctl in stage1, but that's limited to interacting with the container's stage1 systemd instance. In the future we'll probably register with the host's systemd when available for improved integration, but that doesn't preclude the need for good general solutions.

I've put together a hacky PR which gives us both working advisory locks and a recorded container pid here: #244. This is not an attractive long-term solution, but it does facilitate primitives for gc, list, status, etc. I think it's a reasonable intermediate step enabling movement on the other pieces.
@vcaputo Is there a document on the current state of rkt gc/etc. that we can point users at?
#414 is a first stab at an explanation.
We need to define how rkt knows that all of the processes running in a given stage1 have been destroyed and that the root filesystem can be cleaned up.