/var/lib/lxd/containers/<name>/rootfs only exists when container is running. #3784
Comments
Yes, that is expected behavior. Keeping all the containers mounted was causing a number of kernel issues, so we now mount things only on demand.

Direct access to the container's filesystem at /var/lib/lxd/containers/NAME/rootfs has never been part of any documented or supported way of interacting with LXD. Interactions with the container's filesystem should go through our file management API, or through an exec session after the container is started; anything else is unsupported and subject to change.

In your case, from what I remember of what you're doing, the easiest option is probably to manually mount the container before making any changes. You can do so with "zfs mount" and "zfs unmount"; the relevant paths can be found in "zfs list -t all".
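The manual-mount workflow described above might look like the sketch below. The dataset name `lxd/containers/x1` and the file being edited are assumed examples; the real dataset names on your system are whatever `zfs list -t all` reports. The script checks for `zfs` first so it degrades gracefully on machines without ZFS.

```shell
#!/bin/sh
# Sketch of manually mounting a stopped container's dataset,
# editing it, and unmounting before the container is started.
# "lxd/containers/x1" is an assumed example dataset name.
set -e
DATASET="lxd/containers/x1"

if command -v zfs >/dev/null 2>&1; then
    # Find the dataset backing the container.
    zfs list -t all

    # Mount it, make the change, then unmount again so there are
    # no duplicate mount table entries when the container starts.
    zfs mount "$DATASET"
    echo "edited while stopped" > /var/lib/lxd/containers/x1/rootfs/etc/motd
    zfs unmount "$DATASET"
else
    echo "zfs not available; commands shown for illustration only"
fi
```

Per the advice above, mount only the dataset you need and unmount it before `lxc start`, rather than mounting everything.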
Is "zfs mount -a" enough to get everything mounted?
I don't believe "zfs mount -a" mounts filesystems that are flagged as "canmount=false". In any case, I wouldn't recommend mounting everything, because that's just asking for the zfs EBUSY kernel bug to bite you. You should only ever mount what you need and nothing more, making sure things are unmounted before starting the container, so that we avoid duplicate mount table entries that confuse the heck out of zfs.
The other issue with zfs (or any other backend) is that I have to have backend-specific knowledge and determine which backend is in use.
Yep, but that part isn't new. We've never kept containers mounted when they're on LVM or Ceph. |
I guess I'd argue that any problems you had due to keeping all the containers mounted are still inevitable. If there are kernel issues, then you've just pushed finding them to when a user starts running a bunch of containers all at once, rather than when they have a bunch of containers at rest (and who does that?).
Not really. The problem we had was that if containers were auto-mounted, then daemons starting at boot time (snapd, for example) would potentially fork your mount table, confusing ZFS among other things. With containers only being mounted immediately before the container's mount namespace is created, that problem is avoided entirely. Believe me, we don't like doing these kinds of annoying multi-release changes just because it's fun; we do them because we've confirmed that they fix a problem.
jfgibbins commented Sep 8, 2017
Was that a recent change? 'Cause I used to ls through those directories all the time. At what density did you start to see issues? I've been up around 40-50 but hadn't noticed anything. BTW, Stephane, Christian and Scott all in one thread, scary!
We've seen issues with as few as 4-5 containers. The problem tended to be more closely related to the number of mount namespaces, when they were created, and whether the kernel would reap them quickly enough, than to the number of containers (more containers makes the mount table bigger, so it takes longer to flush).
And those directories will still work fine for most users, so long as the container is running. It's only when it's not running that we keep it unmounted.
smoser commented Sep 7, 2017
Required information
Distribution: Ubuntu
Distribution version: 17.10
Issue description
The directory /var/lib/lxd/containers/<name>/rootfs only exists when the container is running.
This makes some of our workflows based on 'lxd init ... change contents ... lxd start' not work correctly, specifically with 'mount-image-callback'.
lxc launch ubuntu-daily:xenial x1
Then look for the directory in the expected path.
Alternatively, try 'mount-image-callback lxd:x2 mchroot'.
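For the "change contents before start" step, the supported route is LXD's file API rather than the rootfs path. A minimal sketch, assuming a hypothetical container name `x1` and a local file `./my-motd` (LXD mounts the storage on demand, so `lxc file` works on stopped containers):

```shell
#!/bin/sh
# Sketch: modify a container's filesystem before first start,
# using only the supported "lxc file" API. "x1" and "./my-motd"
# are assumed example names.
set -e

# Create the container without starting it.
lxc init ubuntu-daily:xenial x1

# Read and write files through the API while the container is stopped.
lxc file pull x1/etc/hostname -
lxc file push ./my-motd x1/etc/motd

# Start it once the contents are in place.
lxc start x1
```

This avoids any backend-specific knowledge (zfs vs LVM vs Ceph), which addresses the concern raised earlier in the thread.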
Information to attach
- dmesg
- lxc info NAME --show-log
- cat /var/log/lxd/lxd.log