Support per-container and per-image storage backend #1784

Closed
onionjake opened this Issue Mar 21, 2016 · 5 comments

onionjake commented Mar 21, 2016

Required information

  • Distribution: Ubuntu
  • Distribution version: 16.04
  • The output of "lxc info":
apicompat: 0
auth: trusted
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
     <...snip...>
  driver: lxc
  driverversion: 2.0.0.rc11
  kernel: Linux
  kernelarchitecture: x86_64
  kernelversion: 4.4.0-14-generic
  server: lxd
  serverpid: 1823
  serverversion: 2.0.0.rc4
  storage: dir
  storageversion: ""
config:
  storage.zfs_pool_name: lxd_pool
public: false

Issue description

LXD reverts to dir storage if the ZFS pool isn't available. Here the pool isn't available at boot because it sits on an encrypted LUKS device that hasn't been unlocked yet. LXD should fail instead of silently reverting to dir.

Steps to reproduce

  1. Set up a LUKS device: sudo cryptsetup luksOpen <luks_device> lxd_volume
  2. Run sudo lxd init. Pick zfs, create a new pool, and point it at /dev/mapper/lxd_volume.
  3. lxc info shows storage: zfs.
  4. Reboot.
  5. lxc info now shows storage: dir. lxc list and lxc launch still work, but on dir storage. Expected an error along the lines of "lxd storage unavailable". (The full sequence is collapsed into the sketch below.)
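
For convenience, the whole reproduction collapses to a few commands (a sketch: <luks_device> stays a placeholder, and lxd_pool is the pool name from the report above):

  # open the encrypted volume, then let "lxd init" build the zpool on it
  sudo cryptsetup luksOpen <luks_device> lxd_volume
  sudo lxd init                 # pick zfs, new pool, device /dev/mapper/lxd_volume
  lxc info | grep 'storage:'    # reports: storage: zfs
  sudo reboot
  # after boot, before cryptsetup luksOpen has been re-run:
  lxc info | grep 'storage:'    # reports: storage: dir  (the silent fallback)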

@stgraber stgraber added this to the later milestone Mar 21, 2016

stgraber commented Mar 21, 2016

The current behavior, while a bit odd, has its reasons:

  • We support automatic backends, like btrfs, which don't need any user configuration. Failure to load the btrfs backend, for example because the btrfs tools aren't installed, isn't considered fatal, as that may well be how the system is meant to work.
  • We don't want to return an error, and thereby prevent daemon startup, when a backend fails to load, as there would then be no way to unset it.
  • All our backends support "upgrading" from the directory backend: if you have images that were loaded before you set your backend to zfs, btrfs or lvm, the image is imported into your new backend for you on first container creation. That's meant to reduce pain when changing backends and couldn't be done without the fallback mechanism.

So it doesn't seem like something we can really fix for 2.0. In the future we may want to add more logic to LXD around storage handling, such as keeping track of which backend each individual container and image uses, or allowing multiple zpools, at which point we could re-architect this code to return a per-pool status and so allow starting LXD in degraded mode.
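
In the meantime, one way to notice that the fallback kicked in after a boot is to compare what the daemon reports with the actual pool state. A sketch using stock lxc/zpool commands, assuming a systemd-managed daemon; lxd_pool and <luks_device> come from the report above:

  # what the daemon is actually using right now
  lxc info | grep 'storage:'    # "dir" after a failed pool import, "zfs" otherwise
  # whether the configured pool is visible to ZFS at all
  sudo zpool list lxd_pool      # fails while the LUKS volume is still closed
  # once the volume is unlocked again, re-import and restart the daemon
  sudo cryptsetup luksOpen <luks_device> lxd_volume
  sudo zpool import lxd_pool
  sudo systemctl restart lxd    # the backend is re-detected at startup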

onionjake commented Mar 22, 2016

It was extremely surprising to me that, having configured storage to be ZFS on a specific pool, LXD still let me create containers when that pool wasn't available. Not only do I want the containers on the ZFS mount so they get LUKS encryption; the machine is also configured such that /var/ would have filled up quickly under dir storage, which would be equally surprising!

I appreciate that making LXD as robust as possible so it "just works" is a great design goal too. I'm looking forward to seeing how this can be addressed in the future to reduce surprises!

@stgraber stgraber changed the title lxd reverts to dir storage if zpool is unavailable Support per-container and per-image storage backend Apr 27, 2016

gittygoo commented Aug 16, 2016

This: "Support per-container and per-image storage backend" really needs to happen, we just got stung by this limitation while evaluating LXD.

Ideally, when setting up a new container, we should be able to specify which pool it should be created under, along the lines of the sketch below.
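
A purely hypothetical invocation (the --storage flag and the ceph_pool pool name are invented for illustration; nothing like this exists in LXD today):

  # hypothetical: place this container on a named pool instead of the global backend
  lxc launch ubuntu:16.04 web1 --storage ceph_pool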

Please make this happen; it is currently a blocker for us on an LXD + Ceph setup where Ceph itself runs inside LXD containers and provides zpools for further LXD containers.

Thanks for the great work so far

stgraber commented Aug 16, 2016

It's absolutely planned but likely won't happen for another few months.

@stgraber stgraber modified the milestones: later, soon Nov 20, 2016

stgraber commented Nov 20, 2016

Closing in favor of #2242, which pretty well defines the plan we have in mind.

@stgraber stgraber closed this Nov 20, 2016

@stgraber stgraber modified the milestones: soon, lxd-2.6 Nov 20, 2016
