
Direct attach of ceph rbd/fs to containers #6174

Open
stgraber opened this issue Sep 9, 2019 · 9 comments

Comments

@stgraber (Member) commented Sep 9, 2019

We've had a number of users reach out wanting to connect an existing CEPH RBD or FS directly to an LXD container, without those coming from a LXD-managed pool.

This should be doable by extending our current disk device, adding options to specify the backend (rbd/fs), the pool name, the volume name, and other CEPH-related options.

@stgraber stgraber added the Feature label Sep 9, 2019
@stgraber stgraber modified the milestones: soon, later Sep 9, 2019
@ribsthakkar commented Sep 17, 2019

Hi @stgraber, my team (@abbykrish, @anusha-paul, and I) will work on this issue.

@stgraber (Member, Author) commented Sep 18, 2019

@ribsthakkar assigned it to you for now; I'll need to have the other two comment so GitHub lets me assign it to them too.

@stgraber (Member, Author) commented Sep 23, 2019

Excellent. For this one, I'd expect roughly the following commits:

  • api: Add container_disk_ceph API extension
    • Add that string to shared/version/api.go
    • Add a matching entry to doc/api-extensions.md
  • lxd: Add support for CEPH RBD backed disks
    • Add the new options to lxd/device/disk.go
    • Follow and extend the logic currently used to handle block device mounts, combining it with the functions in storage_ceph.go and storage_ceph_utils.go to map and mount the required RBD volume.
  • lxd: Add support for CEPH FS backed disks
    • Add the new options to lxd/device/disk.go
    • Follow and extend the logic currently used to handle bind mounts, combining it with the functions in storage_cephfs.go to mount the required FS volume.
  • doc: Add support for CEPH backed disks
    • Add the new config options and extend the definition of source (probably needs its own separate paragraph).
  • tests: Add test for CEPH backed disks
    • Extend test/suites/container_devices_disk.sh to test the new configuration options (a sketch follows this list). You'll want to make this conditional on LXD_CEPH_CLUSTER: if a cluster is set, directly allocate a small 10MB RBD volume and pass it to a container.
    • The same goes for CEPH FS, but this time checking LXD_CEPH_CEPHFS and creating a directory on that CEPH FS instance.
    • The test should validate that this works both at boot time and when hot plugging the device into the container.
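
For the RBD part, something roughly like this (a rough sketch only; it assumes the ceph:<pool>/<volume> source syntax proposed below, and the pool, volume, and container names are placeholders):

```sh
if [ -n "${LXD_CEPH_CLUSTER:-}" ]; then
  # Allocate a small 10MB RBD volume outside of LXD ("lxdtest" pool is a placeholder).
  rbd create --cluster "${LXD_CEPH_CLUSTER}" --size 10 lxdtest/disk1

  lxc launch testimage ceph-disk

  # Hot plug the volume into the running container.
  lxc config device add ceph-disk rbd1 disk \
    source=ceph:lxdtest/disk1 \
    ceph.cluster_name="${LXD_CEPH_CLUSTER}" \
    path=/mnt/rbd
  lxc exec ceph-disk -- stat /mnt/rbd

  # Check the boot-time path as well.
  lxc restart -f ceph-disk
  lxc exec ceph-disk -- stat /mnt/rbd

  # Clean up.
  lxc config device remove ceph-disk rbd1
  lxc delete -f ceph-disk
  rbd rm --cluster "${LXD_CEPH_CLUSTER}" lxdtest/disk1
fi
```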

I suspect we'd want the options to look like:

  • ceph.user_name (defaults to admin)
  • ceph.cluster_name (defaults to ceph)
  • source used to point to ceph:<pool>/<volume> (RBD) or cephfs:<pool>/<path> (FS)
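
For example (hypothetical invocations; pool, volume, and container names are placeholders):

```sh
# RBD volume "myvol" from pool "mypool", mounted at /mnt/rbd in container c1:
lxc config device add c1 rbd1 disk source=ceph:mypool/myvol path=/mnt/rbd

# Directory "some/path" from CEPH FS pool "cephfs", mounted at /mnt/fs:
lxc config device add c1 fs1 disk source=cephfs:cephfs/some/path path=/mnt/fs
```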

@tomponline and I should be good contacts for this one.

@stgraber (Member, Author) commented Sep 23, 2019

@tomponline how does that sound to you? I'm not super fond of having to split disk through disktype, but it'd be consistent with nic, and it feels to me like adding two more device types for this would be even more confusing.

In theory we could do some clever parsing of source to avoid needing a disktype entirely, but distinguishing between ceph-rbd and ceph-fs then becomes a bit tricky.

@tomponline (Member) commented Sep 24, 2019

@stgraber could we do something similar to LXC, where we prefix a protocol to the source, e.g. ceph:<pool>/<volume> and cephfs:<pool>/<volume>? With nictype, many of the available options differ from one type to another, but with disk, wouldn't the majority of options still be available?
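
That way the device code could just branch on the prefix; a minimal standalone sketch (not actual LXD code, all names made up):

```go
package main

import (
	"fmt"
	"strings"
)

// diskBackend distinguishes the proposed source formats. This is a
// standalone sketch, not code from lxd/device/disk.go.
func diskBackend(source string) (backend string, pool string, volume string, err error) {
	switch {
	case strings.HasPrefix(source, "ceph:"):
		backend = "ceph"
	case strings.HasPrefix(source, "cephfs:"):
		backend = "cephfs"
	default:
		// Plain paths keep their current meaning (bind mount / block device).
		return "", "", source, nil
	}

	fields := strings.SplitN(strings.TrimPrefix(source, backend+":"), "/", 2)
	if len(fields) != 2 {
		return "", "", "", fmt.Errorf("invalid %s source %q, expected <pool>/<volume>", backend, source)
	}

	return backend, fields[0], fields[1], nil
}

func main() {
	fmt.Println(diskBackend("ceph:mypool/myvol"))
	fmt.Println(diskBackend("cephfs:cephfs/some/path"))
}
```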

@stgraber (Member, Author) commented Sep 24, 2019

Yeah, we could do that. I would expect most options to apply, though size and limits may not in some cases.

@stgraber (Member, Author) commented Sep 24, 2019

Updated the spec above to eliminate disktype and use a prefix in source instead.

@anusha-paul commented Sep 27, 2019

Hi, could you add me to this issue?

@stgraber (Member, Author) commented Sep 27, 2019

Done

@stgraber stgraber modified the milestones: later, soon Oct 1, 2019