
Docker storage driver support #2305

Closed
henrylawson opened this Issue Aug 23, 2016 · 18 comments


henrylawson commented Aug 23, 2016

Running Docker inside LXC on an Ubuntu Xenial image results in vfs being chosen as the default storage driver. Because vfs does no copy-on-write and instead stores a full copy of every layer, it uses significant disk space and is slow.

  1. Is it possible to configure/select another driver? From attempting to configure other drivers I have found that:
    • overlay/overlay2 is not possible if the host is zfs (as documented by Docker)
    • aufs does not seem to be available in /proc/filesystems, a prerequisite per the Docker docs (a quick check is sketched after the link below)
    • zfs-on-zfs nesting also appears not to be possible
  2. Could the steps to get a working driver be shared?

https://docs.docker.com/engine/userguide/storagedriver/selectadriver/
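
For reference, a quick way to check these points from inside the container (standard commands, not part of the original report):

grep -E 'aufs|overlay|btrfs|zfs' /proc/filesystems   # union/CoW filesystems the kernel exposes
docker info | grep 'Storage Driver'                  # the driver Docker actually picked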

Required information

Ubuntu 16.04.1 LTS
lxd --version
2.0.3
lxc info:
  driver: lxc
  driverversion: 2.0.3
  kernel: Linux
  kernelarchitecture: x86_64
  kernelversion: 4.4.0-34-generic
  server: lxd
  serverpid: 2262
  serverversion: 2.0.3
  storage: zfs
  storageversion: "5"

Issue description

Docker inside an LXD container on a ZFS-backed host falls back to the vfs storage driver, and no faster driver appears to be selectable.

Steps to reproduce

lxc launch ubuntu-daily:16.04 docker -p default -p docker
lxc exec docker -- apt update
lxc exec docker -- apt dist-upgrade -y
lxc exec docker -- apt install docker.io -y
lxc exec docker -- docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.11.2
Storage Driver: vfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: null host bridge
Kernel Version: 4.4.0-34-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 4 GiB
Name: docker
ID: LFOL:35Y6:DCVV:5P5P:GQKW:MBV6:LOYV:3XWS:LC67:R46I:2JVJ:E6GV
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

henrylawson changed the title from "Docker Storage Driver Support" to "Docker storage driver support" on Aug 23, 2016


stgraber commented Aug 26, 2016

So you're right that on zfs this is a bit of a problem: overlay doesn't work on top of it, and zfs nesting isn't possible. aufs may be fine, though it may also have the same problem as overlay.

To have LXD load the aufs driver for you, you can do:

lxc profile edit docker

And then add the aufs module to the linux.kernel_modules line next to the overlay one.
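
A sketch of what the relevant profile fragment looks like after that edit (other keys of the stock docker profile omitted; the exact module list on your system may differ):

config:
  linux.kernel_modules: overlay,aufs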

Other than that, there's very little LXD itself can do about it. We can't list out-of-tree drivers like aufs in a profile, as that'd break on all distros that don't ship it. And as much as we'd love to have zfs nesting, there's no such thing right now (nor is it being actively worked on by the zfsonlinux folks).

I'm going to close this issue as there's nothing actionable for us to do.

stgraber closed this on Aug 26, 2016


henrylawson commented Oct 4, 2016

To share back my experience:

I ended up setting up my LXD host to use BTRFS rather than ZFS. Using BTRFS means that I can't set disk limits per LXD container, but for my use case that is OK.

With BTRFS on the host, the LXD container is on BTRFS too, and as such the Docker container was able to run with the btrfs storage driver.

It is also worth calling out that I needed to set "user_subvol_rm_allowed" as a mount option on my BTRFS mount on my host.
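
For example, the host's /etc/fstab entry could look like this (a sketch; the UUID and mountpoint are placeholders, not values from this setup):

UUID=<btrfs-pool-uuid>  /var/lib/lxd  btrfs  defaults,user_subvol_rm_allowed  0  0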

Some good discussion in the blog posts and comments below:
https://www.stgraber.org/2016/04/14/lxd-2-0-lxd-in-lxd-812/
https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/


stgraber commented Oct 4, 2016

Quotas actually work with btrfs, they're just not visible in "df" output as they would be for zfs.
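
For example, a per-container disk quota on btrfs can be set through the root disk device (a sketch, reusing the container name from the reproduction steps above):

lxc config device set docker root size 20GB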


RyanEwen commented Jan 6, 2018

I am using Proxmox which uses LXC for containers, and noticed that Docker is extremely slow inside my containers.

I also noticed that Proxmox uses raw QEMU image files to store LXC filesystems.

Do you think it would be possible to format the image using BTRFS somehow, without changing the host? I've been searching for info on this for hours. I've also posted in the Proxmox forum, but this seems semi-related to what you're doing here.


stgraber commented Jan 8, 2018

If Proxmox uses raw QEMU disks for LXC containers, then it should be possible, so long as you can tell Proxmox to format that raw disk using btrfs rather than ext4.
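
An untested sketch of that idea, assuming you can locate the container's raw image (the path is hypothetical, and reformatting destroys the image's contents):

mkfs.btrfs -f /var/lib/vz/images/101/vm-101-disk-1.raw   # reformat the raw image as btrfs instead of ext4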


RyanEwen commented Jan 10, 2018

That's what I am thinking. I guess I just need to figure out how to specify the filesystem type. Or see if I can convert an existing image somehow.


michacassola commented May 22, 2018

@stgraber I would use btrfs for LXD, but it is a bit slower than ZFS and, most of all, its quotas are escapable.
So I would like to stick to ZFS, but Docker should run with a fast storage driver inside the containers, which seems to be impossible with ZFS.

What is the best way to proceed?
Can we install a fast storage driver with ZFS backend?
Should I use btrfs and hope the quotas get fixed?

Thanks in advance!


stgraber commented May 22, 2018

@michacassola One thought would be to set up two storage pools in LXD, a ZFS one for your containers and a btrfs one for Docker, then create a volume on the btrfs one and attach that to /var/lib/docker or wherever docker writes its stuff?

That'd have the rest of the container be under ZFS' control and quotas and only have Docker be on btrfs, using a separate, possibly smaller storage pool.
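
A sketch of that first approach using LXD's storage API (pool, volume, and container names are examples):

lxc storage create docker-pool btrfs                # small btrfs pool just for Docker
lxc storage volume create docker-pool docker-vol    # custom volume on that pool
lxc storage volume attach docker-pool docker-vol my-container docker /var/lib/docker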

Another alternative, but this time mostly outside of LXD itself would be to use a ZFS volume, format that as btrfs and then have it mounted on /var/lib/docker inside the container.

Something kinda like (untested):

zfs create -V 20GB my-pool/docker/blah   # create a 20GB ZFS block volume (zvol)
mkfs.btrfs /dev/zvol/my-pool/docker/blah   # format the zvol as btrfs
lxc config device add my-container docker disk source=/dev/zvol/my-pool/docker/blah path=/var/lib/docker   # hand it to the container at /var/lib/docker

michacassola commented May 22, 2018

What about LVM as a backend? Would there be a more straightforward solution to have:

  • good performance for both LXD and Docker inside LXD (without having to fiddle with volumes)
  • quotas that are enforced

?

PS: Is there updated info for https://lxd.readthedocs.io/en/stable-2.0/storage-backends/#feature-comparison ?


stgraber commented May 22, 2018

Nope, because LVM is global to the system, so it can't/shouldn't be exposed to Docker inside the container.
If using LVM, the Docker container would effectively be backed by ext4 or xfs, restricting your options to aufs/overlay2.
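
For reference, on a reasonably recent Docker the driver can then be pinned through the standard daemon config at /etc/docker/daemon.json (a sketch; the overlay module still has to be loaded, e.g. via linux.kernel_modules as above):

{
  "storage-driver": "overlay2"
}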


michacassola commented May 22, 2018

But last I checked, ext4 and xfs performance should be great for Docker with overlay2 (or anything else)?
So programs and Docker containers should run fine/fast inside the Linux container, right?

But the quota question remains. Last time I fiddled with LVM I could set a per-volume size. So why does LXD not support a fixed size for a container on an LVM pool?


stgraber commented May 22, 2018

That's because the container's image is the LV; the container is then a snapshot of that LV, making it inherit its size. You should be able to grow it after the fact, though. I haven't played with our LVM storage in a while, @brauner may remember better :)
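
Growing it by hand would look roughly like this (a sketch; the VG/LV names are hypothetical, and the container should be stopped first):

lvextend -L 50G /dev/lxd/containers_mycontainer   # grow the logical volume
resize2fs /dev/lxd/containers_mycontainer         # then grow the ext4 filesystem on it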


michacassola commented May 22, 2018

Hm, please let me know if I am completely off now:

If each container got its own LV, it would be easy to add an option that sets a specific size for that LV at creation time. That is what I meant before.

I certainly would not mind the bit of extra storage needed to separate the container LV from the image LV. I would also be able to live with the extra time needed to clone the LV instead of just snapshotting it.
That way we would get quotas with LVM, which outweighs the downsides in my opinion.


michacassola commented May 22, 2018

@brauner mentioned something here: #3285

by using the new volatile key we can set volatile.apply_quota: 10GB and then on next container start make sure that the quota (or resize in the case of lvm) is applied.

So I guess I will have to set the standard size to the minimum:
sudo lxc storage set lvmpool volume.size 10GB
And once the Container is created I do: sudo lxc config set lvmcontainer volatile.apply_quota 50GB
And restart: sudo lxc restart lvmcontainer
Then my container's LV should have the new size of 50GB.

Please confirm.


brauner commented May 22, 2018

@michacassola, please be aware that LVM and filesystems on top of it are fickle little beasts. It might work fine, it might not; resizing filesystems is not a very reliable thing. But in theory it should work.


michacassola commented May 22, 2018

@brauner Thanks!
What about my suggestion to separate the image and container LVs? Then we could set the LV size while creating the container.


michacassola commented May 23, 2018

Also, why can't we get AUFS working on a ZFS backend again? I tried adding the kernel module to the container, but it didn't help.


michacassola commented May 23, 2018

I tried using LVM, but Docker still uses vfs as the storage driver. How come??
