
zfs-utils based zfs graph driver #9411

Merged

tiborvass merged 9 commits into moby:master from Mic92:features/zfs-rebased on May 11, 2015

Conversation

Contributor

Mic92 commented Nov 30, 2014

  • Uses go-zfs for robust command-line handling
  • Implements the complete driver interface, including the differ interface
  • Should also work on non-Linux platforms

current status: works on my machine(tm)
I am looking for testers with different zfs setups to find more bugs.

Usage (a little bit more verbose for testers):

  • The following lines can be tested without messing up your current docker setup
  • Replace zroot/docker with your preferred zfs dataset
$ git clone https://github.com/Mic92/docker
$ cd docker
$ git checkout features/zfs-rebased
$ export AUTO_GOPATH=1
$ hack/make.sh dynbinary
root> zfs create -o com.sun:auto-snapshot=false -o dedup=off -o mountpoint=/var/lib/docker-zfs zroot/docker
root> bundles/1.4.1-dev/dynbinary/docker -p /var/run/docker-zfs.pid -H tcp://127.0.0.1:2376 -s zfs -g /var/lib/docker-zfs -d -D
# spawn another shell, go to your favorite docker project, and run:
$ export DOCKER_HOST=tcp://127.0.0.1:2376
$ bundles/1.4.1-dev/dynbinary/docker --pidfile=/var/run/docker-zfs.pid build .

You can also override the dataset used by passing --storage-opt zfs.fsname=zroot/docker when starting the docker daemon.

Related to moby/moby#7901

PKGBUILD for archlinux: https://aur.archlinux.org/packages/docker-zfs/

ZFS Quickstart (from a loopback device)

  1. Install zfs:
    on Ubuntu:
# apt-add-repository --yes ppa:zfs-native/stable
# apt-get update
# apt-get install debootstrap spl-dkms zfs-dkms ubuntu-zfs

on Arch:
install zfs-git from AUR

  2. Create your zpool:
# truncate -s 2G /tmp/zfs.img
# zpool create dockertest /tmp/zfs.img
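
For Go-based test harnesses, the same loopback setup can be scripted through go-zfs. A rough sketch, assuming the library's CreateZpool helper; this is not part of the PR itself:

package main

import (
    "os"

    zfs "github.com/mistifyio/go-zfs"
)

func main() {
    // Back the test pool with a sparse 2G file, like `truncate -s 2G`.
    f, err := os.Create("/tmp/zfs.img")
    if err != nil {
        panic(err)
    }
    if err := f.Truncate(2 << 30); err != nil {
        panic(err)
    }
    f.Close()

    // Equivalent of `zpool create dockertest /tmp/zfs.img` (needs root).
    if _, err := zfs.CreateZpool("dockertest", nil, "/tmp/zfs.img"); err != nil {
        panic(err)
    }
}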

Contributor

discordianfish commented Dec 6, 2014

@Mic92 Does this PR replace #7901?

Contributor

Mic92 commented Dec 7, 2014

Novex commented Dec 8, 2014

I had a play today and found a problem when building images containing hardlinks.

When doing a zfs diff, sometimes a fourth field exists, representing the change in a file's link count. This breaks the diff parser, which expects only three fields. The build doesn't complete and the following errors are logged:

client:

2014/12/08 17:59:59 Failed to parse line 2 of zfs diff: Mismatching number of fields: expect 3, got: 4, got: '[M F /tank/docker-compressed/zfs/graph/c2830746bf301f383fced7e74d28e8440144af13d4eb7bb4b27045960a5af181/tmp/hardlink (+1)]'

server:

DEBU[3946] [zfs] zfs diff -FH tank/docker-compressed/c2830746bf301f383fced7e74d28e8440144af13d4eb7bb4b27045960a5af181-init@558576856 tank/docker-compressed/c2830746bf301f383fced7e74d28e8440144af13d4eb7bb4b27045960a5af181
Failed to parse line 2 of zfs diff: Mismatching number of fields: expect 3, got: 4, got: '[M F /tank/docker-compressed/zfs/graph/c2830746bf301f383fced7e74d28e8440144af13d4eb7bb4b27045960a5af181/tmp/hardlink (+1)]'
INFO[3946] -job build() = ERR (1)

This dockerfile will reproduce the error:

FROM busybox
RUN touch /tmp/hardlink
RUN ln /tmp/hardlink /tmp/hardlinked
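
(For context: a `zfs diff -FH` line is tab-separated and may carry an optional fourth field, the new path for a rename or a link-count delta such as (+1). A tolerant parser might look like the standalone sketch below; it is not the actual go-zfs code, and the type and field names are illustrative:)

package main

import (
    "fmt"
    "strings"
)

// diffChange mirrors one line of `zfs diff -FH` output.
type diffChange struct {
    Change string // "-", "+", "M" or "R"
    Type   string // "F" file, "/" directory, "@" symlink, ...
    Path   string
    Extra  string // optional: rename target, or link-count delta like "(+1)"
}

// parseDiffLine accepts both the common three-field form and the
// four-field form that broke the original parser.
func parseDiffLine(line string) (*diffChange, error) {
    fields := strings.Split(line, "\t") // -H makes the output tab-separated
    switch len(fields) {
    case 3:
        return &diffChange{Change: fields[0], Type: fields[1], Path: fields[2]}, nil
    case 4:
        return &diffChange{Change: fields[0], Type: fields[1], Path: fields[2], Extra: fields[3]}, nil
    default:
        return nil, fmt.Errorf("expected 3 or 4 fields, got %d: %q", len(fields), line)
    }
}

func main() {
    c, _ := parseDiffLine("M\tF\t/tmp/hardlink\t(+1)")
    fmt.Printf("%+v\n", *c)
}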

Contributor

Mic92 commented Dec 8, 2014

@Novex confirmed. I will make a fix for go-zfs soon.

Contributor

Mic92 commented Dec 8, 2014

@Novex mistifyio/go-zfs#26, waiting for the merge... Fixed now!

Novex commented Dec 9, 2014

Nice! I can verify that images with hardlinks build properly now.

I'm guessing zfs datasets, images, and containers should persist across restarts? I'm getting weird behaviour after I restart docker: the zfs datasets and docker containers still exist, but the docker images list is empty. If a container is started it doesn't work properly, presumably due to the missing images.

After restarting, if you docker pull (or do anything that causes one, e.g. build or run), then all existing zfs datasets are destroyed (!) and replaced by the new one created by the docker pull. This includes the datasets for any containers.

It's kind of weird to explain. I've gone through each step and checked the state of docker and zfs; the results and the docker debug log are at https://gist.github.com/Novex/aa0bf63e240564d893fb

I'm testing on Ubuntu 14.10 (Linux ubuntu 3.16.0-23-generic #31-Ubuntu SMP Tue Oct 21 17:56:17 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux)

Contributor

Mic92 commented Dec 9, 2014

The main issue here is that the images disappear after a restart. The current implementation uses zfs destroy -R to destroy all dependent datasets when cleaning up after an aborted build, because dataset clones cannot exist without their origins. I hope I can reproduce the issue on my machine.
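
(In go-zfs terms, that cleanup path is roughly the following sketch, assuming the mistifyio/go-zfs API; it is not the actual driver code:)

package zfsdriver

import zfs "github.com/mistifyio/go-zfs"

// cleanupAborted destroys an aborted layer together with every dependent
// clone, the equivalent of `zfs destroy -R`: a clone cannot outlive its
// origin snapshot, so the dependents have to go too.
func cleanupAborted(name string) error {
    ds, err := zfs.GetDataset(name) // e.g. "zroot/docker/<layer-id>"
    if err != nil {
        return err
    }
    return ds.Destroy(zfs.DestroyRecursiveClones)
}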

Contributor

Mic92 commented Dec 9, 2014

OK, it currently looks like an issue in the graph database to me. I will have a closer look at this:

2014/12/09 13:11:07 Warning: couldn't load e72ac664f4f0c6a061ac4ef332557a70d69b0c624b6add35f1c181ff7fff2287 from busybox/latest: no such id: e72ac664f4f0c6a061ac4ef332557a70d69b0c624b6add35f1c181ff7fff2287

Contributor

Mic92 commented Dec 10, 2014

@Novex fixed! This was a really stupid, simple bug. Thanks for your detailed bug report.

Novex commented Dec 10, 2014

@Mic92 thanks for fixing it so quick! All images & containers happily came back!

I have another one for you, back to hardlinks. I didn't actually notice earlier, but it looks like hardlinks aren't persisted across build commits.

Using the following dockerfile:

FROM busybox
RUN touch /tmp/hardlink && ln /tmp/hardlink /tmp/hardlinkedInSameCommit && ls -l /tmp
RUN ln /tmp/hardlink /tmp/hardlinkedInNewCommit && ls -l /tmp
RUN ls -l /tmp

Using normal storage the output is:

Sending build context to Docker daemon 2.048 kB
Sending build context to Docker daemon
Step 0 : FROM busybox
busybox:latest: The image you are pulling has been verified
511136ea3c5a: Pull complete
df7546f9f060: Pull complete
e433a6c5b276: Pull complete
e72ac664f4f0: Pull complete
Status: Downloaded newer image for busybox:latest
 ---> e72ac664f4f0
Step 1 : RUN touch /tmp/hardlink && ln /tmp/hardlink /tmp/hardlinkedInSameCommit && ls -l /tmp
 ---> Running in 39f133a9e9f0
total 4
-rw-r--r--    2 root     root             0 Dec 10 15:12 hardlink
-rw-r--r--    2 root     root             0 Dec 10 15:12 hardlinkedInSameCommit
drwxr-xr-x    2 root     root          4096 May 22  2014 ldconfig
 ---> 2e9cf89185f0
Removing intermediate container 39f133a9e9f0
Step 2 : RUN ln /tmp/hardlink /tmp/hardlinkedInNewCommit && ls -l /tmp
 ---> Running in c84478e84e85
total 4
-rw-r--r--    3 root     root             0 Dec 10 15:12 hardlink
-rw-r--r--    3 root     root             0 Dec 10 15:12 hardlinkedInNewCommit
-rw-r--r--    3 root     root             0 Dec 10 15:12 hardlinkedInSameCommit
drwxr-xr-x    2 root     root          4096 May 22  2014 ldconfig
 ---> 1e5b36aaf9db
Removing intermediate container c84478e84e85
Step 3 : RUN ls -l /tmp
 ---> Running in b56d47ad5c26
total 4
-rw-r--r--    2 root     root             0 Dec 10 15:12 hardlink
-rw-r--r--    1 root     root             0 Dec 10 15:12 hardlinkedInNewCommit
-rw-r--r--    2 root     root             0 Dec 10 15:12 hardlinkedInSameCommit
drwxr-xr-x    2 root     root          4096 May 22  2014 ldconfig
 ---> a5721dfee46b
Removing intermediate container b56d47ad5c26
Step 4 : CMD /bin/sh
 ---> Running in ce6c45677769
 ---> e259ae7cd218
Removing intermediate container ce6c45677769
Successfully built e259ae7cd218

However, using zfs storage, the hardlinks are created but do not survive across commits, and the final image has no hardlinks.

Sending build context to Docker daemon 2.048 kB
Sending build context to Docker daemon
Step 0 : FROM busybox
 ---> e72ac664f4f0
Step 1 : RUN touch /tmp/hardlink && ln /tmp/hardlink /tmp/hardlinkedInSameCommit && ls -l /tmp
 ---> Running in b94891421e85
total 3
-rw-r--r--    2 root     root             0 Dec 10 15:20 hardlink
-rw-r--r--    2 root     root             0 Dec 10 15:20 hardlinkedInSameCommit
drwxr-xr-x    2 root     root             2 May 22  2014 ldconfig
 ---> 84605a715fb7
Removing intermediate container b94891421e85
Step 2 : RUN ln /tmp/hardlink /tmp/hardlinkedInNewCommit && ls -l /tmp
 ---> Running in 6c911dd5e68c
total 3
-rw-r--r--    2 root     root             0 Dec 10 15:20 hardlink
-rw-r--r--    2 root     root             0 Dec 10 15:20 hardlinkedInNewCommit
drwxr-xr-x    2 root     root             2 May 22  2014 ldconfig
 ---> 5ef8cfd11d73
Removing intermediate container 6c911dd5e68c
Step 3 : RUN ls -l /tmp
 ---> Running in c8308a017ad4
total 2
-rw-r--r--    1 root     root             0 Dec 10 15:20 hardlink
drwxr-xr-x    2 root     root             2 May 22  2014 ldconfig
 ---> 5797cc24cd3d
Removing intermediate container c8308a017ad4
Successfully built 5797cc24cd3d

docker debug log for the build output.

alexanderhaensch commented Dec 10, 2014

Works for me on a 24-disk ZFS pool on Gentoo, thanks for this!
The only thing is that zfs list is now heavily polluted by all the different images and snapshots.
Is it maybe possible to hide the children of the docker filesystem?

Contributor

Mic92 commented Dec 10, 2014

@Novex The hardlink issue is caused by a three-year-old bug in zfs diff: https://www.illumos.org/issues/1691
This will need some further work. Working on a solution...
@alexanderhaensch good to see docker scaling! I am not aware of a property for hiding zfs datasets. My current workaround is zfs list | grep -v zroot/docker. https://www.illumos.org/issues/2745 would be the proper solution.

Contributor

unclejack commented Dec 11, 2014

@Mic92 Generally speaking, I don't like the idea of having to go through the process of sending a PR against go-zfs to fix a problem in this zfs graph driver.

Contributor

Mic92 commented Dec 11, 2014

@unclejack so what's your proposal for go-zfs? Ripping out the functionality needed by docker? Forking the project into docker/go-zfs? Both projects are licensed under the same Apache License.

Contributor

unclejack commented Dec 18, 2014

@Mic92 I've thought about this for a bit. I think the best thing for us to do would be to fork go-zfs, vendor from the fork and contribute upstream. That way we could solve all issues.

On the one hand, pulling all of this code into Docker would mean that we wouldn't actually contribute our code upstream to any Go ZFS CLI wrapper, and nobody else would be able to use it. The other extreme would be bundling straight from upstream, and we'd have problems getting what we need right away when we need to fix a bug.

@crosbymichael @erikh @icecrime @jfrazelle @LK4D4 @tiborvass What do you think?

Member

tianon commented Dec 18, 2014

Wouldn't forking it right now be a little premature? We haven't given them a chance to show how responsive they are to issues yet, and forking later (if/when there are issues we need solved faster) won't really be functionally much different from forking now. It'd also keep our packagers happier with us.

Contributor

Mic92 commented Dec 18, 2014

I made several pull requests for features needed by docker-zfs, and they were all merged in less than a day: https://github.com/mistifyio/go-zfs/pulls?q=is%3Apr+is%3Aclosed+author%3AMic92

Contributor

discordianfish commented Dec 19, 2014

Why not just vendor it without forking, i.e. still use the upstream version?

Collaborator

tiborvass commented Dec 23, 2014

@unclejack @Mic92 I vote for vendoring upstream, and sending PRs to upstream. No need to fork now.

@icecrime referenced this pull request Jan 6, 2015: Implement Docker on ZFS #7901 (closed)

bakins commented Jan 6, 2015

I hadn't been following this PR, but I'm on the team that writes/maintains go-zfs. Generally, as long as we don't break core functionality, we are open to PRs and/or help with needed features.

@unclejack unclejack self-assigned this Jan 13, 2015

Contributor

crosbymichael commented Jan 13, 2015

Assigned to @unclejack

Just make sure we have build tags for the driver and check with @tianon on the deps.

LGTM

Member

tianon commented Jan 26, 2015

Can we also add a little something to https://github.com/docker/docker/blob/master/project/PACKAGERS.md#optional-dependencies mentioning the lower bound on versions that are acceptable for the ZFS tools?

Collaborator

tiborvass commented Feb 7, 2015

@Mic92 Needs rebase + address @tianon's concerns, thanks!

Contributor

Mic92 commented Feb 7, 2015

Changes

  • rebased against master
  • applied API changes in the driver
  • secured the PPA GPG key import
  • switched to the upstream go-zfs release
  • added packaging documentation

@tianon The driver does not use any new zfs features. Since the minimum kernel required for docker is 3.8 and zfs-utils is bound to its kernel module version, there aren't any known incompatibilities. In theory, 0.4.1 from 2009 would be the first usable zfsonlinux release, but it won't compile against newer kernels.

Member

thaJeztah commented Feb 7, 2015

I think it would be nice to have checks added to the contrib/check-config.sh script, so that users are able to check if all the requirements are in place to use ZFS.

Contributor

Mic92 commented Feb 7, 2015

@thaJeztah done

Member

thaJeztah commented Feb 7, 2015

@Mic92 that was fast, thanks!

Collaborator

tiborvass commented Feb 7, 2015

@unclejack @tianon @jfrazelle @LK4D4 please test and review :)

@tiborvass tiborvass added this to the 1.6.0 milestone Feb 7, 2015

(review context, from the check helpers in contrib/check-config.sh:)

else
wrap_bad "$1 command" 'missing'
fi
}

tianon Feb 9, 2015

Member

I really like this! 👍

I wonder if maybe we should have a check_device too (for consistency), something like:

check_device() {
    if [ -c "$1" ]; then
        wrap_good "$1" 'present'
    else
        wrap_bad "$1" 'missing'
    fi
}

(and then using it below for /dev/zfs, of course)

@GordonTheTurtle GordonTheTurtle added dco/yes and removed dco/no labels Apr 19, 2015

maci0 commented Apr 20, 2015

+1

@icecrime icecrime removed the dco/yes label Apr 23, 2015

@calavera referenced this pull request Apr 29, 2015: Filesystem Quotas #12520 (closed)

Collaborator

tiborvass commented May 4, 2015

@Mic92 sorry needs a rebase again :(

Contributor

Mic92 commented May 4, 2015

@tiborvass rebased

Collaborator

tiborvass commented May 4, 2015

Wow, thanks for your responsiveness, really appreciated! Hope to get this in sooner rather than later!

Contributor

jessfraz commented May 8, 2015

oh man, so sorry, it needs another rebase; we are so close :)

Mic92 added some commits Sep 3, 2014

Implement Docker on ZFS
Signed-off-by: Arthur Gautier <baloo@gandi.net>
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
zfs: revert to NaiveGraphDriver for the moment
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
docs: move zfs.fsname option to storage option section
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
zfs: add myself to MAINTAINERS
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
zfs: refactor error handling
thanks to @calavera

Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
integration: add variable to set storage options for testing
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
zfs: replace c for /proc/mounts parsing with go
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
zfs: improve performance by using legacy mounts
Instead of letting zfs automatically mount datasets, mount them on demand using mount(2).
This speeds up the graph driver in two ways:
- fewer zfs processes are needed to start a container
- /proc/mounts gets smaller, so the zfs userspace tools have less to read (which can
  be a significant amount of data as the number of layers grows)

This way it is also ensured that the correct mountpoint is always used.

Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
zfs: retrieve all filesystems on startup at once
The docker graph calls driver.Exists() on initialisation for each filesystem in
the graph, which results in a lot of `zfs get all` commands. To reduce this,
retrieve all descendant filesystems at startup and cache them for later checks.

Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
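
(A minimal sketch of the caching idea in that last commit, assuming the mistifyio/go-zfs API; the real driver's types and field names differ:)

package zfsdriver

import (
    "sync"

    zfs "github.com/mistifyio/go-zfs"
)

type Driver struct {
    dataset     string // e.g. "zroot/docker"
    mu          sync.Mutex
    filesystems map[string]bool // cache of existing child datasets
}

// NewDriver lists every descendant filesystem once, instead of running
// one zfs command per Exists() call during graph initialisation.
func NewDriver(dataset string) (*Driver, error) {
    d := &Driver{dataset: dataset, filesystems: make(map[string]bool)}
    children, err := zfs.Filesystems(dataset) // like `zfs list -r <dataset>`
    if err != nil {
        return nil, err
    }
    for _, fs := range children {
        d.filesystems[fs.Name] = true
    }
    return d, nil
}

func (d *Driver) Exists(id string) bool {
    d.mu.Lock()
    defer d.mu.Unlock()
    return d.filesystems[d.dataset+"/"+id]
}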

Contributor

Mic92 commented May 8, 2015

@jfrazelle done

tiborvass added a commit that referenced this pull request May 11, 2015

@tiborvass tiborvass merged commit e7568ed into moby:master May 11, 2015

3 checks passed:

docker/dco-signed: All commits signed
janky: Jenkins build Docker-PRs 7581 has succeeded
windows: Jenkins build Windows-PRs 4537 has succeeded

Collaborator

tiborvass commented May 11, 2015

LGTM

Collaborator

tiborvass commented May 11, 2015

@Mic92 thank you so very much for your awesome work and even greater patience!

@@ -444,6 +449,17 @@ Currently supported options are:
> daemon with a supported environment.
### Docker execdriver option
Currently supported options of `zfs`:

tiborvass May 11, 2015

Collaborator

@Mic92 of course, I just realized that this should be in the storage driver section, not the exec driver one...

Mic92 May 12, 2015

Contributor

Sorry, this happened during rebasing.

Collaborator

tiborvass commented May 12, 2015

@moxiegirl @SvenDowideit @fredlf my personal apologies for merging without docs review. The author was very patient. Feel free to make corrections on this PR, I'll take care of them.

Member

thaJeztah commented May 12, 2015

LOL, just noticed the docs weren't reviewed yet. Will have a look later and add notes.

@tiborvass tiborvass assigned tiborvass and unassigned unclejack May 12, 2015

Contributor

burke commented May 12, 2015

Exciting! One thought regarding a possible optimization that isn't possible on most other drivers:

If, immediately after cloning the snapshot to inherit from the parent, you take another snapshot of that brand-new dataset, called "parent", you can later use that snapshot in combination with zfs diff to implement the storage driver's Diff interface method.

(i.e. below, the last line efficiently generates a list of added/changed/deleted files between layers):

zfs snapshot parent@whatever
zfs clone parent@whatever child
zfs snapshot child@parent
(import files, run commands, whatever)
zfs diff child@parent child
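
(The same sequence against go-zfs, as a rough sketch. The Diff helper and the InodeChange fields are assumptions about the library of the time; exact signatures may differ:)

package zfsdriver

import (
    "fmt"

    zfs "github.com/mistifyio/go-zfs"
)

// diffAgainstParent follows the sequence above: clone the parent's
// snapshot, immediately snapshot the clone as "parent", and diff
// against that snapshot once the layer has been populated.
func diffAgainstParent(parent *zfs.Dataset, childName string) error {
    snap, err := parent.Snapshot("whatever", false) // zfs snapshot parent@whatever
    if err != nil {
        return err
    }
    child, err := snap.Clone(childName, nil) // zfs clone parent@whatever child
    if err != nil {
        return err
    }
    base, err := child.Snapshot("parent", false) // zfs snapshot child@parent
    if err != nil {
        return err
    }
    // ... import files, run commands, whatever ...
    changes, err := child.Diff(base.Name) // zfs diff child@parent child
    if err != nil {
        return err
    }
    for _, c := range changes {
        fmt.Println(c.Path) // added/changed/deleted paths between the layers
    }
    return nil
}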

Contributor

Mic92 commented May 12, 2015

@burke I had actually implemented this, but unfortunately zfs diff only reports changes once per inode.
This breaks horribly if someone uses hardlinks:

RUN touch test && ln test test2

because zfs diff will only report that either test or test2 was added, but not both.
The workaround would be to scan every changed directory, take the inode numbers of the files there, and compare them with the inodes reported by zfs diff. This could be implemented in the future; I don't know whether it would be faster than the current approach.
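
(A hypothetical sketch of that workaround, Linux-only via syscall.Stat_t; the names are illustrative and this is not part of the driver:)

package zfsdriver

import (
    "os"
    "path/filepath"
    "syscall"
)

// namesForInode returns every name in dir that shares the given inode,
// recovering the hardlinks that zfs diff collapses into a single entry.
func namesForInode(dir string, inode uint64) ([]string, error) {
    entries, err := os.ReadDir(dir)
    if err != nil {
        return nil, err
    }
    var names []string
    for _, e := range entries {
        fi, err := e.Info()
        if err != nil {
            return nil, err
        }
        if st, ok := fi.Sys().(*syscall.Stat_t); ok && st.Ino == inode {
            names = append(names, filepath.Join(dir, e.Name()))
        }
    }
    return names, nil
}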

@Mic92 Mic92 deleted the Mic92:features/zfs-rebased branch May 12, 2015

Contributor

Mic92 commented May 12, 2015

@discordianfish so I will make a pull request once you have gone through the whole documentation.

Collaborator

tiborvass commented May 12, 2015

@Mic92 #dontusehardlinks :P

Contributor

burke commented May 12, 2015

Ohh, interesting. That's really unfortunate behaviour. Too bad.
