remote filesystem snapshotter #2943
Thanks @Random-Liu, I'm adding some comments on that document. Also, I think there were some issues that @dmcgowan wanted to track on this issue as well (perhaps the order refactoring).
@Random-Liu @dmcgowan @lukasheinrich would you and other folks like to have a call about this at some point? This is very similar to something I've been working on for quite some time, so it would be nice to formalize the proposal to the point it can be agreed upon and consolidate efforts.
Absolutely, I'm happy to have a call.
@ehotinger @lukasheinrich I would like to hop on that call as well.
@ehotinger @lukasheinrich I would love to join as well.
@Random-Liu would you be able to join one as well?
@lukasheinrich Sorry for the late reply. Yes, I would like to join. And I had a discussion with @dmcgowan last week. He summarized that discussion in issue #2968. :)
I'd like to join too if possible.
I'd be interested in joining a call as well.
@afortiorama @samuelkarp @Random-Liu @siscia @ehotinger should we try to schedule something (Doodle?)? I can post a public link here.
@dmcgowan should be there as well if possible.
This is exactly what we have to do for Linux Containers on Windows support. If we have a call, please add @jterry75 and @jhowardmsft to it.
I created this Doodle for the next 3 weeks (replies will not be public, in case you're not comfortable sharing availability). Not sure what the TZ distribution is among the participants: https://doodle.com/poll/az6hcgymdqqka38h. If you leave your email as a comment, we can follow up that way as well.
@lukasheinrich I would love to join the call as well.
Hi all, March 20 / 21 seems to work for most people. Pinging @Random-Liu @dmcgowan to see if that would work for them.
Hi @lukasheinrich, I have been working on a similar problem: adding cooperative pull support among multiple hosts. The proposed idea includes a remote snapshotter (like CVMFS) and a distributed key-value store for metadata. Currently, metadata is backed by a local BoltDB. The goal is to create a metadata plugin as well, one that can be backed by a distributed key-value store (like etcd) and is accessible by all hosts. When the unpacked tars are stored in CVMFS, the information/mapping about those tars can be stored in the metadata store. When the image is pulled on a different node, the digest/manifest SHA, etc. is first checked against the metadata to decide whether to pull the image again or not. I have created a doc with more details: https://docs.google.com/document/d/10WHybp5bv9Pl9sxhy-n83lvFdovmQ-Q_10uBmiZTEcs/edit?usp=sharing
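The check-metadata-before-pull flow described above can be sketched roughly as follows. All names here (`metadataStore`, `resolveLayer`) are hypothetical, and an in-memory map stands in for the etcd-backed metadata plugin; the actual design lives in the linked doc:

```go
package main

import "fmt"

// metadataStore is a stand-in for the shared metadata: it maps a layer
// digest to the path of its unpacked tar on the shared filesystem
// (e.g. CVMFS). In the real proposal this would be a distributed
// key-value store such as etcd, visible to all hosts.
type metadataStore map[string]string

// resolveLayer consults the shared metadata before deciding to pull:
// if another host has already unpacked this digest, its snapshot is
// reused instead of fetching the layer again.
func resolveLayer(store metadataStore, digest string) (path string, needPull bool) {
	if p, ok := store[digest]; ok {
		return p, false // reuse the existing unpacked layer
	}
	return "", true // not present anywhere: pull, unpack, then record it
}

func main() {
	store := metadataStore{
		"sha256:aaaa": "/cvmfs/unpacked/sha256:aaaa",
	}

	// Digest already unpacked by another node: no pull needed.
	p, pull := resolveLayer(store, "sha256:aaaa")
	fmt.Println(p, pull)

	// Unknown digest: this node must pull and then update the metadata.
	_, pull = resolveLayer(store, "sha256:bbbb")
	fmt.Println(pull)
}
```

The key design point is that the digest check happens before any registry traffic, so the common case on a warm cluster costs one metadata lookup instead of a layer download.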
We would also like this for https://github.com/google/crfs
Currently we are using the CVMFS graph driver plugin for Docker, but it would be very nice indeed to have upstream support that enables image distribution via a caching and deduplicating filesystem that pulls content on demand.
Across a container's lifecycle, pulling the image is one of the biggest performance bottlenecks in container startup. One study shows that pulling accounts for 76% of container startup time ([FAST '16](https://www.usenix.org/node/194431)). A remote snapshotter is one solution to this issue, and it has been discussed in several issue threads (containerd#2943, etc.). This implementation is partially based on that discussion but differs slightly, to make it work with the metadata snapshotter, which binds each snapshot to a namespace. Signed-off-by: Kohei Tokunaga <kouhei.tokunaga.uy@hco.ntt.co.jp>
This is an issue to track / follow up on a call re: file-level image distribution through remote filesystems.
Slides for CERN use-case shown during meeting: https://docs.google.com/presentation/d/1DJlRV9a445567EyRa265uemWv5zoDQ4o1CK-ZszpFLE/edit?usp=sharing
The goal is to support exploiting the existence of unpacked layers on remote filesystems (FUSE mounted, possibly read-only) to reduce the amount of data transferred during image pull. A candidate filesystem could be CVMFS (CERN VM Filesystem: https://github.com/cvmfs/cvmfs)
The current approach in containerd has an ordering where layer content is first fetched from the registry and then handed to the snapshotter to unpack, which may be reversed such that the snapshotter is consulted first and content is fetched only when the snapshot cannot be served from the remote filesystem.
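A minimal sketch of that reversed ordering, under stated assumptions: the `snapshotter` interface, `remoteSnapshotter`, and `unpackLayer` below are illustrative, not containerd's actual API, and `errAlreadyExists` merely stands in for the kind of "already exists" signal a remote snapshotter could use to tell the puller to skip the fetch:

```go
package main

import (
	"errors"
	"fmt"
)

// errAlreadyExists is the hypothetical signal a remote snapshotter
// returns from Prepare when the layer is already reachable, e.g. via a
// FUSE-mounted read-only filesystem such as CVMFS.
var errAlreadyExists = errors.New("already exists")

type snapshotter interface {
	// Prepare is asked about the layer *before* any content is fetched.
	Prepare(chainID string) error
}

// remoteSnapshotter knows which chain IDs the remote filesystem can serve.
type remoteSnapshotter struct{ remote map[string]bool }

func (s *remoteSnapshotter) Prepare(chainID string) error {
	if s.remote[chainID] {
		return errAlreadyExists // layer is available remotely: skip the fetch
	}
	return nil // caller must fetch and unpack as usual
}

// unpackLayer implements the reversed ordering: consult the snapshotter
// first, and only download the layer if it cannot be served remotely.
func unpackLayer(s snapshotter, chainID string) string {
	if err := s.Prepare(chainID); errors.Is(err, errAlreadyExists) {
		return "mounted from remote filesystem"
	}
	return "fetched from registry and unpacked"
}

func main() {
	s := &remoteSnapshotter{remote: map[string]bool{"sha256:cafe": true}}
	fmt.Println(unpackLayer(s, "sha256:cafe"))
	fmt.Println(unpackLayer(s, "sha256:beef"))
}
```

The important property is that the fall-through path is unchanged: a snapshotter with no remote backing simply never returns the "already exists" signal, and pulls proceed exactly as today.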
Open Question:
Related Issues and PRs:
#2267
#2697
#2391
#2467
Tagging the people present in the discussion: @dmcgowan @Random-Liu @rochaporto @jblomer @estesp @fuweid @crosbymichael @mikebrow @clelange @afortiorama