
FUSE volumes #7890

Open
rsokolowski opened this issue May 7, 2015 · 77 comments

@rsokolowski (Contributor) commented May 7, 2015

I have a use-case where I would like to mount a Google Cloud Storage (GCS) bucket (or a directory within that bucket) in my container and use it as a regular FS. Currently, it seems doable using s3fs-fuse (https://github.com/s3fs-fuse/s3fs-fuse) - thanks @brendanburns. It would be great if GCS were supported as a first-class Volume in Kubernetes.
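For anyone wanting to try this today, a minimal sketch of the FUSE workaround might look like the following (the image name, bucket name and application command are placeholders, and the container has to run privileged so it can perform the FUSE mount itself):

```yaml
# Sketch only: a privileged container that mounts a GCS bucket with gcsfuse at startup.
apiVersion: v1
kind: Pod
metadata:
  name: gcsfuse-workaround
spec:
  containers:
  - name: app
    image: my-gcsfuse-image            # placeholder image with gcsfuse installed
    securityContext:
      privileged: true                 # needed today so the container can mount FUSE
    command: ["/bin/sh", "-c"]
    args:
    - |
      mkdir -p /mnt/bucket
      gcsfuse my-bucket /mnt/bucket    # mount the bucket as a regular filesystem
      exec /usr/local/bin/my-app       # placeholder application that reads/writes /mnt/bucket
```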

@zmerlynn (Member) commented Oct 19, 2015

This is also possible now through the https://github.com/GoogleCloudPlatform/gcsfuse project, it looks like.

@rboyd commented Oct 19, 2015

@zmerlynn were you able to get gcsfuse to work? I'm expecting this won't work until 1.1 release http://stackoverflow.com/questions/31124368/allow-privileged-containers-in-kubernetes-on-google-container-gke

@zmerlynn (Member) commented Oct 19, 2015

I haven't tried yet. I was merely noting that it looked like it may be an option over s3fs-fuse for GCS, which would presumably require the same privileges.

@thockin (Member) commented Oct 20, 2015

We'll want to decide how we want to handle FUSE mounts in general - there are potentially a LOT of neat things we can do, but FUSE is (historically) known to be less than 100% reliable. The simplest approach is to push it all into user space and require privileges. Perhaps there are more interesting ways to manage it?


@nickschuch commented Dec 14, 2015

I'm about halfway through coding a volume for https://github.com/s3fs-fuse/s3fs-fuse, based on the nfs implementation while also drawing some inspiration from #17221. Is this something people can see as a viable solution?

@thockin (Member) commented Dec 14, 2015

I'd like to see us shake out a solid design for FUSE-based volumes BEFORE we argue about the merit of any one volume.


@nickschuch commented Dec 14, 2015

No arguments from me :) How can I help?

@thockin (Member) commented Dec 14, 2015

ideate :)


@nickschuch commented Dec 14, 2015

My main goal was to defer the FUSE implementations to packages on the host and then mount them just like the other volumes, e.g. NFS.

Maybe we could make a higher-level volume which takes properties similar to an fstab entry / mount command? That way users are free to use their own mount implementations and we are just consuming them. That would cut down on the duplication of writing multiple volumes with the same scaffolding, as well as support gcsfuse, s3fs-fuse, Azure Files, etc. Essentially, if you can mount it, we can run it.
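Purely as a hypothetical sketch of that idea - none of these fields exist in Kubernetes, they are made up for illustration - such a volume could mirror an fstab entry:

```yaml
# Hypothetical, for illustration only: a generic "mount anything" volume.
volumes:
- name: data
  genericMount:                        # made-up volume type
    fsType: fuse.s3fs                  # whatever filesystem type the host can mount
    device: mybucket                   # the "device" column of an fstab entry
    options: ["allow_other", "use_cache=/tmp"]
```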

@nickschuch commented Dec 17, 2015

Hmm, scratch that - that was a very raw thought. I see now we pretty much have that via the "mount" package, and volumes provide a higher level on top of it.

I've now updated my goal to creating a "fuse" volume; I'm going to write some code and see what other thoughts come from there. That would also allow us to mount the other FUSE filesystems.

@pnovotnak (Contributor) commented Jan 22, 2016

I just wanted to chime in and say that this would be a huge boon when running on GCE. Right now I'm looking into storage options for the company I work for... There are a number of inferior options but this would be by far the best for our case.

@thockin (Member) commented Jan 22, 2016

@kubernetes/sig-storage We discussed FUSE a bit recently and worked out a basic model for it, but it is somewhat complicated to do correctly.

Some notes:

  • We need a FUSE daemon per volume (maybe we can find a way to flatten to per-pod, but not sure that is ALWAYS ok).
  • FUSE daemons need privileges.
  • This FUSE daemon must run in the pod’s cgroups and net namespace (chargeback), but must NOT run in the pod’s IPC or PID namespace (for security).
  • It must be killed when the pod terminates.
  • We need a way to report this container:
    • add it to pod.spec.containers?
    • pod.status.adminContainers?
      • bad - what image was run? can we recreate it if it were lost? (the status litmus test)

A less-perfect alternative might be to run GCS FUSE on every Google VM and treat it as a special form of hostPath. I don't really want to special case it, though, so it's somewhat less attractive.
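For illustration, that hostPath variant would look roughly like this (it assumes a node agent, not shown, keeps the bucket mounted with gcsfuse at /mnt/gcs on every VM; the image name is a placeholder):

```yaml
# Sketch of the "less-perfect alternative": consume a node-level gcsfuse mount via hostPath.
apiVersion: v1
kind: Pod
metadata:
  name: gcs-hostpath-example
spec:
  containers:
  - name: app
    image: my-app-image                # placeholder
    volumeMounts:
    - name: gcs
      mountPath: /data
  volumes:
  - name: gcs
    hostPath:
      path: /mnt/gcs                   # assumed node-level gcsfuse mountpoint, managed outside Kubernetes
```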

@bgrant0607 changed the title from "Mounting Google Cloud Storage into a container" to "FUSE volumes" on May 18, 2016
@bgrant0607 (Member) commented May 18, 2016

Related: #831

@bgrant0607 (Member) commented May 18, 2016

FUSE is mainly needed for writes? Otherwise, it seems simpler to just fetch a tar.gz and unpack it into an emptyDir.
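For the read-only case, a rough sketch of that alternative (the URL and image names are placeholders):

```yaml
# Sketch: fetch and unpack an archive into an emptyDir before the main container starts.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-fetch-example
spec:
  initContainers:
  - name: fetch
    image: alpine
    command: ["/bin/sh", "-c"]
    args:
    - wget -O /tmp/data.tar.gz https://example.com/data.tar.gz && tar -xzf /tmp/data.tar.gz -C /data
    volumeMounts:
    - name: data
      mountPath: /data
  containers:
  - name: app
    image: my-app-image                # placeholder consumer of the unpacked data
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true
  volumes:
  - name: data
    emptyDir: {}
```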

@jefflaplante commented Oct 26, 2016

Has anyone worked on this further? I would like to be able to mount FUSE volumes like other PersistentVolumes in Kubernetes, as an alternative to NFS or GlusterFS for multi-container read-write.

@Stono commented Mar 1, 2017

For those still playing with this: I ended up using the preStop and postStart lifecycle hooks and running the fuse command there, which results in very similar behaviour.

https://karlstoney.com/2017/03/01/fuse-mount-in-kubernetes/

It'd be awesome however to have the ability to do fuse mounts as PersistentVolumes in Kubernetes.
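For reference, the shape of that workaround is roughly the following (image, bucket and mountpoint are placeholders; the container still needs enough privileges - e.g. privileged, or SYS_ADMIN plus access to /dev/fuse - for the mount to succeed):

```yaml
# Sketch of the lifecycle-hook workaround: mount on postStart, unmount on preStop.
containers:
- name: app
  image: my-gcsfuse-image              # placeholder image with gcsfuse installed
  securityContext:
    privileged: true
  lifecycle:
    postStart:
      exec:
        command: ["gcsfuse", "my-bucket", "/mnt/bucket"]
    preStop:
      exec:
        command: ["fusermount", "-u", "/mnt/bucket"]
```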

@baracoder commented Mar 24, 2017

I have a similar issue. I would like to use ceph-fuse because of frequent problems with the kernel driver.

Using lifecycle hooks is a valid workaround, but the issue still stands because the Docker images need to be modified for it. Is it possible to use privileged init containers for this somehow?

@sunshinekitty commented Apr 4, 2017

+1

@nickschuch commented Apr 4, 2017

We managed to get this going as a flexvolume. I'll look at what it will take for us to publish the code (it's a bit rough, but it demonstrates the point).
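To give an idea of how a pod would consume such a driver (the driver name and options below are placeholders, not a description of the unpublished implementation):

```yaml
# Sketch: referencing a hypothetical s3fs FlexVolume driver installed on each node.
volumes:
- name: bucket
  flexVolume:
    driver: example.com/s3fs           # placeholder driver name
    options:
      bucket: my-bucket
      mountOptions: allow_other
```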

@matesitox commented Apr 18, 2017

+1

@davidberardozzi commented Apr 21, 2017

+1

@ghost commented Apr 22, 2017

Damn, two years and no interest from Google in this.

@maxekman commented Apr 22, 2017

+1, this would indeed be a very useful feature. We have a current use case for storing DB backups directly in GCS for example.

@danielqsj (Member) commented May 9, 2017

+1 for ceph-fuse

@fejta-bot commented Aug 21, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@halcyondude commented Aug 28, 2019

/remove-lifecycle stale

@halcyondude commented Aug 28, 2019

Having the basic capability (mount a bucket into a pod) seems like a no-brainer. We're in the middle of implementing either lifecycle hooks that mount with gcsfuse and unmount, or running an NFS server in the cluster backed by the bucket. Both are suboptimal. We'd ideally like this issue not to age off, as it's a real scenario that would make sense to support.

@spacebel commented Aug 28, 2019

Just a note for all participants in this thread: I had based an entire system on a GCS bucket as the data store for files...

We have since moved away from this approach, because mounting a bucket as a file system through gcsfuse is not reliable at all. Sometimes files are not found, sometimes the number of requests sent to the API exceeds the limits, and sometimes reads/writes are extremely slow.

We have moved to NFS.

@fejta-bot commented Nov 26, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@MattMS commented Nov 26, 2019

111 people want this (and 11 ❤️ it), so I think the following is necessary:

/remove-lifecycle stale

@fejta-bot commented Feb 24, 2020

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@Sturgelose commented Feb 24, 2020

/remove-lifecycle stale

Just go check comment #7890 (comment) - it has only increased in likes and love since then :D

@jalberti commented Apr 2, 2020

@thockin I just came across #7890 (comment) again, and want to remark that FUSE daemons do not require privileges anymore since http://lkml.iu.edu/hypermail/linux/kernel/1806.0/04385.html. They do need the unshare and mount syscalls, which are blocked by Docker's default seccomp policy, but with a custom seccomp policy they can be made available to unprivileged containers as well. Knowing this, is there a way to make this FUSE discussion a priority? Unfortunately I have not found a way to get the fuse device into my unprivileged container when using K8s, even with custom seccomp policies. Getting the host device itself into the container seems to be the only blocker.
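For context, the per-pod seccomp annotation available at the time looked like this (the profile file name is a placeholder; it would have to exist in the kubelet's seccomp directory on every node and allow the mount and unshare syscalls, and as noted above it still does not expose /dev/fuse to the container):

```yaml
# Sketch: point the pod at a custom seccomp profile; getting /dev/fuse into the
# container remains the unsolved part.
apiVersion: v1
kind: Pod
metadata:
  name: unprivileged-fuse-example
  annotations:
    seccomp.security.alpha.kubernetes.io/pod: localhost/fuse-allowed.json   # placeholder profile
spec:
  containers:
  - name: app
    image: my-fuse-image               # placeholder image containing a FUSE client
```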

@TomOperator commented May 15, 2020

@thockin would the new FUSE update simplify the model you outlined above?

I'd like to see this prioritised and would be willing to contribute.

@fejta-bot commented Aug 13, 2020

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@Ark-kun commented Aug 29, 2020

FUSE volumes might finally solve the issue with Kubernetes' lack of read-write-many volumes.

@dio-trinny commented Sep 9, 2020

Definitely also need this here... running a container as privileged might as well be running it on the VM itself!

@TomOperator commented Sep 9, 2020

This is completely possible with the last couple of updates FUSE has released - I'll start work on this next week.

@fejta-bot commented Oct 9, 2020

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@Bobgy commented Oct 12, 2020

/remove-lifecycle rotten

@baurmatt commented Oct 19, 2020

Hey @TomOperator, did you start working on this?

We'd love to see this get implemented! By the way, this is currently the 10th most upvoted issue in this project! :) Keep the upvotes coming ;)

@scottweitzner commented Dec 11, 2020

+1 This would be an incredibly helpful feature for my team. Any updates?
