FUSE volumes #7890
Comments
This is also possible now through the https://github.com/GoogleCloudPlatform/gcsfuse project, it looks like. |
@zmerlynn were you able to get gcsfuse to work? I'm expecting this won't work until 1.1 release http://stackoverflow.com/questions/31124368/allow-privileged-containers-in-kubernetes-on-google-container-gke |
I haven't tried yet. I was merely noting that it looked like it may be an option. |
We'll want to decide how we want to handle FUSE mounts in general. |
I'm about halfway through coding a volume for https://github.com/s3fs-fuse/s3fs-fuse, based off the NFS implementation while also drawing some inspiration from #17221. Is this something people can see as a viable solution? |
I'd like to see us shake out a solid design for FUSE-based volumes BEFORE we commit to an implementation. |
No arguments from me :) How can I help? |
Ideate :) |
My main goal was to defer the FUSE implementations to packages on the host and then mount them just like the other volumes, e.g. NFS. Maybe we could make a higher-level volume which took properties similar to an fstab/mount? That way users are free to use their own mount implementations and we are just consuming those. That would cut down on the duplication of writing multiple volumes with the same scaffolding, as well as support gcsfuse, s3fs-fuse, Azure Files, etc. Essentially, if you can mount it, we can run it. |
Hmm, scratch that - that was a very raw thought. I see now we pretty much have that via the "mount" package, and volumes provide the higher level. I've updated my goal to creating a "fuse" volume; I'm going to write some code and see what other thoughts come from there. That will also allow us to mount the other FUSE filesystems. |
I just wanted to chime in and say that this would be a huge boon when running on GCE. Right now I'm looking into storage options for the company I work for... There are a number of inferior options but this would be by far the best for our case. |
@kubernetes/sig-storage We discussed FUSE a bit recently and worked out a basic model for it, but it is somewhat complicated to do correctly. Some notes (a rough sidecar approximation is sketched after these notes):
- We need a FUSE daemon per volume (maybe we can find a way to flatten to per-pod, but not sure that is ALWAYS ok)
- FUSE daemons need privileges
- This FUSE daemon must run in the pod's cgroups and net namespace (chargeback), but must NOT run in the pod's IPC or PID namespace (for security)
- It must be killed when the pod terminates
- We need a way to report this container |
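The model above would need kubelet support, but for reference, here is a minimal sketch of the sidecar approximation some users run today: a privileged sidecar runs the FUSE daemon and shares the mount with the app container via mount propagation. The image and bucket names are placeholders, and `mountPropagation` requires a reasonably recent Kubernetes version; treat this as an illustration, not the proposed design.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fuse-sidecar-sketch
spec:
  containers:
  # Privileged sidecar: runs the FUSE daemon inside the pod's cgroups and
  # net namespace, approximating the per-volume daemon described above.
  - name: gcsfuse
    image: example/gcsfuse            # hypothetical image bundling gcsfuse
    securityContext:
      privileged: true                # FUSE mounting needs privileges
    command: ["gcsfuse", "--foreground", "my-bucket", "/shared"]
    volumeMounts:
    - name: shared
      mountPath: /shared
      mountPropagation: Bidirectional # push the FUSE mount back to the host
  # Unprivileged app container: sees the mount via propagation.
  - name: app
    image: busybox
    command: ["sh", "-c", "ls /shared && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /shared
      mountPropagation: HostToContainer
  volumes:
  - name: shared
    emptyDir: {}
```

Note that this deliberately violates the isolation points above: the daemon shares the pod's namespaces and must run privileged.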
A less-perfect alternative might be to run GCS FUSE on every Google VM and treat it as a special form of hostPath. I don't really want to special case it, though, so it's somewhat less attractive. |
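For illustration, that hostPath variant would look something like the sketch below, assuming a node startup script has already mounted the bucket with gcsfuse at /mnt/gcs on every VM (the path and bucket are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gcs-hostpath-sketch
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls /data && sleep 3600"]
    volumeMounts:
    - name: gcs
      mountPath: /data
  volumes:
  - name: gcs
    hostPath:
      path: /mnt/gcs   # where the node's out-of-band gcsfuse mount lives
```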
Related: #831 |
FUSE is mainly needed for writes? Otherwise, it seems simpler to just fetch a tar.gz and unpack it into an emptyDir. |
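For the read-only case that suggestion is straightforward today. A minimal sketch (the archive URL is a placeholder): an init container downloads and unpacks the archive into an emptyDir that the main container then reads.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tarball-emptydir-sketch
spec:
  initContainers:
  # Fetch and unpack the archive before the app container starts.
  - name: fetch
    image: busybox
    command: ["sh", "-c", "wget -qO- https://example.com/data.tar.gz | tar -xzf - -C /data"]
    volumeMounts:
    - name: data
      mountPath: /data
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls /data && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true
  volumes:
  - name: data
    emptyDir: {}
```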
Has anyone worked on this further? I would like to be able to mount FUSE volumes like other PersistentVolumes in Kubernetes, as an alternative to NFS or GlusterFS for multi-container read-write. |
For those still playing with this, I ended up using the approach from https://karlstoney.com/2017/03/01/fuse-mount-in-kubernetes/. It'd be awesome, however, to have the ability to do FUSE mounts as PersistentVolumes in Kubernetes. |
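Roughly, the lifecycle-hook pattern from that post looks like the sketch below. It assumes the image bundles gcsfuse and that the container runs privileged; the image, bucket, and path names are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gcsfuse-lifecycle-sketch
spec:
  containers:
  - name: app
    image: example/app-with-gcsfuse   # hypothetical image that includes gcsfuse
    securityContext:
      privileged: true                # required for the FUSE mount
    lifecycle:
      postStart:
        exec:
          command: ["gcsfuse", "my-bucket", "/mnt/gcs"]   # mount on start
      preStop:
        exec:
          command: ["fusermount", "-u", "/mnt/gcs"]       # unmount on stop
```

One caveat: postStart is not guaranteed to complete before the container's entrypoint runs, so the app may need to wait for the mount to appear.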
I have a similar issue. I would like to use ceph-fuse because of frequent problems with the kernel driver. Using lifecycle hooks is a valid workaround, but the issue is still valid because the Docker images need to be modified this way. Is it possible to use privileged init containers for this somehow? |
+1 |
We managed to get this going as a flexvolume. I'll look at what it will take for us to publish the code (a bit rough, but it demonstrates the point). |
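For anyone unfamiliar with flexvolume, the consumption side looks roughly like this, assuming a driver script (the hypothetical example.com/fuse below) has been installed under the kubelet's flexvolume plugin directory on every node; the driver name and options are placeholders that get passed to the driver's mount call:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: flexvolume-fuse-sketch
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls /data && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    flexVolume:
      driver: "example.com/fuse"   # hypothetical vendor/driver name
      options:                     # passed verbatim to the driver script
        bucket: "my-bucket"
```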
+1 |
+1 |
damn, two years and no interest from Google in this. |
+1, this would indeed be a very useful feature. We have a current use case for storing DB backups directly in GCS for example. |
+1 for ceph-fuse |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
/remove-lifecycle stale |
Having the basic capability (mount a bucket into a pod) seems like a no brainer. We're in the middle of implementing either lifecycle rules that mount using gcsfuse and unmount, or running an NFS server in the cluster backed by the bucket. Both are suboptimal. Would ideally like to not see this issue age off as it's a real scenario that would make sense to support. |
Just a note for all participants in this thread: I had based an entire system on a GCS bucket for the implementation of a data store of files... We have now moved away from this technology because mounting a bucket as a file system through gcsfuse is totally unreliable. Sometimes files are not found, sometimes the number of requests sent to the API exceeds the limits, sometimes reads/writes are absolutely slow. We have moved to NFS. |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
111 people want this (and 11 love it). /remove-lifecycle stale |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
/remove-lifecycle stale Just went to check comment #7890 (comment) again; it has now increased in likes and love :D |
@thockin I just came across #7890 (comment) again, and want to remark that FUSE daemons do not require privileges anymore since http://lkml.iu.edu/hypermail/linux/kernel/1806.0/04385.html, though they need unshare and mount, which by default (at least for Docker) are blocked by the default seccomp policy, but can be made available to unprivileged containers with a custom seccomp policy. Knowing this, is there a way to make this FUSE discussion a priority? Unfortunately I have not found a way to get the fuse device into my unprivileged container, even with custom seccomp policies, when using Kubernetes. Getting the host device itself into the container seems to be the only blocker. |
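To make the seccomp part concrete: a sketch of attaching a custom profile that permits mount and unshare. The profile file (the hypothetical fuse.json below) must be installed under the kubelet's seccomp directory on each node, and the securityContext.seccompProfile field assumes a newer cluster (older ones used an annotation). As noted above, this still does not expose /dev/fuse, so it is not sufficient on its own.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: unprivileged-fuse-attempt
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: fuse.json  # hypothetical profile allowing mount/unshare
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
```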
@thockin would the new FUSE update simplify the model you outlined? I'd like to see this prioritised and would be willing to contribute. |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
FUSE volumes might finally solve the issue with Kubernetes' lack of read-write-many volumes. |
defo also need this here... running a container as privileged might as well be running it on the VM itself! |
This is completely possible with the last couple of updates FUSE released - I'll start work on this next week. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
/remove-lifecycle rotten |
Hey @TomOperator, did you start working on this? We'd love to see this get implemented! Btw, this is currently the 10th most upvoted issue in this project! :) Keep the upvotes coming ;) |
+1 This would be an incredibly helpful feature for my team. Any updates? |
I have a use case where I would like to mount a Google Cloud Storage (GCS) bucket (and a directory in that bucket) in my container and use it as a regular FS. Currently, it seems doable using s3fs-fuse (https://github.com/s3fs-fuse/s3fs-fuse) - thanks @brendanburns. It would be great if GCS was supported as a first-class Volume in Kubernetes.