FUSE volumes #7890
Comments
This is also possible now through the https://github.com/GoogleCloudPlatform/gcsfuse project, it looks like.
@zmerlynn were you able to get gcsfuse to work? I'm expecting this won't work until the 1.1 release: http://stackoverflow.com/questions/31124368/allow-privileged-containers-in-kubernetes-on-google-container-gke
I haven't tried yet. I was merely noting that it looked like it may be an option over ...
We'll want to decide how we want to handle FUSE mounts in general - there ...
I'm about halfway through coding a volume for https://github.com/s3fs-fuse/s3fs-fuse, basing it on the nfs implementation while also drawing some inspiration from #17221. Is this something people can see as a viable solution?
I'd like to see us shake out a solid design for FUSE-based volumes BEFORE ...
No arguments from me :) How can I help?
... ideate :)
My main goal was to defer the FUSE implementations to packages on the host and then mount them just like the other volumes, e.g. NFS. Maybe we could make a higher-level volume which took properties similar to an fstab/mount entry? That way users are free to use their own mount implementations and we are just using those. That would cut down on the duplication of writing multiple volumes with the same scaffolding, as well as support gcsfuse, s3fs-fuse, azure files, etc. Essentially, if you can mount it, we can run it.
Hmm, scratch that, that was a very raw thought; I see now we pretty much have that via the "mount" package, and volumes provide a higher level. I've updated my goal to creating a "fuse" volume; I'm going to write some code and see what other thoughts come from there. That would allow us to also mount the other FUSE filesystems.
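For illustration of that idea only: nothing like the stanza below exists in Kubernetes. The `fuse` key, `mountType`, and `options` fields are hypothetical names invented to show the fstab-like shape being floated here, with the actual mount helper (s3fs, gcsfuse, ...) expected to live on the host.

```yaml
# Hypothetical API sketch only -- there is no "fuse" volume source in Kubernetes.
apiVersion: v1
kind: Pod
metadata:
  name: fuse-volume-idea
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      fuse:                            # invented field, analogous to nfs/glusterfs
        mountType: s3fs                # FUSE helper assumed to be installed on the node
        source: my-bucket              # what fstab would call the device/source
        options: "allow_other,use_path_request_style"
```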
I just wanted to chime in and say that this would be a huge boon when running on GCE. Right now I'm looking into storage options for the company I work for... There are a number of inferior options but this would be by far the best for our case.
@kubernetes/sig-storage We discussed FUSE a bit recently and worked out a basic model for it, but it is somewhat complicated to do correctly. Some notes:

- We need a FUSE daemon per volume (maybe we can find a way to flatten to per-pod, but not sure that is ALWAYS ok)
- FUSE daemons need privileges
- This FUSE daemon must run in the pod's cgroups and net namespace (chargeback), but must NOT run in the pod's IPC or PID namespace (for security)
- It must be killed when the pod terminates.
- We need a way to report this container: ...
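A rough sketch of how this model is often approximated with today's primitives, rather than anything prescribed in this thread: a privileged FUSE sidecar mounts into a shared emptyDir and propagates the mount to the app container. The image name and bucket are placeholders; note the sidecar still shares the pod's IPC namespace, and this does nothing about the teardown-ordering problem discussed later in this issue.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fuse-sidecar-sketch
spec:
  volumes:
    - name: bucket
      emptyDir: {}
  containers:
    - name: gcsfuse                        # the FUSE daemon runs in its own container
      image: example.com/gcsfuse-sidecar   # placeholder image with gcsfuse installed
      securityContext:
        privileged: true                   # needed to access /dev/fuse and call mount(2)
      command: ["gcsfuse", "--foreground", "my-bucket", "/mnt/bucket"]
      volumeMounts:
        - name: bucket
          mountPath: /mnt/bucket
          mountPropagation: Bidirectional  # propagate the FUSE mount back out of this container
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: bucket
          mountPath: /data
          mountPropagation: HostToContainer  # receive mounts created after container start
```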
A less-perfect alternative might be to run GCS FUSE on every Google VM and treat it as a special form of hostPath. I don't really want to special-case it, though, so it's somewhat less attractive.
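Consuming such a node-level mount would just be a hostPath volume; a pod spec fragment, with /mnt/gcs/my-bucket as a placeholder for wherever the per-VM gcsfuse mount would live:

```yaml
  # pod spec fragment, assuming gcsfuse is already mounted on every node
  volumes:
    - name: gcs
      hostPath:
        path: /mnt/gcs/my-bucket   # placeholder: node-level gcsfuse mountpoint
        type: Directory
```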
Related: #831
FUSE is mainly needed for writes? Otherwise, it seems simpler to just fetch a tar.gz and unpack it into an emptyDir.
Has anyone worked on this further? I would like to be able to mount FUSE volumes like other PersistentVolumes in Kubernetes as an alternative to NFS or glusterfs for multi-container read-write.
For those still playing with this, I ended up using the approach from https://karlstoney.com/2017/03/01/fuse-mount-in-kubernetes/. It'd be awesome, however, to have the ability to do FUSE mounts as PersistentVolumes in Kubernetes.
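If I read that post correctly, the approach boils down to a privileged app container that mounts in a postStart hook and unmounts in preStop; a minimal sketch with placeholder image and bucket names (the unmount is best-effort and will not run on forceful kills):

```yaml
  containers:
    - name: app
      image: example.com/app-with-gcsfuse   # placeholder: app image with gcsfuse baked in
      securityContext:
        privileged: true                    # FUSE needs /dev/fuse and mount privileges
      lifecycle:
        postStart:
          exec:
            command: ["gcsfuse", "my-bucket", "/mnt/bucket"]   # mount after the container starts
        preStop:
          exec:
            command: ["fusermount", "-u", "/mnt/bucket"]       # best-effort unmount on graceful stop
```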
I have a similar issue. I would like to use ceph-fuse because of frequent problems with the kernel driver. Using lifecycle hooks is a valid workaround, but the issue is still valid because the docker images need to be modified this way. Is it possible to use privileged init containers for this somehow?
+1
We managed to get this going as a flexvolume. I'll look at what it will take for us to publish the code (a bit rough, but it demonstrates the point).
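For context on what consuming a flexvolume like that looks like, this is only the generic shape of an inline flexVolume source; the driver name and options are placeholders for whatever the published driver ends up using:

```yaml
  # pod spec fragment; driver and options are placeholders
  volumes:
    - name: fuse-data
      flexVolume:
        driver: "example.com/s3fs"       # flexvolume drivers are named <vendor>/<driver>
        options:
          bucket: my-bucket              # options are passed through to the driver script
          mountOptions: allow_other
```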
+1
+1
damn, two years and no interest from Google in this.
+1, this would indeed be a very useful feature. We have a current use case for storing DB backups directly in GCS, for example.
+1 for ceph-fuse
Currently, in 2023, what is the best way to mount a GCP bucket in a pod in Kubernetes without privileged mode?
We've been using https://cloud.google.com/storage/docs/gcs-fuse for several years now. Mounting is done via a sidecar container pattern in k8s.
Do you have a sample of how to use it? Is there built-in support to mount Google buckets like NFS?
And there is a CSI driver for it: https://github.com/GoogleCloudPlatform/gcs-fuse-csi-driver
@rsokolowski Can you use this CSI driver? https://github.com/GoogleCloudPlatform/gcs-fuse-csi-driver
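A rough example of how that CSI driver is consumed on GKE, based on its documentation at the time of writing (bucket name and service account are placeholders; check the project's README for the authoritative syntax):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gcs-fuse-csi-example
  annotations:
    gke-gcsfuse/volumes: "true"        # asks GKE to inject the gcsfuse sidecar
spec:
  serviceAccountName: my-ksa           # placeholder: needs Workload Identity access to the bucket
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: gcs
          mountPath: /data
  volumes:
    - name: gcs
      csi:
        driver: gcsfuse.csi.storage.gke.io
        volumeAttributes:
          bucketName: my-bucket        # placeholder bucket
```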
@rsokolowski I'm going to close this issue since it is already implemented. Please feel free to re-open if it is still an issue.
@xing-yang: Closing this issue.
@xing-yang The suggested solution is only valid for GCP customers. As those are only a minority, could we please reopen this issue to get a generic implementation for FUSE devices? There are quite a few more use cases described in this issue than a GCS bucket mount.
Indeed. Kubernetes should work towards providing support for creating arbitrary FUSE mounts inside of pods. Only supporting the creation of certain types of FUSE mounts through volumes/volumeMounts is insufficient. I am the author of Buildbarn, which is a distributed build cluster for Bazel. Buildbarn has an integrated FUSE file system to speed up execution of build actions, by performing lazy loading of input files. What makes it hard to use right now is this issue that @KyleSanderson reported further up:
Right now I need to tell all of my users that they need to place the FUSE mount inside of a hostPath instead of an emptyDir, because using an emptyDir causes your Kubernetes nodes to effectively brick themselves, due to pod teardown getting stuck. In my opinion this shouldn't be necessary.
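For completeness, the workaround being described is just backing the directory that will contain the FUSE mount with a hostPath instead of an emptyDir; the path below is a placeholder:

```yaml
  # pod spec fragment illustrating the hostPath workaround
  volumes:
    - name: build-dir
      hostPath:
        path: /var/lib/buildbarn/build   # placeholder host directory for the FUSE mountpoint
        type: DirectoryOrCreate
```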
/reopen |
@xing-yang: Reopened this issue.
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label.
Big change: use bb-worker's FUSE filesystem for the runner build directory. Requires the worker to be a privileged container, so it can set up and share the FUSE mount point with the runner container. The runner remains unprivileged, however (good). Note the FUSE mountpoint can't be in an emptyDir, and needs to be on a hostPath filesystem. See the GitHub comment and surrounding discussion: kubernetes/kubernetes#7890 (comment)
Please follow up with discussions in this issue: #70013 |
That issue is not the same as this one.
We are developing meta-fuse-csi-plugin, a dedicated CSI driver for arbitrary FUSE implementations. It is still at an early stage of development, but it can already run and mount some FUSE implementations (e.g. mountpoint-s3) in unprivileged pods. Any comments and suggestions are welcome!
@msau42 should we close this? It's clearly possible to write FUSE drivers now.
No, we can't. This isn't necessarily about being able to write CSI drivers that use FUSE. It is about being able to safely create FUSE mounts inside of pods without causing Kubelet to hang upon pod teardown. I am the maintainer of a distributed build cluster named Buildbarn. Buildbarn can perform builds inside of a FUSE mount that lazily loads input files. This speeds up builds significantly, as less time is spent downloading files from storage. We are currently affected by this issue. It is NOT a solution for us to run Buildbarn as a CSI driver, because it simply is not a storage driver. It's just an application that happens to make use of FUSE.
Right. Should we retitle this bug to "Unable to provide FUSE mounts from a pod safely"? The original comment asks for a GCS volume type, which we're clearly no longer talking about.
Not really - it's more general than that, which is why it's both frustrating and kind of hilarious that this is still here... If you have a single mount in a pod that does not get unmounted at shutdown (and there's no way to do that in k8s), pod teardown will continue to fail until the mount ends. It seems safe to just unmount anything in the same filesystem structure as the containers running. The alternative is to (finally) allow containers to run commands on shutdown.
Retitle SGTM
This issue is a bit confusing to understand, as we have discussed many different use cases and issues over the past few years; some have been solved and some have not. I think it would be better to create a new bug that summarizes what the current challenges are, as there are now multiple FUSE-based CSI drivers being used in production successfully.
This is a very long issue - opening a new issue with a recap of the current situation could help, or else if someone can write a nice recap as a comment, I can edit this issue's title and first comment to that. Specifically, if I understand correctly -- a FUSE mount inside a pod prevents pod teardown because the FUSE-implementing daemon dies before the volume is unmounted? Can someone who knows it well expand on the sequence and errors? Is a process stuck in D state, or something else? How does the failure actually manifest?
I have a use case where I would like to mount a Google Cloud Storage (GCS) bucket (and a directory in that bucket) in my container and use it as a regular FS. Currently, it seems doable using s3fs-fuse (https://github.com/s3fs-fuse/s3fs-fuse) - thanks @brendanburns. It would be great if GCS was supported as a first-class Volume in Kubernetes.