
FUSE volumes #7890

Open
rsokolowski opened this issue May 7, 2015 · 139 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. sig/storage Categorizes an issue or PR as relevant to SIG Storage.

Comments

@rsokolowski
Contributor

I have a use case where I would like to mount a Google Cloud Storage (GCS) bucket (and a directory within that bucket) in my container and use it as a regular filesystem. Currently, it seems doable using s3fs-fuse (https://github.com/s3fs-fuse/s3fs-fuse) - thanks @brendanburns. It would be great if GCS were supported as a first-class Volume in Kubernetes.
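
For reference, a minimal sketch of the privileged in-container approach discussed here, assuming gcsfuse is baked into the image and the cluster allows privileged containers (image and bucket names are placeholders):

```yaml
# Minimal sketch, assuming gcsfuse is installed in the image and that the
# cluster allows privileged containers. Image and bucket names are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: gcsfuse-example
spec:
  containers:
  - name: app
    image: example/gcsfuse-app        # hypothetical image bundling gcsfuse
    securityContext:
      privileged: true                # FUSE needs access to /dev/fuse and mount(2)
    command: ["/bin/sh", "-c"]
    args:
    - mkdir -p /mnt/bucket && gcsfuse my-bucket /mnt/bucket && exec sleep infinity
```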

@roberthbailey roberthbailey added priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. team/cluster labels May 7, 2015
@zmerlynn
Member

This is also possible now through the https://github.com/GoogleCloudPlatform/gcsfuse project, it looks like.

@rboyd

rboyd commented Oct 19, 2015

@zmerlynn were you able to get gcsfuse to work? I'm expecting this won't work until the 1.1 release: http://stackoverflow.com/questions/31124368/allow-privileged-containers-in-kubernetes-on-google-container-gke

@zmerlynn
Member

I haven't tried yet. I was merely noting that it looked like it may be an option over s3fs-fuse for GCS, which would require the same privileges, presumably.

@thockin
Member

thockin commented Oct 20, 2015

We'll want to decide how we want to handle FUSE mounts in general - there are potentially a LOT of neat things we can do, but FUSE is (historically) known to be less than 100% reliable. The simplest approach is to push it all into the user's space and require privileges. Perhaps there are more interesting ways to manage it?


@nickschuch

I'm about halfway through coding a volume for https://github.com/s3fs-fuse/s3fs-fuse, basing it off the NFS implementation while also drawing some inspiration from #17221. Is this something people can see as a viable solution?

@thockin
Member

thockin commented Dec 14, 2015

I'd like to see us shake out a solid design for FUSE-based volumes BEFORE we argue about the merit of any one volume.


@nickschuch

No arguments from me :) How can I help?

@thockin
Member

thockin commented Dec 14, 2015

ideate :)


@nickschuch

My main goal was to defer the FUSE implementations to packages on the host and then mount them just like the other volumes, e.g. NFS.

Maybe we could make a higher-level volume which took properties similar to an fstab/mount? That way users are free to use their own mount implementations and we are just using those. That would cut down on duplication of writing multiple volumes with the same scaffolding, as well as support gcsfuse, s3fs-fuse, Azure Files, etc. Essentially, if you can mount it, we can run it.

@nickschuch

Hmm, scratch that, that was a very raw thought. I see now we pretty much have that via the "mount" package, and volumes provide a higher level on top of it.

I've updated my goal to creating a "fuse" volume; I'm going to write some code and see what other thoughts come from there. That would allow us to also mount the other FUSE filesystems.

@pnovotnak

I just wanted to chime in and say that this would be a huge boon when running on GCE. Right now I'm looking into storage options for the company I work for... There are a number of inferior options but this would be by far the best for our case.

@thockin
Member

thockin commented Jan 22, 2016

@kubernetes/sig-storage We discussed FUSE a bit recently and worked out a basic model for it, but it is somewhat complicated to do correctly.

Some notes:

We need a FUSE daemon per volume (maybe we can find a way to flatten to per-pod, but not sure that is ALWAYS ok)

FUSE daemons need privileges

This FUSE daemon must run in the pod’s cgroups and net namespace (chargeback), but must NOT run in the pod’s IPC or PID namespace (for security)

It must be killed when the pod terminates.

We need a way to report this container:

  • add it to pod.spec.containers?
  • pod.status.adminContainers?
    • bad - what image was run?  can we recreate this if it were lost (the status litmus test)

A less-perfect alternative might be to run GCS FUSE on every Google VM and treat it as a special form of hostPath. I don't really want to special case it, though, so it's somewhat less attractive.

@bgrant0607 bgrant0607 changed the title Mounting Google Cloud Storage into a container FUSE volumes May 18, 2016
@bgrant0607 bgrant0607 added sig/storage Categorizes an issue or PR as relevant to SIG Storage. kind/feature Categorizes issue or PR as related to a new feature. labels May 18, 2016
@bgrant0607
Member

Related: #831

@bgrant0607
Member

FUSE is mainly needed for writes? Otherwise, it seems simpler to just fetch a tar.gz and unpack it into an emptyDir.

@jefflaplante

Has anyone worked on this further? I would like to be able to mount FUSE volumes like other PersistentVolumes in Kubernetes as an alternative to NFS or GlusterFS for multi-container read-write.

@Stono

Stono commented Mar 1, 2017

For those still playing with this: I ended up using the preStop and postStart lifecycle hooks to run the FUSE mount command, which results in very similar behaviour.

https://karlstoney.com/2017/03/01/fuse-mount-in-kubernetes/

It'd be awesome however to have the ability to do fuse mounts as PersistentVolumes in Kubernetes.
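
For reference, a minimal sketch of that lifecycle-hook approach, assuming gcsfuse is installed in the image, the mount path already exists in the image, and privileged mode is allowed (image and bucket names are placeholders):

```yaml
# Minimal sketch of the lifecycle-hook approach; names are illustrative and
# gcsfuse must be present in the image.
apiVersion: v1
kind: Pod
metadata:
  name: fuse-lifecycle-example
spec:
  containers:
  - name: app
    image: example/app-with-gcsfuse   # hypothetical image bundling gcsfuse
    securityContext:
      privileged: true                # in-container FUSE mount needs privileges
    lifecycle:
      postStart:
        exec:
          command: ["gcsfuse", "my-bucket", "/mnt/bucket"]   # mount after start
      preStop:
        exec:
          command: ["fusermount", "-u", "/mnt/bucket"]       # unmount before stop
```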

@baracoder

I have a similar issue. I would like to use ceph-fuse because of frequent problems with the kernel driver.

Using lifecycle hooks is a valid workaround, but the issue still stands because the Docker images need to be modified for it. Is it possible to use privileged init containers for this somehow?

@sunshinekitty

+1

@nickschuch

We managed to get this going as a flexvolume. I'll look at what it will take for us to publish the code (it's a bit rough, but it demonstrates the point).

@matesitox

+1

@davidberardozzi

+1

@ghost

ghost commented Apr 22, 2017

damn, two years and no interest from Google in this.

@maxekman

+1, this would indeed be a very useful feature. We have a current use case for storing DB backups directly in GCS for example.

@danielqsj
Contributor

+1 for ceph-fuse

@mehulparmariitr

mehulparmariitr commented Mar 7, 2023

Currently in 2023, what is the best way to mount a gcp bucket in a pod in kubernetes without privileged mode?

@gmile
Contributor

gmile commented Mar 7, 2023

Currently in 2023, what is the best way to mount a gcp bucket in a pod in kubernetes without privileged mode?

We've been using https://cloud.google.com/storage/docs/gcs-fuse for several years now.

Mounting is done via a sidecar container pattern in k8s.
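
For reference, a rough sketch of that sidecar pattern, assuming gcsfuse is installed in the sidecar image and GCP credentials are available to it (all names are placeholders). The sidecar needs privileges for the FUSE mount and for Bidirectional mount propagation; the app container only needs HostToContainer propagation. Note that later comments in this thread warn that a FUSE mount inside an emptyDir can hang pod teardown, and suggest a hostPath instead.

```yaml
# Rough sketch of the sidecar pattern; names are illustrative and gcsfuse must
# be present in the sidecar image. Bidirectional propagation requires a
# privileged sidecar.
apiVersion: v1
kind: Pod
metadata:
  name: gcsfuse-sidecar-example
spec:
  volumes:
  - name: bucket
    emptyDir: {}
  containers:
  - name: gcsfuse-sidecar
    image: example/gcsfuse            # hypothetical image with gcsfuse installed
    securityContext:
      privileged: true
    command: ["gcsfuse", "--foreground", "my-bucket", "/mnt/bucket"]
    volumeMounts:
    - name: bucket
      mountPath: /mnt/bucket
      mountPropagation: Bidirectional # pushes the FUSE mount back to the host path
  - name: app
    image: example/app                # hypothetical application image
    volumeMounts:
    - name: bucket
      mountPath: /data
      mountPropagation: HostToContainer
```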

@mehulparmariitr

mehulparmariitr commented Mar 7, 2023

Currently in 2023, what is the best way to mount a gcp bucket in a pod in kubernetes without privileged mode?

We've been using https://cloud.google.com/storage/docs/gcs-fuse for several years now.

Mounting is done via a sidecar container pattern in k8s.

Do you have a sample of how to use it? Is there built-in support to mount Google buckets like NFS?

@coulof

coulof commented May 12, 2023

And there is a CSI driver for it: https://github.com/GoogleCloudPlatform/gcs-fuse-csi-driver
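
For reference, a rough sketch of a CSI ephemeral volume using that driver. The driver name, pod annotation, and volumeAttributes keys below are recalled from the driver's public docs and should be verified against the repository's examples; bucket, image, and service account names are placeholders:

```yaml
# Rough sketch; verify driver name, annotation, and attribute keys against the
# gcs-fuse-csi-driver repository. All names are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: gcsfuse-csi-example
  annotations:
    gke-gcsfuse/volumes: "true"        # asks the driver's webhook to inject its sidecar
spec:
  serviceAccountName: my-ksa           # hypothetical KSA bound to a GCP SA via Workload Identity
  containers:
  - name: app
    image: example/app                 # hypothetical application image
    volumeMounts:
    - name: bucket
      mountPath: /data
  volumes:
  - name: bucket
    csi:
      driver: gcsfuse.csi.storage.gke.io
      volumeAttributes:
        bucketName: my-bucket          # illustrative bucket name
```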

@xing-yang
Contributor

@rsokolowski I'm going to close this issue since it is already implemented. Please feel free to re-open if it is still an issue.
/close

@k8s-ci-robot
Contributor

@xing-yang: Closing this issue.

In response to this:

@rsokolowski I'm going to close this issue since it is already implemented. Please feel free to re-open if it is still an issue.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@baurmatt

@xing-yang The suggested solution is only valid for GCP customers. As those are only a minority, could we please reopen this issue to get a generic implementation for FUSE devices? There are quite a lot more use cases described in this issue than a GCS bucket mount.

@EdSchouten
Contributor

The suggested solution is only valid for GCP customers. As those are only a minority, could we please reopen this issue to get a generic implementation for FUSE devices? There are quite a lot more use cases described in this issue than a GCS bucket mount.

Indeed. Kubernetes should work towards providing support for creating arbitrary FUSE mounts inside of pods. Only supporting the creation of certain types of FUSE mounts through volumes/volumeMounts is insufficient.

I am the author of Buildbarn, which is a distributed build cluster for Bazel. Buildbarn has an integrated FUSE file system to speed up execution of build actions, by performing lazy loading of input files. What makes it hard to use right now is this issue that @KyleSanderson reported further up:

so, you could always do this by passing /dev/fuse to a sidecar, and then using EmptyDir to serve the shared mount within it. The problem is that when the containers terminate, EmptyDir has a bug where it stays around forever because the endpoint is disconnected...

Right now I need to tell all of my users that they need to place the FUSE mount inside of a hostPath instead of an emptyDir, because using an emptyDir causes your Kubernetes nodes to effectively brick themselves, due to pod teardown getting stuck. In my opinion this shouldn't be necessary.
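
For reference, a sketch of that hostPath workaround: the FUSE-serving container is privileged and publishes the mount onto a hostPath directory with Bidirectional propagation, while the consumer container stays unprivileged (paths and image names are placeholders):

```yaml
# Sketch under the assumptions above: FUSE mountpoint on a hostPath (not an
# emptyDir), privileged FUSE-serving container, unprivileged consumer.
apiVersion: v1
kind: Pod
metadata:
  name: fuse-hostpath-example
spec:
  volumes:
  - name: fuse-mounts
    hostPath:
      path: /var/lib/fuse-mounts      # hypothetical host directory for mountpoints
      type: DirectoryOrCreate
  containers:
  - name: fuse-server
    image: example/fuse-server        # hypothetical image running the FUSE daemon
    securityContext:
      privileged: true                # needed for /dev/fuse, mount(2), Bidirectional
    volumeMounts:
    - name: fuse-mounts
      mountPath: /mnt/fuse
      mountPropagation: Bidirectional
  - name: worker
    image: example/worker             # hypothetical unprivileged consumer
    volumeMounts:
    - name: fuse-mounts
      mountPath: /mnt/fuse
      mountPropagation: HostToContainer
```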

@xing-yang
Contributor

/reopen

@k8s-ci-robot k8s-ci-robot reopened this May 31, 2023
@k8s-ci-robot
Contributor

@xing-yang: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label May 31, 2023
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

anguslees added a commit to anguslees/bb-deployments that referenced this issue Jul 4, 2023
Big change: use bb-worker's FUSE filesystem for runner build directory.

Requires worker to be a privileged container, so it can set up and share the FUSE mount point with the runner container. Runner remains unprivileged, however (good).

Note FUSE mountpoint can't be in an emptyDir, and needs to be on a hostPath
filesystem.  See github comment and surrounding discussion:
kubernetes/kubernetes#7890 (comment)
@xing-yang
Contributor

Please follow up with discussions in this issue: #70013

@EdSchouten
Contributor

Please follow up with discussions in this issue: #70013

That issue is not the same as this one.

@naoki9911

naoki9911 commented Nov 22, 2023

We are developing meta-fuse-csi-plugin, a dedicated CSI driver for arbitrary FUSE implementations.
https://github.com/pfnet-research/meta-fuse-csi-plugin

It is still at an early stage of development, but it can already run and mount some FUSE implementations (e.g. mountpoint-s3) in unprivileged pods.
Details are described in this blog post:
https://tech.preferred.jp/en/blog/meta-fuse-csi-plugin/

Any comments and suggestions are welcome!

@thockin
Member

thockin commented Feb 14, 2024

@msau42 should we close this? It's clearly possible to write FUSE drivers now.

@EdSchouten
Contributor

EdSchouten commented Feb 14, 2024

No, we can't.

This isn't necessarily about being able to write CSI drivers that use FUSE. It is about being able to safely create FUSE mounts inside of pods without causing Kubelet to hang upon pod teardown.

I am the maintainer of a distributed build cluster named Buildbarn. Buildbarn can perform builds inside of a FUSE mount that lazily loads input files. This speeds up builds significantly, as less time is spent downloading files from storage.

We are currently affected by this issue. It is NOT a solution for us to run Buildbarn as a CSI driver, because it simply is not a storage driver. It's just an application that happens to make use of FUSE.

@anguslees
Member

It is about being able to safely create FUSE mounts inside of pods without causing Kubelet to hang upon pod teardown.

Right. Should we retitle this bug to "Unable to provide FUSE mounts from a pod safely"?

The original comment asks for a GCS volume type, which we're clearly no longer talking about.

@KyleSanderson

It is about being able to safely create FUSE mounts inside of pods without causing Kubelet to hang upon pod teardown.

Right. Should we retitle this bug to "Unable to provide FUSE mounts from a pod safely"?

The original comment asks for a GCS volume type, which we're clearly no longer talking about.

Not really - it's more general than that, which is why it's both frustrating and kind of hilarious that this is still here... If you have a single mount in a pod that does not get unmounted at shutdown (and there's no way to do that in k8s), pod teardown will continue to fail until the mount goes away. It seems safe to just unmount anything in the same filesystem structure as the running containers. The alternative is to (finally) allow containers to run commands on shutdown.

@thockin
Member

thockin commented Mar 14, 2024

Retitle SGTM

@msau42
Member

msau42 commented Mar 18, 2024

This issue is a bit confusing to understand, as we have discussed many different use cases and issues over the past few years; some have been solved and some have not. I think it would be better to create a new bug that summarizes the current challenges, as there are now multiple FUSE-based CSI drivers being used in production successfully.

@thockin
Member

thockin commented Mar 26, 2024

This is a very long issue - opening a new issue with a recap of the current situation could help, or else if someone can write a nice recap as a comment, I can edit this issue's title and first comment to that.

Specifically, if I understand correctly -- a FUSE mount inside a pod prevents pod teardown because the FUSE-implementing daemon dies before the volume is unmounted? Can someone who knows it well expand on the sequences and errors? Is a process stuck in D or something else? How does the failure actually manifest?
