
Proposal: Suggest NFS as default filesystem for local dev #1889

Open
eecue opened this issue Aug 25, 2017 · 31 comments

@eecue

commented Aug 25, 2017

Is this a BUG REPORT or FEATURE REQUEST? (choose one): FEATURE REQUEST

Please provide the following details:

Environment:

Minikube version: v0.21.0

  • OS: OS X
  • VM Driver: Virtualbox
  • ISO version: minikube-v0.23.0.iso
  • Install tools: brew
  • Others:

What happened:

We noticed our application ran significantly slower in k8s/minikube compared to vagrant. We changed our local files to mount via NFS instead of 9p and our dev system is now much faster than vagrant.

What you expected to happen:

We should update the docs to recommend using NFS for local dev instead of 9p, at least for OS X, though it probably makes sense on Linux, too. We're going to test on Windows shortly to see if it has the same effect, which I assume it will.

How to reproduce it (as minimally and precisely as possible):

Set up a persistent volume store using NFS and use it instead of 9p:

  1. Update your /etc/exports to allow minikube to access it, then restart nfsd:

$ echo "/Users -alldirs -mapall=$(id -u):$(id -g) 192.168.99.100" | sudo tee -a /etc/exports
$ sudo nfsd restart

  2. Create a PersistentVolume and PersistentVolumeClaim backed by the NFS export:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-volume
spec:
  capacity:
    storage: 15Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  nfs:
    server: 192.168.99.1
    path: /Users/Shared/Sites/
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cr-volume
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 15Gi
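
To consume the claim, a pod references it by name. Here's a minimal sketch; the pod name, image, and mount path are illustrative assumptions, not from the issue:

```shell
# Write a minimal pod manifest that mounts the cr-volume claim defined above.
cat > nfs-test-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: sites
      mountPath: /var/www
  volumes:
  - name: sites
    persistentVolumeClaim:
      claimName: cr-volume
EOF
echo "wrote nfs-test-pod.yaml"
```

After `kubectl apply -f nfs-test-pod.yaml`, `kubectl exec nfs-test -- ls /var/www` should list the files shared over NFS.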

Anything else we need to know:

If you think this is a good idea, I'll make a PR for the update docs. It's been life changing for our devs.

@eecue

Author

commented Aug 25, 2017

This seems to have fixed #1839 for us.

@alanbrent


commented Aug 28, 2017

I've been doing this for a couple of weeks now, because the 9p driver reliably produces data inconsistencies. This is the only way I've been able to create a reliably functioning, reasonably performing minikube-based Kubernetes cluster for local development. It'd be amazing if it became the default.

I'm not sure why the PersistentVolume is needed, though. For "local" storage (e.g. vendor/bundle in Ruby projects) on the Docker host (the minikube VM), I'm just using the /data mount.

@alanbrent


commented Aug 28, 2017

By the way, this also "fixes" (obviates the need to fix) the data inconsistency problems with 9p as outlined in #1515.

@r2d4 r2d4 added the kind/feature label Aug 28, 2017

@nathanleclaire


commented Aug 30, 2017

Cool! You might want to consider starting by extending mount to make NFS an option for mounting a share (then gradually work out the bugs) rather than starting by making it the default. Mounting all of /Users is convenient, but kind of overkill, and NFS itself tends to be a bit finicky. Not sure of the current state of the 9p driver, though.

@nathanleclaire


commented Aug 30, 2017

BTW, Windows won't work with NFS (AFAIK), so you should also consider how you might want to implement things like CIFS/Samba sharing and/or rsync (which would be its own weird hairball, requiring a process monitoring the local filesystem on the host side).

@eecue

Author

commented Aug 30, 2017

@nathanleclaire I believe you can get NFS working with Cygwin, but our Windows dev is back tomorrow, so we will experiment. And yeah, that sounds like a good plan; we actually don't need all of /Users mounted either, so I'm going to narrow that down to just our dev directory.

@alanbrent


commented Aug 30, 2017

In our use case, broadly mounting /Users is acceptable and even desirable. I haven't yet encountered any issues with the NFS mount whatsoever, and it is significantly faster than 9p and vbox. In my local, not-at-all-robust-or-sophisticated testing, the write performance penalty of each solution vs. native is the following:

  • vbox: 95%
  • xhyve+9p: 85%
  • xhyve+NFS: 67%
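
For reference, a crude way to reproduce a sequential-write comparison like the numbers above (the commenter's exact method isn't stated, so this dd invocation is an assumption) is to run it once natively and once inside each mount type, then compare the timings:

```shell
# Time a 64 MiB sequential write plus a sync to flush it to the backing
# store. GNU dd's bs=1M syntax is assumed (use bs=1m with BSD dd on macOS).
time ( dd if=/dev/zero of=./ddtest.bin bs=1M count=64 2>/dev/null && sync )
wc -c < ./ddtest.bin    # 64 MiB = 67108864 bytes
rm -f ./ddtest.bin
```
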
@nathanleclaire


commented Aug 30, 2017

Yeah, no doubt NFS is the fastest of commonly available options. I've definitely seen it have some odd behavior though, so just tipping you off (like you noted, sometimes it also just chugs along without issue as well). There's also a few quirks in setting it up you might want to document -- I recall having to change a setting on my Mac to allow access to lower (more privileged) ports from the VM.

As far as the /Users thing goes, that's mostly a security caveat from me (and definitely personal/team preference). The main attack vector I'd be concerned with there is developers accidentally running a malicious container/image -- if /Users/user/.aws/credentials is present in the VM, it will be easier for attackers to access, etc.

@alanbrent


commented Aug 31, 2017

@nathanleclaire Since we're having this conversation ... have you automated this solution at all? We're in the early stages of doing so, in order to move to a minikube-based local development workflow, and it's a little thorny re: best balance of "make this easy" and "people should know the tools they'll be using every day".

@nathanleclaire


commented Aug 31, 2017

@alanbrent Automated setup of NFS? I haven't but usually to try it out I've used https://github.com/adlogix/docker-machine-nfs (this is what I was referring to about the ports IIRC). That shell script should be relatively translatable to Go code.

@eecue

Author

commented Aug 31, 2017

@alanbrent I do; I have a shell script that automates the complete dev environment setup (on a Mac, and more or less on a PC; it would be easy to port to Linux as well). It does the following:

  1. Install brew
  2. Install docker
  3. Install kubectl
  4. Install minikube
  5. Build the standard site structure that our local dev env uses
  6. Check out the repos for all of our services from gitlab
  7. Set up your k8s specific public key (and help you upload it to gitlab) which is used for composer/gulp
  8. Start and configure minikube
  9. Build all of the docker images needed
  10. Spin up the deployments for k8s
  11. Add hosts entries in /etc/hosts and NFS settings in /etc/exports, restart nfsd
  12. Install MySQL
  13. Configure MySQL to work with k8s
  14. Download and load Dev DB Dump
  15. Configure all the applications mentioned above with k8s friendly settings so everything should work without configuration

It's all non-destructive, so if something fails midway through, or you need to delete something and recreate it, everything still works.

@eecue

Author

commented Aug 31, 2017

@alanbrent @nathanleclaire no real need to automate:

echo "/Users -alldirs -mapall=$(id -u):$(id -g) 192.168.99.100" | sudo tee -a /etc/exports && sudo nfsd restart
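
A slightly safer variant (a sketch; the scratch file below stands in for /etc/exports so you can try it without sudo) only appends when the line isn't already present, which keeps repeated runs non-destructive:

```shell
# Demo against a scratch file. For real use, set EXPORTS=/etc/exports,
# append via `sudo tee -a`, and finish with `sudo nfsd restart`.
EXPORTS=./exports.demo
LINE="/Users -alldirs -mapall=$(id -u):$(id -g) 192.168.99.100"
touch "$EXPORTS"
grep -qxF "$LINE" "$EXPORTS" || echo "$LINE" >> "$EXPORTS"
grep -qxF "$LINE" "$EXPORTS" || echo "$LINE" >> "$EXPORTS"  # second run is a no-op
wc -l < "$EXPORTS"
```
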

@nathanleclaire


commented Aug 31, 2017

Well, you've got to consider mount/export deletion, as well as what happens if the VM IP address changes, elegantly handling the root privilege required to do so (e.g., Vagrant has a section on this), and making sure the right commands are run for users of other Unixes. I run minikube on Linux, for instance.
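
Handling a changed VM IP can be sketched like this, run here against a scratch copy of the exports file; the IPs are placeholders, and in practice the new address would come from `minikube ip`:

```shell
# Rewrite the trailing IP on the /Users export line. For real use:
# EXPORTS=/etc/exports (edited with sudo) and NEW_IP=$(minikube ip).
EXPORTS=./exports.ipdemo
echo "/Users -alldirs -mapall=501:20 192.168.99.100" > "$EXPORTS"
NEW_IP=192.168.99.105
sed -i.bak "s#^\(/Users .*\) [0-9.]*\$#\1 $NEW_IP#" "$EXPORTS"
cat "$EXPORTS"
```

After the rewrite, `sudo nfsd restart` (or the platform's equivalent) would pick up the new export.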

@aaron-prindle

Contributor

commented Aug 31, 2017

This makes a lot of sense, we should definitely document how to get NFS working with minikube on each platform. We can also look at extending 'mount' in the future to make setting up the NFS server easier.

@eecue

Author

commented Aug 31, 2017

On that note @aaron-prindle do you know of anyone that's gotten NFS working on Windows? We're struggling with it currently. Using Cygwin.

@georgecrawford


commented Sep 7, 2017

Hi all.

This is the thread for me! I've been suffering from very slow disk i/o in my default /Users mount (xhyve on OS X), and also the issue at #1515. However, I'm a bit of a novice by comparison, so don't quite understand what I need to do to experiment with NFS.

@eecue is there any chance you could share your shell script that automates the complete dev environment setup, with any private info removed? This is basically exactly what I'm currently doing for my team, and building in NFS improvements would help a great deal.

@alanbrent I was also wondering why the PersistentVolume is required, but didn't understand your comment:

I'm not sure why the PersistentVolume is needed, though. For "local" storage (e.g. vendor/bundle in Ruby projects) on the Docker host (minikube vm), I'm just using the /data mount.

What is the /data mount? If you need to do something to set that up, what is it? Like you, I'm very happy if the whole of /Users is mounted into the VM.

Thanks for any help you can offer!

@Lookyan


commented Sep 9, 2017

Yes. We've just moved from vbox+vboxfs to xhyve+NFS and it's really cool. CPU usage has decreased by a factor of four, and response time likewise. It would be really cool if this were automated in minikube and set as the default.
I had some issues with the xhyve installation when resources were available only through a VPN from the host (I solved it using this script: https://gist.github.com/mowings/633a16372fb30ee652336c8417091222).
Another issue was connecting from the xhyve VM to the host by static IP, as in VBox (for now I've solved this only by using VBox host-only adapters). Do you know of any solution that doesn't use VBox adapters? I need access from the VM by static IP.

@georgecrawford For NFS on Mac you should do the following steps:

  1. Set the right settings in /etc/exports on the Mac. We use this: sudo sh -c 'echo "/Users -alldirs -mapall=0:0 $(minikube ip)" > /etc/exports'
  2. Then sudo nfsd restart
  3. Then add an NFS persistent volume to your project (eecue gave an example above).

Enjoy fast i/o.

@possibilities


commented Dec 26, 2017

This is a good workaround, but I think adding NFS as a requirement to use minikube is too much of a barrier. That said, I suffer from the constant crashing as well. I wish I had constructive advice, but my hope is that the current experience remains the status quo and we don't end up with a much harder-to-use tool. Thanks for listening (:

@fejta-bot


commented Mar 26, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@fejta-bot


commented Apr 25, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@huggsboson


commented May 2, 2018

docker-machine-nfs solves this really reliably for docker-machine. I wonder if it's possible to port it to minikube and run it after the fact as something like minikube-nfs.

@huggsboson


commented May 2, 2018

/remove-lifecycle rotten

@huggsboson


commented May 2, 2018

Looks like someone already did it:
https://github.com/mstrzele/minikube-nfs

@huggsboson


commented May 2, 2018

/close

@k8s-ci-robot

Contributor

commented May 2, 2018

@huggsboson: you can't close an active issue unless you authored it or you are assigned to it. Can only assign issues to org members and/or repo collaborators.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@pietervogelaar


commented Jul 19, 2018

I created a blog post about setting up a Minikube NFS mount. This way containers can mount source code on the host machine with great performance! http://pietervogelaar.nl/minikube-nfs-mounts

@bhack


commented Sep 28, 2018

Is 9p also giving problems with os.rename?

@antonmarin


commented Dec 19, 2018

We use NFS too, but it has one caveat: NFS doesn't support inotify events, so Node.js watchers don't work :(
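
A common mitigation (assuming the watcher is chokidar-based, e.g. webpack-dev-server, or uses nodemon) is to switch to polling, since inotify events don't propagate across an NFS mount:

```shell
# Polling fallbacks for file watchers on NFS mounts.
export CHOKIDAR_USEPOLLING=true       # honored by chokidar-based watchers
# nodemon --legacy-watch app.js       # nodemon's polling mode (-L)
echo "CHOKIDAR_USEPOLLING=$CHOKIDAR_USEPOLLING"
```

Polling trades some CPU for correctness, so it's worth scoping the watched directories as narrowly as possible.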

@tstromberg

Contributor

commented Jan 24, 2019

I would love to see documentation suggesting NFS as a more performant alternative for power users, but I hesitate to suggest it as the default unless the user experience is relatively painless (for instance, automatically editing /etc/exports files).

If we could make --nfs an option to mount, then I'd be more than happy to switch it to the default, and fall back to 9p/vbox/etc. if it isn't available for some reason.
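
The proposed fallback could be sketched as follows (note that --nfs is hypothetical here, not an existing minikube flag, and the detection method is only one plausible choice):

```shell
# Prefer NFS when a server binary is present on the host, otherwise
# fall back to 9p. nfsd is the macOS daemon; rpc.nfsd is the Linux one.
if command -v nfsd >/dev/null 2>&1 || command -v rpc.nfsd >/dev/null 2>&1; then
  MOUNT_TYPE=nfs
else
  MOUNT_TYPE=9p
fi
echo "selected mount type: $MOUNT_TYPE"
```
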

@fejta-bot


commented Apr 29, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@pietervogelaar


commented Apr 29, 2019

Just using https://github.com/vapor-ware/ksync is also a great alternative.

@tstromberg tstromberg added the r/2019q2 label May 22, 2019
