NFS over UDP causes problems #2304

Closed
BreiteSeite opened this issue Oct 2, 2013 · 5 comments
@BreiteSeite

Environment:

  • Host: Debian 6.0.7 (3.2.0-0.bpo.3-amd64)
  • Vagrant 1.3.3
  • Guest: Debian 7.1 (3.2.0-4-amd64)

NFS over UDP causes transfers to stall. This happens when we unpack Zend Framework 2 inside the guest onto the NFS mount.

While I agree that packet loss isn't an issue on the local link, fragmentation is. This is documented in the nfs(5) man page in Debian Wheezy under the section "TRANSFER METHODS", sub-section "Using NFS over UDP on high-speed links".

It strongly recommends using TCP instead of UDP, because TCP has far fewer issues.

One workaround documented in the man page is to enable jumbo frames by setting the MTU of the interface NFS uses to 9000. While this solved our initial problem (stalling file transfers), it introduces more headaches:

  1. We can't roll out this MTU setting in the base boxes, because the adapter used for NFS is created via the Vagrantfile (see the sketch after this list).
  2. Configuring the interface (or rather, its MTU) in Puppet is tricky, because we don't know which interface is used for NFS.
  3. Even if we somehow manage to configure the MTU in Puppet/Packer, the issue still exists, because the workaround only reduces the probability of the problem rather than fixing it.
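
To illustrate point 1: the closest thing to rolling the MTU out ourselves is setting it from the Vagrantfile with a shell provisioner. This is only a minimal sketch; it assumes the private network used for NFS shows up as eth1 in the guest, which is a guess that varies per box:

```ruby
# Vagrantfile (sketch) -- raise the MTU on the guest's NFS interface.
# Assumption: the host-only adapter Vagrant creates for NFS appears
# as eth1 in the guest; adjust the device name for your box.
Vagrant.configure("2") do |config|
  config.vm.network :private_network, ip: "192.168.50.4"
  config.vm.synced_folder ".", "/vagrant", nfs: true

  # The NFS share is already mounted by the time provisioners run,
  # so this changes the MTU on the live link.
  config.vm.provision :shell,
    inline: "ip link set dev eth1 mtu 9000"
end
```

Note that the host side of the link (the vboxnet interface on the host) needs its MTU raised as well, which a Vagrantfile cannot do for you.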

So I recommend reverting c0404e3 (#1706) and making TCP the default. The NFS documentation itself recommends it, and it greatly improves reliability.
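
Until the default changes, a possible stopgap is to request TCP explicitly. Again only a sketch, and it assumes the Vagrant version in use passes mount_options through to the NFS mount command:

```ruby
# Vagrantfile (sketch) -- ask the guest to mount the share over TCP.
# Assumption: mount_options is forwarded to `mount -o` for NFS
# synced folders in this Vagrant version.
Vagrant.configure("2") do |config|
  config.vm.network :private_network, ip: "192.168.50.4"
  config.vm.synced_folder ".", "/vagrant",
    nfs: true,
    mount_options: ["tcp", "vers=3"]
end
```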

@mitchellh
Contributor

Yes, I saw in the docs that UDP was not recommended. But for local transfers I think preferring speed is ideal here. I think documentation plus an option to disable the UDP transport would be best. I've marked this as an enhancement to try to get into Vagrant 1.4.

Thanks!

@BreiteSeite
Author

Hey mitchellh, thanks for the positive reply.

But I really hope you change your mind and prefer stability over performance. Also, are there any benchmarks showing the performance gain of UDP? We found that without jumbo frames (and a 1500-byte MTU is the default in every distribution, I believe) you very easily run into "bad checksum" errors, which can quickly eat up the performance benefit and even make UDP perform worse than TCP (because NFS over UDP has to retransmit whole requests).

Also, while I think Vagrant is a great product, I've noticed the quality assurance is... well... not optimal. I tried to roll out some of the latest versions of Vagrant in our company, but most of the time a new release had a bug that turned out to be a showstopper. IMHO this release qualifies again.

What I'm trying to say is that debugging this sort of nasty bug can really drive up the cost of "using Vagrant" in a company, which I think we (you) should avoid as hard as we (you) can. A sudden freeze of every operation on your mount because you wrote a 1536-byte file leads to unnecessary debugging sessions and frustrated coworkers.

Also, if we know this is so unreliable that we have to document it, why can't we (you) make the sane default TCP, which we know is reliable and which various documentation strongly recommends?

I hope you're not offended; that's not my intention. I think making this an option is really the right way to go; it's just the default (UDP) that's questionable.

Thank you very much for your time.

@cromulus

Slightly OT, but @BreiteSeite, would you mind sharing how you set up jumbo frames for the NFS interfaces? I'm interested, largely because NFS performance is the key factor in my use of Vagrant.

Additionally, when you use jumbo frames with VirtualBox, you must ensure you've got the right chipset and NIC:

"VirtualBox also has limited support for so-called jumbo frames, i.e. networking packets with more than 1500 bytes of data, provided that you use the Intel card virtualization and bridged networking. In other words, jumbo frames are not supported with the AMD networking devices; in those cases, jumbo packets will silently be dropped for both the transmit and the receive direction. Guest operating systems trying to use this feature will observe this as a packet loss, which may lead to unexpected application behavior in the guest. This does not cause problems with guest operating systems in their default configuration, as jumbo frames need to be explicitly enabled"

http://www.virtualbox.org/manual/ch06.html
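
In Vagrantfile terms, that means forcing one of the Intel NIC models on the adapter that carries NFS traffic. A minimal sketch, assuming the NFS private network is the VM's second adapter (hence --nictype2); note the manual above also mentions bridged networking, so whether this helps on a host-only adapter is unclear:

```ruby
# Vagrantfile (sketch) -- use an Intel NIC model so VirtualBox doesn't
# silently drop jumbo frames (the AMD models drop them, per the manual).
# Assumption: the NFS private network is adapter 2 on this VM.
Vagrant.configure("2") do |config|
  config.vm.network :private_network, ip: "192.168.50.4"
  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--nictype2", "82540EM"]
  end
end
```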

@JasonGiedymin

I would try another Vagrant provider, like VMware or LXC. I experience a high rate of issues with VirtualBox.

@mitchellh
Contributor

Fixed; added the nfs_udp option along with docs.
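
For anyone finding this later, usage should look roughly like the sketch below (based on the option name above; check the synced folder docs for your Vagrant version):

```ruby
# Vagrantfile (sketch) -- disable UDP so the NFS share mounts over TCP.
Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", "/vagrant",
    nfs: true,
    nfs_udp: false
end
```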
