Slow transfer speeds under OS X #11
That report is rather long and convoluted. If there's a way to create a simple(!) C program that reads (or writes) data from a file and is slow over sshfs but fast without it, that would much improve the chances of figuring out what happens here. |
not C code, but simple enough to try. |
I get the same speed for rsync and scp. |
@Nikratio is there something I can do on my end with sshfs to provide helpful diagnostic logging information? |
Trying to narrow down the problem (try different servers, and different client versions) is always a good first step. Providing ssh access to a server where you experience the slowdown (so that others can attempt to reproduce the issue) may also help. Finally, you could try running with |
I see this bug is still open but there has been no progress since March last year. Has there been any progress behind the scenes? I experience a similar issue. I've tried two versions of sshfs (2.5-3 and 2.8-1) on F25. It acts as if something is rate-limiting the transfer, as the speed is almost identical (to the kb/s) between tests. I have proven that connectivity and encryption aren't the issue, as transfers work perfectly well (i.e. ~6 MB/s) when I:
Now things get interesting. If I use sshfs and download a file via rsync from the source server to the destination server (i.e. rsync -avhr /mnt/sshfs-MountPoint/file Destination directory), I get ~940 kb/s. However, if I reverse the direction (rsync -avhr source file /mnt/sshfs-MountPoint/sshfs-Destination/), it works perfectly. Someone else seems to have experienced the same issue, but their thread on the Ubuntu forums was closed: https://askubuntu.com/questions/879415/sshfs-speeds-upload-fast-download-is-slow I'm using F25 with Fuse 2.9.7-1, kernel 4.9.6-200. strace output is as follows: Full transfer speed: https://www.dropbox.com/s/z0nybg7803p8832/fasttransfer.txt?dl=0 Please let me know what other information you need. |
Well, I think my message from Mar 15 (right above yours) should answer your question about what information is needed. That said, I unfortunately have very little time for sshfs at the moment so even with that information it may take a while. |
Hi, I believe I have supplied the information requested in your post on Mar 15, i.e.:
I did not offer SSH access as this is a public forum but if anyone would like, they're welcome to contact me directly. |
Hey, wanted to follow up on this issue, since I am also experiencing speed issues, especially when I compare speeds achieved using native sftp and sshfs. Here's my environment:

Local client on 200 MBit/sec cable link: $ sshfs --version

Remote server on university 1 Gigabit link: Debian 9 Server, Linux 4.3.0-1-amd64 #1 SMP Debian 4.3.5-1 (2016-02-06)

I got a directory mounted from the server on my client using:

sshfs root@server:/home/example -p 13448 -o allow_other,ro,kernel_cache,cache=yes,reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,sshfs_sync,no_readahead -o uid=1000 -o gid=1000

Now testing shows, transferring the very same Ubuntu ISO test file: sftp directly: up to 23 MByte/sec -> almost my full 200 MBit downlink bandwidth. I repeated the tests many times so they should be valid. Of course there may be situations when bandwidth varies on both sides due to other factors, but still, I never get more than 8.5 MByte/sec using sshfs when copying the test file.

So is this a confirmed bug, or might there be other options that improve performance? As shown above I am on the Ubuntu default package v2.5 of sshfs. Would a self-compiled latest version help? thanks boba |
Testing different installs, I've found that the problem no longer occurs if I downgrade the OS X install to SSHFS 2.5 + FUSE 2.7.3 / OSXFUSE 2.8.0 (previously at FUSE 3.5.5; I also see it at 3.5.6). |
Could you try to explicitly enable/disable the nodelay workaround ( |
Got sshfs 2.9 from release compiled with ./configure --enable-sshnodelay and tested all 4 options by each time mounting the remote disk and rsync --progress copying a file from remote to local. It remains at a speed about 8 times lower than when I do a straight rsync over ssh (±10 MB/s vs ±80 MB/s) |
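The four variants tested above can be sketched as mount command lines. This is a sketch assuming the sshfs 2.x `-o workaround=LIST` syntax with a colon-separated list of `nodelay` (client side) and `nodelaysrv` (server side); `user@host` and the paths are placeholders, so the commands are only printed here rather than executed:

```shell
# Print the four mount variants: no workaround, client-side nodelay,
# server-side nodelay, and both (option syntax assumed from sshfs 2.x).
for opts in "" "-o workaround=nodelay" "-o workaround=nodelaysrv" \
            "-o workaround=nodelay:nodelaysrv"; do
  echo "sshfs user@host:/remote /mnt/test $opts"
done
```

For each variant one would mount, run `rsync --progress` on the same large file, and unmount before the next test, so the runs are comparable.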
Thanks! What speed do you get with scp? |
About 100-110 MB/s |
It seems that for @boba23 downgrading to sshfs 2.5 did not fix the issue, yet for @ibaun downgrading to sshfs 2.5, FUSE 2.7.3, and OSXFUSE 2.8.0 did fix the issue. @boba23: could you please try to downgrade FUSE/OSXFUSE as well, and see if that fixes the problem? @ibaun: could you please try to downgrade only sshfs, and see if the problem still occurs? |
Besides the different versions of the code used, guess people should also tell the latency of the network link used. Things tend to get slow if there are frequent roundtrips over a (relatively) high latency link. Also, if network throughput is high and latency is high, tcp/ip might need some tuning for optimal performance. |
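The round-trip effect can be put into numbers: if each SFTP request must wait for its reply before the next one goes out, throughput is bounded by (requests in flight × request size) / RTT. The RTT and in-flight count below are illustrative assumptions, not measurements from this thread:

```shell
# Back-of-envelope throughput ceiling for synchronous SFTP requests over
# a high-latency link. Values are assumptions for illustration only.
rtt_ms=20        # round-trip time in milliseconds (assumed)
in_flight=1      # outstanding requests; sshfs pipelines more in practice
req_kib=32       # SFTP spec minimum I/O size, as used by sshfs
awk -v rtt="$rtt_ms" -v n="$in_flight" -v kb="$req_kib" \
    'BEGIN { printf "ceiling: %.1f KiB/s\n", n * kb * 1000 / rtt }'
# → ceiling: 1600.0 KiB/s
```

With these assumed values the ceiling is well below the raw link bandwidth; real sshfs keeps several requests in flight, which raises the bound proportionally, but a larger per-request I/O size or lower latency helps for the same reason.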
I downgraded sshfs to 2.5 using a build from source, but can't seem to get a connection to the server with that version. $ ./sshfs --version Mount is successful, but 'ls' on the mount point hangs indefinitely. (well, not a system wide downgrade, but running from the source dir, does that make a difference?) @ThomasWaldmann this is on a local link, only a local switch in between. If ping is a valid way to measure the latency: |
Grmbl. Ok, can you try to stay at sshfs 2.9 but downgrade osxfuse to 2.8.0 (this should also change the reported FUSE library version)? |
The problem seems to be caused by the osxfuse version. I've downgraded sshfs to 2.5 successfully using a precompiled binary and that didn't help. The only thing that helped was indeed putting osxfuse back to 2.8.0. It seems that the slowdown was introduced between osxfuse 2.8.3 and 3.0.4. 2.8.1 seems to be the fastest; at 2.8.2 the speed is already a little lower, and at 3.0.4 the speed drops completely. To recap: |
Alright, that's progress. So although the symptoms are similar, it seems there are two different problems at play here. @ibaun, could you report your problem at https://github.com/osxfuse/osxfuse/issues? @boba23: you are using Linux, right? So we need to do some further debugging. As a first step, could you confirm that the problem is still present in the most recent sshfs? |
So according to osxfuse/osxfuse#389 maybe the sshfs |
This fix seems to be at least partly done in the source tree (https://github.com/libfuse/sshfs/blob/sshfs_2.x/sshfs.c#L1900 and https://github.com/libfuse/sshfs/blob/sshfs_2.x/sshfs.c#L3112), which check for
At https://github.com/libfuse/sshfs/blob/sshfs_2.x/sshfs.c#L3865, change:
To:
(or combine with one of the other |
Benjamin, could you take a look at this (OS X-specific) proposed change? It seems to increase performance a lot. Thanks! |
@bfleischer ping. Could you take a look? |
As discussed in issue #84, when this is implemented care needs to be taken to avoid a division by zero in sshfs_statfs(). |
I can confirm that this patch (applied to https://github.com/libfuse/sshfs/releases/download/sshfs-2.10/sshfs-2.10.tar.gz as distributed via Homebrew):

diff --git a/sshfs.c b/sshfs.c
index fb566f4..d1c1fb2 100644
--- a/sshfs.c
+++ b/sshfs.c
@@ -3862,7 +3862,11 @@
}
#endif /* __APPLE__ */
- sshfs.blksize = 4096;
+#ifdef __APPLE__
+ sshfs.blksize = 0;
+#else
+ sshfs.blksize = 4096;
+#endif
/* SFTP spec says all servers should allow at least 32k I/O */
sshfs.max_read = 32768;
 sshfs.max_write = 32768;

This fixes the speed issue. Instead of a zero default for macOS, it'd be nice to set it to a sane speedy value and let the

UPDATE: 1048576 (1 MiB) seemed to be giving the best throughput in my tests (on an 802.11n 2.4 GHz WLAN with a single 1.25 GiB file, 13 minutes (before) vs. 2 minutes (after) transfer times. Server was Debian ARM.)
This means that db149d1 didn't resolve the issue for all cases. (Haven't tested the osxfuse version of the patch though — it has one extra hunk: osxfuse/sshfs@5c0dbfe) |
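Putting the numbers above together: once blksize is no longer hard-wired, throughput can be tuned at mount time via osxfuse's `iosize` option. A minimal sketch of the corresponding mount invocation follows; `user@host` and the paths are placeholders, so the command is printed rather than executed:

```shell
# Print the mount command for a 1 MiB I/O size on macOS (osxfuse's
# iosize option; value taken from the reports above, host/paths
# hypothetical).
iosize=1048576   # 1 MiB, the value reported above to give the best throughput
echo "sshfs user@host:/remote /mnt/remote -o iosize=$iosize"
# → sshfs user@host:/remote /mnt/remote -o iosize=1048576
```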
@vszakats I have been experiencing similar throughput issues on multiple macOS Sierra machines connecting to a local SSH server over 2.4 GHz WLAN. I routinely get 6 MB/s+ using scp, but rsync over the local sshfs mount was giving no more than 2.2 MB/s on any machine. I applied the patch you referenced and immediately saw the sshfs speeds matching scp, even without setting

This is also with: |
@bfleischer ping. Could you take a look? |
I am on Ubuntu 16.04 connecting to a Debian Stretch box across the Atlantic.
It seems like linking against libssh2 might be a good solution? |
Any news about this issue? |
Following-up on [1], there was another instance where blksize was set to a non-zero value, thus making it impossible to configure global I/O size on macOS, and using [2] the hard-wired value of 4096 bytes instead, resulting in uniformly poor performance [3]. With this patch, setting I/O size [4] to a reasonable large value, will result in much improved performance, e.g.: -o iosize=1048576 [1] osxfuse/sshfs@5c0dbfe [2] https://github.com/libfuse/sshfs/blob/4c21d696e9d46bebae0a936e2aec72326c5954ea/sshfs.c#L812 [3] libfuse/sshfs#11 (comment) [4] https://github.com/osxfuse/osxfuse/wiki/Mount-options#iosize Closes #44173. Signed-off-by: FX Coudert <fxcoudert@gmail.com>
This is not only a problem on macOS, and it is not an isolated issue. |
This is still a problem, any updates? I also see a factor of 3 to 4 slowdown on macOS 10.15 with sshfs vs rsync when transferring large files. |
* This commit backports libfuse/sshfs commit 667cf34 Original commit message: sshfs: fix another instance preventing use of global I/O size on macOS (macports#185) Following-up on [1], there was another instance where blksize was set to a non-zero value, thus making it impossible to configure global I/O size on macOS, and using [2] the hard-wired value of 4096 bytes instead, resulting in uniformly poor performance [3]. With this patch, setting I/O size to a reasonable large value, will result in much improved performance, e.g.: -o iosize=1048576 [1] osxfuse/sshfs@5c0dbfe [2] https://github.com/libfuse/sshfs/blob/4c21d696e9d46bebae0a936e2aec72326c5954ea/sshfs.c#L812 [3] libfuse/sshfs#11 (comment)
See there: borgbackup/borg#664