
Slow transfer speeds under OS X #11

Open
ThomasWaldmann opened this issue Mar 11, 2016 · 33 comments
Labels: bug, MacOS (Issues affecting only MacOS)

Comments

@ThomasWaldmann

See there: borgbackup/borg#664

@Nikratio
Contributor

That report is rather long and convoluted. If there's a way to create a simple(!) C program that reads (or writes) data from a file and is slow over sshfs but fast without it, that would much improve the chances of figuring out what happens here.

@Nikratio Nikratio added the bug label Mar 12, 2016
@ThomasWaldmann
Author

borgbackup/borg#664 (comment)

not C code, but simple enough to try.

@Nikratio
Contributor

I get the same speed for rsync and scp.

$ scp ebox:bigfile .
bigfile                                              100% 5120KB 640.0KB/s   00:08    
$ sshfs ebox: mnt/
$ rsync -ah --progress mnt/bigfile dst/
sending incremental file list
bigfile
          5.24M 100%  662.07kB/s    0:00:07 (xfr#1, to-chk=0/1)

@chebee7i

@Nikratio is there something I can do on my end with sshfs to provide helpful diagnostic logging information?

@Nikratio
Contributor

Trying to narrow down the problem (try different servers, and different client versions) is always a good first step. Providing ssh access to a server where you experience the slowdown (so that others can attempt to reproduce the issue) may also help. Finally, you could try running with strace -T -tt (worst), with profiling (the -pg compile option; better), or with oprofile (best). Ideally, you would gather two profiles: one where the problem appears, and one where it does not.

@ThisIsTheOnlyUsernameAvailable

ThisIsTheOnlyUsernameAvailable commented Feb 6, 2017

I see this bug is still open but no progress since March last year. Has there been any progress behind the scenes?

I experience a similar issue. I've tried two versions of sshfs (2.5-3 and 2.8-1) on F25. It acts as if something is rate-limiting the transfer, as the speed is almost identical (to the kB/s) between tests. I have proven that connectivity and encryption aren't an issue, as transfers work perfectly well (i.e. ~6 MB/s) when I:

  • SCP from source server to destination server
  • SCP from destination server to source server
  • SFTP from source server to destination server
  • SFTP from destination server to source server
  • Rsync over SSH from source server to destination server
  • Rsync over SSH from destination server to source server

Now things get interesting. If I use SSHFS and download a file via rsync from the source server to the destination server (i.e. rsync -avhr /mnt/sshfs-MountPoint/file destination-directory/), I get ~940 kB/s. However, if I reverse the direction (rsync -avhr source-file /mnt/sshfs-MountPoint/sshfs-Destination/), it works perfectly.

There was someone else who seems to have experienced the same issue; however, their thread on the Ubuntu forums has been closed: https://askubuntu.com/questions/879415/sshfs-speeds-upload-fast-download-is-slow

I'm using F25 with Fuse 2.9.7-1, kernel 4.9.6-200.

Strace output is as follows:

Full transfer speed: https://www.dropbox.com/s/z0nybg7803p8832/fasttransfer.txt?dl=0
Slow transfer speed: https://www.dropbox.com/s/9jjwho1179uwvqi/slowtransfer.txt?dl=0

Please let me know what other information you need.

@Nikratio
Contributor

Nikratio commented Feb 6, 2017

Well, I think my message from Mar 15 (right above yours) should answer your question about what information is needed. That said, I unfortunately have very little time for sshfs at the moment so even with that information it may take a while.

@ThisIsTheOnlyUsernameAvailable

Hi,

I believe I have supplied the information requested in your post on Mar 15. ie:

  • Multiple clients and servers
  • Multiple versions of sshfs
  • Two strace profiles (one where I have the problem and one where I don't)

I did not offer SSH access as this is a public forum but if anyone would like, they're welcome to contact me directly.

@boba23

boba23 commented Feb 23, 2017

Hey,

wanted to follow up on this issue, since I am also experiencing speed problems, especially when I compare speeds achieved using native sftp and sshfs. Here's my environment:

Local client on 200MBit/sec cable link

$ sshfs --version
SSHFS version 2.5
FUSE library version: 2.9.4
fusermount version: 2.9.4
using FUSE kernel interface version 7.19
Ubuntu Server 16.04.2, Linux 4.4.0-62-generic
OpenSSH_7.2p2 Ubuntu-4ubuntu2.1, OpenSSL 1.0.2g 1 Mar 2016

Remote Server on university 1 Gigabit Link:

Debian 9 Server, Linux 4.3.0-1-amd64 #1 SMP Debian 4.3.5-1 (2016-02-06)
OpenSSH_7.4p1 Debian-6, OpenSSL 1.0.2k 26 Jan 2017

I got a directory mounted from the server on my client using:

sshfs root@server:/home/example -p 13448 -o allow_other,ro,kernel_cache,cache=yes,reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,sshfs_sync,no_readahead -o uid=1000 -o gid=1000

Now testing shows, transferring the very same ubuntu iso test file:

sftp directly: up to 23 MByte/sec -> almost my full 200 MBit downlink bandwidth
sshfs copy: max 8.5 MByte/sec -> less than half of native sftp speed

I repeated the tests many times, so they should be valid. Of course bandwidth may vary on both sides due to other factors, but still: I never get more than 8.5 MByte/sec using sshfs to copy the test file.

So is this a confirmed bug, or might there be other options that improve performance?

As shown above, I am on the Ubuntu default package (v2.5) of sshfs. Would a self-compiled latest version help?

thanks

boba

@ibaun

ibaun commented Mar 28, 2017

Testing different installs, I've found that the problem no longer occurs if I downgrade the OS X install to SSHFS 2.5 + FUSE 2.7.3 / OSXFUSE 2.8.0 (previously at FUSE 3.5.5; I also see the problem at 3.5.6).

@Nikratio
Contributor

Nikratio commented Jun 5, 2017

Could you explicitly enable/disable the nodelay workaround (-o workaround=[no]nodelaysrv,[no]nodelay, 4 combinations) and report whether that affects performance? (You may have to recompile with --enable-sshnodelay.)

@ibaun

ibaun commented Jul 6, 2017

Got sshfs 2.9 from release
$ ./sshfs --version
SSHFS version 2.9
OSXFUSE 3.6.0
FUSE library version: 2.9.7
fuse: no mount point

compiled with ./configure --enable-sshnodelay and tested all 4 combinations, each time mounting the remote disk and copying a file from remote to local with rsync --progress. The speed remains about 8 times lower than a straight rsync over ssh (±10 MB/s vs. ±80 MB/s).

@Nikratio
Contributor

Nikratio commented Jul 6, 2017

Thanks! What speed do you get with scp?

@ibaun

ibaun commented Jul 6, 2017

About 100-110 MB/s

@Nikratio
Contributor

Nikratio commented Jul 6, 2017

It seems that for @boba23 downgrading to sshfs 2.5 did not fix the issue, yet for @ibaun downgrading to sshfs 2.5, FUSE 2.7.3, and OSXFUSE 2.8.0 did fix the issue.

@boba23: could you please try to downgrade FUSE/OSXFUSE as well, and see if that fixes the problem?

@ibaun: could you please try to downgrade only sshfs, and see if the problem still occurs?

@ThomasWaldmann
Author

Besides the different versions of the code used, I guess people should also report the latency of the network link used. Things tend to get slow if there are frequent round trips over a (relatively) high-latency link.

Also, if both network throughput and latency are high, TCP/IP might need some tuning for optimal performance.

@ibaun

ibaun commented Jul 6, 2017

I downgraded sshfs to 2.5 using a build from source, but can't seem to get a connection to the server with that version.

$ ./sshfs --version
SSHFS version 2.5
OSXFUSE 3.6.0
FUSE library version: 2.9.7
fuse: no mount point

Mount is successful, but 'ls' on the mount point hangs indefinitely.

(Well, it's not a system-wide downgrade, just running from the source dir; does that make a difference?)

@ThomasWaldmann this is on a local link, only a local switch in between. If ping is a valid way to measure the latency:
$ ping 192.168.0.170
PING 192.168.0.170 (192.168.0.170): 56 data bytes
64 bytes from 192.168.0.170: icmp_seq=0 ttl=64 time=0.446 ms
64 bytes from 192.168.0.170: icmp_seq=1 ttl=64 time=0.227 ms
64 bytes from 192.168.0.170: icmp_seq=2 ttl=64 time=0.231 ms
64 bytes from 192.168.0.170: icmp_seq=3 ttl=64 time=0.180 ms

@Nikratio
Contributor

Nikratio commented Jul 6, 2017

Grmbl. OK, can you try to stay at sshfs 2.9 but downgrade osxfuse to 2.8.0 (this should also change the reported FUSE library version)?

@ibaun

ibaun commented Jul 7, 2017

The problem seems to be caused by the osxfuse version. I downgraded sshfs to 2.5 successfully using a precompiled binary, and that didn't help. The only thing that helped was indeed putting osxfuse back to 2.8.0.

It seems the slowdown was introduced between osxfuse 2.8.3 and 3.0.4: 2.8.1 seems to be the fastest, at 2.8.2 the speed is already a little lower, and at 3.0.4 the speed drops completely.

To recap:

  • scp speed : 110 MB/s
  • rsync from sshfs 2.9.0 + osxfuse 2.8.1: ± 55 MB/s
  • rsync from sshfs 2.9.0 + osxfuse 2.8.2: ± 50 MB/s
  • rsync from sshfs 2.9.0 + osxfuse 2.8.3: ± 50 MB/s
  • rsync from sshfs 2.9.0 + osxfuse 3.0.4: ± 10 MB/s

@Nikratio
Contributor

Nikratio commented Jul 7, 2017

Alright, that's progress. So although the symptoms are similar, it seems there are two different problems at play here. @ibaun, could you report your problem at https://github.com/osxfuse/osxfuse/issues? @boba23: you are using Linux, right? So we need to do some further debugging. As a first step, could you confirm that the problem is still present in the most recent sshfs?

@ibaun

ibaun commented Jul 10, 2017

So, according to osxfuse/osxfuse#389, maybe the sshfs st_blksize should be configurable? I recompiled sshfs 2.9 with sshfs.blksize = 4096*1024; instead of 4096, and I'm at 55 MB/s using osxfuse 3.4.0. Setting sshfs.blksize = 0 in the code and using -o iosize=8388608 brings me back to ±105 MB/s, which is fantastic. Not sure what other effects this iosize would have, though.

@coderforlife

coderforlife commented Jul 31, 2017

This fix seems to be at least partly done in the source tree (https://github.com/libfuse/sshfs/blob/sshfs_2.x/sshfs.c#L1900 and https://github.com/libfuse/sshfs/blob/sshfs_2.x/sshfs.c#L3112), which checks for __APPLE__ and sets the blksize of the struct stat to 0. Those changes improve performance quite a bit (from ~300 kB/s to 2.5 MB/s), but with one more change it can be doubled again (to 5.25 MB/s+, which is just about what I get with sftp directly):

At https://github.com/libfuse/sshfs/blob/sshfs_2.x/sshfs.c#L3865 change:

sshfs.blksize = 4096;

To:

#ifdef __APPLE__
    sshfs.blksize = 0;
#else
    sshfs.blksize = 4096;
#endif

(or combine with one of the other __APPLE__ checks).

@Nikratio Nikratio changed the title from "sshfs slowness?" to "Slow transfer speeds under OS X" on Jul 31, 2017
@Nikratio
Contributor

Nikratio commented Aug 3, 2017

Benjamin, could you take a look at this (OS-X specific) proposed change? It seems to increase performance a lot. Thanks!

@Nikratio
Contributor

@bfleischer ping. Could you take a look?

@Nikratio
Contributor

Nikratio commented Oct 1, 2017

As discussed in issue #84, when this is implemented care needs to be taken to avoid a division by zero in sshfs_statfs().
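The hazard is straightforward: once blksize may legitimately be 0 (the macOS workaround above), any bytes-to-blocks arithmetic in sshfs_statfs() must substitute a nonzero divisor first. A hedged sketch of the guard (the names and the 512-byte fallback are illustrative, not the actual sshfs code):

```c
/* Illustrative guard for the statfs division-by-zero pitfall:
 * when the advertised block size can be 0, fall back to a nonzero
 * divisor before converting a byte count to a block count. */
#include <stdint.h>

uint64_t bytes_to_blocks(uint64_t bytes, uint64_t blksize)
{
    if (blksize == 0)
        blksize = 512;  /* assumed fallback; avoids division by zero */
    return (bytes + blksize - 1) / blksize;  /* round up */
}
```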

@vszakats
Contributor

vszakats commented Oct 25, 2017

I can confirm that this patch (applied to https://github.com/libfuse/sshfs/releases/download/sshfs-2.10/sshfs-2.10.tar.gz as distributed via Homebrew):

diff --git a/sshfs.c b/sshfs.c
index fb566f4..d1c1fb2 100644
--- a/sshfs.c
+++ b/sshfs.c
@@ -3862,7 +3862,11 @@
 	}
 #endif /* __APPLE__ */
 
-	sshfs.blksize = 4096;
+#ifdef __APPLE__
+	sshfs.blksize = 0;
+#else
+	sshfs.blksize = 4096;
+#endif
 	/* SFTP spec says all servers should allow at least 32k I/O */
 	sshfs.max_read = 32768;
 	sshfs.max_write = 32768;

with this sshfs option:

-o iosize=1048576

fixes the speed issue.

Instead of a zero default for macOS, IMO it'd be nice to set it to a sane, speedy value and let the iosize option work at all times. This would make it work as expected by default and would still let the value be fine-tuned at runtime.

UPDATE: 1048576 (1 MiB) seemed to give the best throughput in my tests (on an 802.11n 2.4 GHz WLAN with a single 1.25 GiB file: 13 minutes (before) vs. 2 minutes (after) transfer times; the server was a Debian ARM box).

$ sshfs -V
SSHFS version 2.10
OSXFUSE 3.7.1
FUSE library version: 2.9.7
fuse: no mount point

This means that db149d1 didn't resolve the issue for all cases. (Haven't tested the osxfuse version of the patch though — it has one extra hunk: osxfuse/sshfs@5c0dbfe)

@niharG8

niharG8 commented Oct 31, 2017

@vszakats I have been experiencing similar throughput issues on multiple macOS Sierra machines connecting to a local SSH server over 2.4 GHz WLAN. I routinely get 6 MB/s+ using scp, but rsync over the local SSHFS mount was giving no more than 2.2 MB/s on any machine. I applied the patch you referenced and immediately saw the SSHFS speeds match SCP, even without setting -o iosize.

This is also with:

$ sshfs -V
SSHFS version 2.10
OSXFUSE 3.7.1
FUSE library version: 2.9.7
fuse: no mount point

niharG8 added a commit to niharG8/sshfs that referenced this issue Oct 31, 2017
@axeII

axeII commented Apr 13, 2018

@vszakats @niharG8 Help... the patch did not work for me; still slow transfer speed (400 kB/s instead of 2 MB/s).

$ sshfs -V
SSHFS version 2.10.0
OSXFUSE 3.7.1
FUSE library version: 2.9.7
fuse: no mount point

@Nikratio
Contributor

@bfleischer ping. Could you take a look?

@ccope
Contributor

ccope commented Apr 25, 2018

I am on Ubuntu 16.04 connecting to a Debian Stretch box across the Atlantic.

$ uname -r
4.13.0-38-generic
# the remote host is 4.15.11
$ ping remotehost
...time=156 ms
$ sshfs -V
SSHFS version 3.3.1
FUSE library version 3.2.1
using FUSE kernel interface version 7.26
fusermount3 version: 3.2.1
# the test file is 38 MB of incompressible data; larger files have the same performance
# EDIT: with sshfs. Larger files perform even better with libssh2: 7.5x faster than sshfs!
$ time cp /mnt/sshfs/test /tmpfs/test
real    0m21.118s
user    0m0.010s
sys     0m0.096s
$ time scp remotehost:test /tmpfs/test
real    0m8.175s
user    0m0.310s
sys     0m0.456s
$ time sftp_using_libssh2.py remotehost:test # using the ssh2-python module
Finished file read in 0:00:04.816173

It seems like linking against libssh2 might be a good solution?

@esilvaju

Any news about this issue?

Nikratio pushed a commit that referenced this issue Sep 12, 2019
sshfs: fix another instance preventing use of global I/O size on macOS (#185)

Following-up on [1], there was another instance where blksize
was set to a non-zero value, thus making it impossible to
configure global I/O size on macOS, and using [2] the hard-wired
value of 4096 bytes instead, resulting in uniformly poor
performance [3].

With this patch, setting I/O size to a reasonable large value,
will result in much improved performance, e.g.:
  -o iosize=1048576

[1] osxfuse/sshfs@5c0dbfe
[2] https://github.com/libfuse/sshfs/blob/4c21d696e9d46bebae0a936e2aec72326c5954ea/sshfs.c#L812
[3] #11 (comment)
fxcoudert pushed a commit to Homebrew/homebrew-core that referenced this issue Sep 13, 2019
Following-up on [1], there was another instance where blksize
was set to a non-zero value, thus making it impossible to
configure global I/O size on macOS, and using [2] the hard-wired
value of 4096 bytes instead, resulting in uniformly poor
performance [3].

With this patch, setting I/O size [4] to a reasonable large
value, will result in much improved performance, e.g.:
  -o iosize=1048576

[1] osxfuse/sshfs@5c0dbfe
[2] https://github.com/libfuse/sshfs/blob/4c21d696e9d46bebae0a936e2aec72326c5954ea/sshfs.c#L812
[3] libfuse/sshfs#11 (comment)
[4] https://github.com/osxfuse/osxfuse/wiki/Mount-options#iosize

Closes #44173.

Signed-off-by: FX Coudert <fxcoudert@gmail.com>
@Optiligence

This is not only a problem on macOS, and it is also not an isolated issue.
In fact, in my experience, the only time copying a large file can sustain very high network throughput is over a very low-latency connection.
400 Mbps connection with 20 ms ping → 10 MiB/s max.
Is there anything that can be tweaked on other OSes? (The iosize option seems to be osxfuse-only.)

@taldcroft

This is still a problem; any updates? I also see a factor of 3 to 4 slowdown on macOS 10.15 with sshfs vs. rsync when transferring large files.

(ska3) ➜  repro5_issues sshfs -V
SSHFS version 2.5 (OSXFUSE SSHFS 2.5.0)
OSXFUSE 3.10.4
FUSE library version: 2.9.7
fuse: no mount point

mainka pushed a commit to mainka/macports-ports that referenced this issue Jun 23, 2021
* This commit backports libfuse/sshfs commit 667cf34

Original commit message:

    sshfs: fix another instance preventing use of global I/O size on
    macOS (macports#185)

    Following-up on [1], there was another instance where blksize was
    set to a non-zero value, thus making it impossible to configure
    global I/O size on macOS, and using [2] the hard-wired value of
    4096 bytes instead, resulting in uniformly poor performance [3].

    With this patch, setting I/O size to a reasonable large value,
    will result in much improved performance, e.g.:
      -o iosize=1048576

    [1] osxfuse/sshfs@5c0dbfe
    [2] https://github.com/libfuse/sshfs/blob/4c21d696e9d46bebae0a936e2aec72326c5954ea/sshfs.c#L812
    [3] libfuse/sshfs#11 (comment)
drkp pushed a commit to macports/macports-ports that referenced this issue Jun 26, 2021
* This commit backports libfuse/sshfs commit 667cf34

Original commit message:

    sshfs: fix another instance preventing use of global I/O size on
    macOS (#185)

    Following-up on [1], there was another instance where blksize was
    set to a non-zero value, thus making it impossible to configure
    global I/O size on macOS, and using [2] the hard-wired value of
    4096 bytes instead, resulting in uniformly poor performance [3].

    With this patch, setting I/O size to a reasonable large value,
    will result in much improved performance, e.g.:
      -o iosize=1048576

    [1] osxfuse/sshfs@5c0dbfe
    [2] https://github.com/libfuse/sshfs/blob/4c21d696e9d46bebae0a936e2aec72326c5954ea/sshfs.c#L812
    [3] libfuse/sshfs#11 (comment)