
Improve transfer speed #2111

Closed
sivachandran opened this issue Dec 21, 2015 · 17 comments
Labels
topic/perf Performance

Comments

@sivachandran
Contributor

I've been experimenting with go-ipfs and found that IPFS transfers are slow and do not scale up when the same content is available on multiple peers.

I performed the tests on AWS EC2 instances within the same region. I used m4.large instances, which have 8 GB of RAM, 2 vCPUs and moderate network bandwidth, running Debian Jessie. The iperf tool reported a consistent 545 Mbps TCP bandwidth between the instances. The IPFS version used was 0.3.11-dev.

Copying a 100 MB and a 1 GB file through scp took 1.5 s and 17 s respectively, whereas the same transfers took 16 s and 170 s over IPFS. The numbers are averages over multiple test runs. The command used to measure the transfer speed was "time ipfs cat | sha1sum". Piping the output to sha1sum is only to make sure I am getting the correct content; hashing took less than 500 ms when performed separately or when the data was already present locally in IPFS. There was also no improvement in transfer speed when I replicated the files on more than one source peer.

You can find the network packet capture of the 100 MB file transfer here: https://dl.dropboxusercontent.com/u/4925384/ipfs-ec2-100m.pcap.zip. Let me know if you need any other information.
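For anyone reproducing the measurement, a minimal sketch of the procedure described above (the hash placeholder is whatever ipfs add prints on the source instance; a running daemon on both instances is assumed):

    # on the source instance: create and add a 100 MB test file
    dd if=/dev/urandom of=test-100m bs=1M count=100
    ipfs add test-100m            # note the hash it prints

    # on the receiving instance: time the fetch, verifying content via sha1sum
    time ipfs cat <hash> | sha1sum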

@daviddias daviddias added the topic/perf Performance label Jan 2, 2016
@Ape

Ape commented Jan 10, 2016

I also did some transfer speed benchmarks between my two servers in different data centers using lighttpd + wget as a reference. The connection was warmed up by downloading other files with IPFS first.

Downloading a 373M binary file:

  • wget: 12.6 MB/s
  • ipfs get (direction A): 9.10 MB/s
  • ipfs get (direction B): 2.07 MB/s

I think there should be room for improving the speed since the peers are directly connected to each other.

EDIT: This might not be directly related to transfer speed, but I also did some latency benchmarks. I used ipfs cat with very small and unique files that I added on the other server. Also, in this case the IPFS connection was "warmed up" with other files.

  • ping: 33.018 ms
  • ipfs ping: 32.93 ms
  • ipfs cat (direction A): 151 ms
  • ipfs cat (direction B): 443 ms

The latency is quite good, but there was this weird asymmetry between the directions. The asymmetry seems to be very consistent and repeatable between these two peers.

EDIT2: I also ran the file transfer speed benchmark in the other direction and it was much better (9.10 MB/s)! Both peers are running go-ipfs 0.3.10.

@jbenet
Member

jbenet commented Jan 10, 2016

These are great to have! If you could contribute some scripts to rerun the benchmarks (ideally with hostnames/keys pluggable as variables), it would really help us improve the performance.
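Something along these lines, perhaps (a rough sketch only; HOST and FILE are placeholder variables, and passwordless ssh to the remote peer is assumed):

    #!/bin/sh
    # add FILE on the remote HOST, then time fetching it on this node
    HOST=$1     # hostname of the peer that will serve the file
    FILE=$2     # path of the test file on that host
    HASH=$(ssh "$HOST" ipfs add -q "$FILE")
    time ipfs cat "$HASH" | sha1sum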

And yes, the transfers right now are comparatively pretty bad, due to silly system inefficiencies all over the place. From here there is a lot of optimization work to get to the speed of the link itself.

@jbenet
Member

jbenet commented Jan 10, 2016

cc @whyrusleeping

@Ape

Ape commented Jan 10, 2016

I did some additional testing, and the asymmetry between the IPFS transfer directions seems to be consistently repeatable, while wget is consistently just as fast in both directions (and always faster than IPFS).

@jbenet
Member

jbenet commented Jan 10, 2016

@Ape could you try these also with 0.4.0-dev and utp? (I'm curious; I expect it to be worse per-case and better amortized over many connections, but it may be better.)

@Ape

Ape commented Jan 10, 2016

Here are the results for 0.4.0-dev (commit 1c1f9c6):

  • latency (direction A): 85.5 ms ± 6.45 ms
  • latency (direction B): 121 ms ± 8.89 ms
  • throughput (direction A): 6.05 MB/s ± 3.31 MB/s (best 11.3 MB/s)
  • throughput (direction B): 4.41 MB/s ± 0.807 MB/s (best 5.33 MB/s)

0.4.0 seems to be better, especially on latency. As you expected, there was a lot of variation in transfer speeds now. This is the same test as I posted earlier, and the network conditions are the same.

ipfs id says that my addresses are /ip4/*/tcp/* and /ip6/*/tcp/*. Should utp appear here, or do I have to enable it manually?

There is still the same asymmetry visible. The network connection should be symmetric, and the wget results prove it is, but the host computers are not equal. One machine has a significantly faster CPU, more RAM and an SSD. Direction A means the slow machine sends and the fast machine downloads; direction B is the opposite. I will run more benchmarks with two fast machines if I can.

@Kubuxu
Member

Kubuxu commented Jan 10, 2016

You probably have to change it in .ipfs/config under SwarmAddresses, from .../tcp/... to .../utp/..., or have both.

That is my guess.
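A guess at what that would look like (assuming the config key is Addresses.Swarm; the exact utp multiaddr form may differ by version, so check your own default config first):

    # with the daemon stopped, add a utp listen address alongside the tcp ones
    ipfs config --json Addresses.Swarm \
      '["/ip4/0.0.0.0/tcp/4001", "/ip4/0.0.0.0/udp/4002/utp", "/ip6/::/tcp/4001"]'
    # restart the daemon and check which addresses it now advertises
    ipfs daemon &
    ipfs id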

@Ape

Ape commented Jan 10, 2016

Okay, some more tests. This time I swapped the slower host for a somewhat faster one. I think the throughput results were previously CPU-bound (at least in direction B). They still might be, but not as badly.

For some unknown reason, the latency results aren't as good as before. The underlying network latency should be very similar to the previous tests; ping gives me 32.8 ms this time.

  • latency (direction A): 311 ms ± 13.6 ms
  • latency (direction B): 194 ms ± 32.2 ms
  • throughput (direction A): 11.5 MB/s ± 0.283 MB/s (best 11.7 MB/s)
  • throughput (direction B): 7.60 MB/s ± 1.31 MB/s (best 9.10 MB/s)

Some conclusions about all my tests:

  • I cannot really show that 0.4.0-dev is better or worse than 0.3.10.
    • 0.4.0 might be a bit less CPU or disk bound.
  • You can expect around 100 - 500 ms latency on a 100 Mbit network with ~30 ms RTT.
  • It's possible to achieve >11 MB/s throughput on a 100 Mbit network.
    • But CPU usage or other factors may bring it down to as low as 2 MB/s in some cases.

@Ape

Ape commented Jan 10, 2016

One more test result. I spawned two IPFS 0.4.0-dev instances on one computer (the one that was the faster machine in the previous tests) and connected one instance directly to the other over a localhost TCP connection. In this scenario I was able to achieve 35.5 MB/s ± 2.4 MB/s throughput. So my faster CPU (i5-4670K) can handle at least that. (But this would not be enough for a gigabit connection.)
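For reference, a rough sketch of one way to run a second daemon like this (the repo path and port numbers are arbitrary placeholders, and config keys may vary slightly by version):

    # initialize a second repo next to the default one, on non-default ports
    export IPFS_PATH=$HOME/.ipfs2
    ipfs init
    ipfs config --json Addresses.Swarm '["/ip4/127.0.0.1/tcp/4101"]'
    ipfs config Addresses.API /ip4/127.0.0.1/tcp/5101
    ipfs config Addresses.Gateway /ip4/127.0.0.1/tcp/8180
    ipfs daemon &

    # connect the second daemon to the default one over loopback and fetch a file
    ipfs swarm connect /ip4/127.0.0.1/tcp/4001/ipfs/<peer-id-of-default-daemon>
    time ipfs cat <hash> | sha1sum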

@whyrusleeping
Member

@Ape by 11.5 MB/s, you mean close to 100 Mbit? As in, close to maxing out the connection?

ipfs has quite a few performance bottlenecks that I am aware of, and they are on my list to fix. Once we kick 0.4.0 out the door I will start on a few of them to reduce lookup latencies, and then I will begin looking at making bitswap transfers more efficient.

@Ape if you're interested, I was in the middle of setting up some automated tests for varying network conditions using Docker and Linux traffic control tools. If you want to help out, having some automation written towards that end would be fantastic.

@Ape

Ape commented Jan 10, 2016

by 11.5 MB/s, you mean close to 100 Mbit? As in, close to maxing out the connection?

Yes, I was able to max out the connection and transfer the data with quite minimal overhead in some of my tests.

@whyrusleeping I am interested in this automated testing with simulated network conditions. Please link any material you have so far.

The main difficulty is that on 100 Mbit connections the network is often not the bottleneck; CPU and disk performance affect the results significantly. Also, latency testing is hard to do properly without a huge network.

@whyrusleeping
Member

@Ape sweet. I made a script that uses the Linux traffic control tool tc to set up latencies, bandwidth limits, and packet loss. It's a bit of a hack, as it affects all Docker containers on the system equally, so I usually run it in a VM with Docker installed. I tried filing a PR against Docker's libnetwork to give myself an easier way to reference a specific container, by finding the interface names associated with each container (currently they're randomly assigned on container creation and impossible to query).

From there I got bored of having a shell script, so I rewrote it as a small Go package and made a small, semi-automated test with it here: https://github.com/whyrusleeping/ipfs-network-tests. It's not super useful yet, but I think a lot of the groundwork is complete and moving forward should be easier.
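For anyone curious, the kind of tc shaping I mean looks roughly like this (not the exact script; interface name and numbers are illustrative, and it needs root):

    IFACE=eth0
    # add latency, jitter, and packet loss with netem
    tc qdisc add dev $IFACE root handle 1:0 netem delay 30ms 5ms loss 0.1%
    # chain a token bucket filter under it to cap bandwidth
    tc qdisc add dev $IFACE parent 1:1 handle 10: tbf rate 100mbit burst 32kbit latency 400ms
    # remove all shaping again
    tc qdisc del dev $IFACE root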

@Ape

Ape commented Jan 10, 2016

Ok, there is a clear difference between 0.3.10 and 0.4.0-dev. I posted earlier that 0.4.0-dev could do 35.5 MB/s on localhost. Well, 0.3.10 cannot. Not even close.

  • 0.3.10 has a lot of variance on localhost and tops out at around 10 MB/s
  • 0.4.0-dev consistently does about 30 MB/s or so

@sivachandran I'd like to see your real network benchmarks on 0.4.0-dev.

@Ape

Ape commented Jan 10, 2016

I found that ipfs-benchmark is a nice tool for graphing IPFS download speeds. I made some scripts around it to make testing easier; I call them ipfs-benchtools. This doesn't make testing fully automatic, but at least it makes manual testing very easy.

@Germano0

I came to this ticket by searching the internet for "ipfs slow download". I am affected by this problem too (IPFS 0.4.4), and I have started a discussion on the mailing list:
https://groups.google.com/forum/#!topic/ipfs-users/H-dxkVShilc

@alexandre1985

IPFS isn't delivering what it promised; I also suffer from serious speed problems. It doesn't make any sense. Torrents are better, sorry to say.

@eingenito
Contributor

Superseded by #5723
