Improve transfer speed #2111
I also did some transfer speed benchmarks between my two servers in different data centers, using lighttpd + wget as a reference. The connection was warmed up by downloading other files with IPFS first. The test was downloading a 373M binary file.
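For reference, the comparison boils down to commands along these lines (a sketch; the URL and hash below are placeholders, not the ones from my setup):

```sh
# HTTP reference download from lighttpd (URL is a placeholder)
time wget -O /dev/null http://server.example/testfile.bin

# The same file over IPFS (hash is a placeholder); writing to
# /dev/null keeps local disk writes out of the measurement
time ipfs cat QmYourFileHashHere > /dev/null
```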
I think there should be room for improving the speed since the peers are directly connected to each other. EDIT: This might not be directly related to transfer speed, but I also did some latency benchmarks. I used
The latency is quite good, but there was this weird asymmetry between the directions. The asymmetry seems to be very consistent and repeatable between these two peers. EDIT2: I also ran the file transfer speed benchmark in the other direction and it was much better (9.10 MB/s)! Both peers are running go-ipfs 0.3.10.
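For anyone reproducing the latency numbers: one way to measure round-trip time over the IPFS connection is `ipfs ping`, run from each host to probe each direction (a sketch; the peer ID and hostname are placeholders, and using `ipfs ping` here is an assumption about the method):

```sh
# Round-trip latency to the remote peer over the libp2p connection
ipfs ping -n 10 QmPeerIdOfRemoteHost

# Plain ICMP ping as a baseline for the underlying network
ping -c 10 host-b.example
```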
These are great to have! If you could contribute some scripts to rerun the benchmarks (ideally with hostnames/keys plugged in as variables), it would really help us improve the perf. And yes, the transfers right now are comparatively pretty bad, due to silly systems inefficiencies all over the place. From here there's tons of optimization to be done to get to the speed of the link itself.
I did some additional testing and the asymmetry between IPFS directions seems to be consistently repeatable, while wget is consistently just as good (and always better than IPFS) in both directions.
@Ape could you try these also with dev0.4.0 and utp? (I'm curious; I expect it to be worse per-case and better amortized over many conns, but it may be better.)
Here are the results for 0.4.0-dev (commit 1c1f9c6):
0.4.0 seems to be better, especially with latency. As you expected, there was a lot of variation in transfer speeds now. This is the same test as I posted earlier, and the network conditions are the same.
The same asymmetry is still visible. The network connection should be symmetric, and the wget results prove it is, but the host computers are not equal: one machine has a significantly faster CPU, more RAM, and an SSD. In direction A the slow machine sends and the fast machine downloads; direction B is the opposite. I will run more benchmarks with two fast machines if I can.
You probably have to change it in That is my guess.
Okay, some more tests. This time I replaced the slower host with a somewhat faster one. I think the throughput results were previously CPU-bound (at least in direction B). They still might be, but not as badly. For some unknown reason the latency results aren't as good as before, even though the underlying network latency should be very similar to the previous tests. Ping gives me 32.8 ms this time.
Some conclusions about all my tests:
One more test result. I spawned two IPFS 0.4.0-dev instances on one computer (the one that was the better computer in the previous tests). Then I connected one instance directly to the other over a localhost TCP connection. In this scenario I achieved 35.5 MB/s ± 2.4 MB/s throughput, so my better CPU (an i5-4670K) can handle at least that. (But this still wouldn't be enough for a gigabit connection.)
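One way to reproduce the two-nodes-on-one-machine setup is sketched below, using separate repos selected via `IPFS_PATH` (repo paths and port numbers are arbitrary example values, not necessarily what I used):

```sh
# Two independent repos on the same machine
IPFS_PATH=~/.ipfs-a ipfs init
IPFS_PATH=~/.ipfs-b ipfs init

# Move the second node to non-conflicting ports
IPFS_PATH=~/.ipfs-b ipfs config --json Addresses.Swarm '["/ip4/127.0.0.1/tcp/4002"]'
IPFS_PATH=~/.ipfs-b ipfs config Addresses.API /ip4/127.0.0.1/tcp/5002
IPFS_PATH=~/.ipfs-b ipfs config Addresses.Gateway /ip4/127.0.0.1/tcp/8081

# Start both daemons (each in its own shell), then connect B directly to A
IPFS_PATH=~/.ipfs-b ipfs swarm connect \
    /ip4/127.0.0.1/tcp/4001/ipfs/$(IPFS_PATH=~/.ipfs-a ipfs config Identity.PeerID)
```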
@Ape by 11.5MB/s, you mean close to 100mbit? As in, close to maxing the connection? ipfs has a good few performance bottlenecks that I am aware of and they are on my list to fix for sure. Once we kick 0.4.0 out the door I will start on a few to reduce the latencies of lookups, and then I will begin looking at making bitswap transfers more efficient. @Ape if you're interested, I was in the middle of setting up some automated tests for varying network conditions using docker and linux traffic control tools. If you want to help out, having some automated stuff written towards that end would be fantastic.
Yes, I was able to max out the connection and transfer the data with quite minimal overhead in some of my tests. @whyrusleeping I am interested in this automated testing with simulated network conditions. Please link any material you have so far. The main difficulty is that on 100 Mbit connections the network is often not the bottleneck: CPU and disk performance affect the results significantly. Also, latency testing is hard to do properly without a huge network.
@Ape sweet. I made this script that uses Linux traffic control. From there, I got bored of having a shell script, so I wrote it into a small Go package and made a small automated-ish test with that here: https://github.com/whyrusleeping/ipfs-network-tests. It's not super useful yet, but I think a lot of the groundwork is complete and moving forward should be easier.
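The traffic-control part of such a setup reduces to something like the following sketch (interface name, delay, and rate are example values that may need tuning; requires root):

```sh
# Add artificial latency with netem, then cap bandwidth with a child tbf qdisc
tc qdisc add dev eth0 root handle 1:0 netem delay 30ms
tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 100mbit burst 100kb latency 50ms

# Remove the shaping when done
tc qdisc del dev eth0 root
```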
Ok, there is a clear difference between 0.3.10 and 0.4.0-dev. I posted earlier that 0.4.0-dev could do 35.5 MB/s on localhost. Well, 0.3.10 cannot. Not even close.
@sivachandran I'd like to see your real network benchmarks on 0.4.0-dev.
I found that ipfs-benchmark is a nice tool for graphing IPFS download speeds. I made some scripts for it to make testing easier. I call it ipfs-benchtools. This doesn't make testing fully automatic, but at least it makes manual testing very easy.
I came to this ticket by searching the internet for "ipfs slow download".
IPFS isn't delivering what it promised; I also suffer from serious speed and performance issues. It doesn't make any sense. Torrents are better, sorry to say.
Superseded by #5723
I've been experimenting with go-ipfs and found that IPFS transfers are slow and do not scale up when the same content is available on multiple peers.
I performed the tests on AWS EC2 instances within the same region. I used m4.large EC2 instances, which have 8 GB of RAM, 2 vCPUs, and moderate network bandwidth, running Debian Jessie. The iperf tool reported a consistent 545 Mbps TCP bandwidth between the instances. The IPFS version used was 0.3.11-dev.
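For completeness, the raw TCP baseline between two hosts can be measured with iperf roughly like this (hostname is a placeholder):

```sh
# On one instance, start the iperf server
iperf -s

# On the other, measure TCP throughput towards it
iperf -c ec2-instance-a.example
```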
Copying 100 MB and 1 GB files through scp took 1.5 s and 17 s respectively, whereas the same transfers took 16 s and 170 s in IPFS. The numbers are averages over multiple runs. The command used to measure the transfer speed was "time ipfs cat <hash> | sha1sum". Piping the output to sha1sum is just to make sure I got the correct content; hashing took less than 500 ms when performed separately or when the data was already present locally in IPFS. Also, there was no improvement in transfer speed when I replicated the files on more than one source peer.
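A sketch of the measurement flow under those assumptions (file name, remote host, and the hash are placeholders):

```sh
# On the source instance: create a test file and add it to IPFS
dd if=/dev/urandom of=test100m bs=1M count=100
ipfs add test100m          # prints the hash used below
sha1sum test100m           # reference checksum

# Baseline: copy the same file with scp
time scp test100m admin@other-instance:/tmp/

# On the destination instance: fetch over IPFS and verify the checksum
time ipfs cat <hash> | sha1sum
```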
You can find the network packet capture of the 100 MB file transfer here: https://dl.dropboxusercontent.com/u/4925384/ipfs-ec2-100m.pcap.zip. Let me know if you need any other information.