IPFS Performance #5226

Closed
davinci26 opened this issue Jul 13, 2018 · 26 comments
Labels: topic/bitswap, topic/perf

Comments

@davinci26

davinci26 commented Jul 13, 2018

Hey y'all,

I was testing IPFS performance using a script that implements the following logic:

  1. Add a 10MB file to an IPFS daemon
  2. Create 1 to 100 IPFS daemons in different directories
  3. Do `ipfs get` from each daemon and measure the average time required to download the file.

I am using the go-ipfs daemon, and I use the Python IPFS API to do the GET.
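A minimal sketch of the timing step, shelling out to the go-ipfs CLI rather than going through the Python API (the CID is a placeholder):

    import subprocess
    import time

    TEST_CID = "Qm..."  # placeholder: CID of the 10MB file added in step 1

    def timed_get(cid, api_addr, out_path="/tmp/ipfs-get-out"):
        # Time a single `ipfs get` issued against one daemon's API endpoint.
        start = time.monotonic()
        subprocess.run(
            ["ipfs", "--api", api_addr, "get", cid, "-o", out_path],
            check=True, capture_output=True,
        )
        return time.monotonic() - start

    print(timed_get(TEST_CID, "/ip4/127.0.0.1/tcp/5001"))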

[Figure: ipfs-perf — average download time vs. number of nodes]

============================= EDIT Updated Figure =============================
The X-axis is the number of IPFS nodes and the Y-axis is the average time required to download the file, measured in minutes. The error bars are the standard deviation of the time.

In terms of scalability, I was expecting the time to increase logarithmically as more people request the file, since the file gains more seeders along with more leechers.

Is this the expected performance of the system? Am I doing something wrong that causes the performance of the system to deteriorate?

If these kinds of results are of interest to the community, please let me know and I will provide more information.

@magik6k added topic/bitswap and topic/perf labels Jul 14, 2018
@ajbouh

ajbouh commented Jul 15, 2018

@davinci26 Hi, this looks like a cool test to run! What are the axes in your graph?

@davinci26
Author

davinci26 commented Jul 15, 2018

@ajbouh Oooops, rookie mistake. Sorry about that: the X-axis is the number of IPFS nodes and the Y-axis is the average time required to download the file, measured in minutes. The error bars are the standard deviation of the time. I can also link you to the script I am using to perform the calculations if you want to reproduce the graph.
I will edit the OP as well for clarity.

@ajbouh

ajbouh commented Jul 15, 2018

Do you mind providing an outline of the steps your benchmark takes? Is this essentially a graph of the time it takes for the Nth ipfs node to get the same file? Are the nodes all completely connected before the first fetch? Or is node connection time also a part of the measured file fetch time?

@davinci26
Author

My simulation is as follows:

Run a daemon with a 10MB file // initial file seeder
for n in range(1, 90, 5): // n is the number of nodes getting the file
    1. Spawn n IPFS daemons concurrently, each IPFS daemon in its own directory
    2. For each of the n daemons => time `ipfs get`
    3. Calculate the average time required to download the file
       // t_avg = sum(t_i for i in 1..n) / n
    4. When everyone has downloaded the file, close and delete the n daemons.

So each point in the graph is the average time required to distribute the file to all nodes, and the X-axis is the number of nodes that try to get the file.

The nodes are brand new, so I assume they are not connected beforehand.

I am not sure about the answer to the second question. I am just timing the `ipfs get` command.
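For concreteness, a minimal Python sketch of that measurement loop, assuming the n daemons are already running and reusing the timed_get helper from the snippet above (the API addresses are placeholders):

    import statistics
    from concurrent.futures import ThreadPoolExecutor

    def measure_swarm(cid, api_addrs):
        # Fetch the same file from every daemon at once and return the
        # mean and standard deviation of the individual download times.
        with ThreadPoolExecutor(max_workers=len(api_addrs)) as pool:
            times = list(pool.map(
                # give each daemon its own output path so concurrent gets don't collide
                lambda addr: timed_get(cid, addr, "/tmp/out-" + addr.replace("/", "_")),
                api_addrs))
        # t_avg = sum(t_i) / n, as in the pseudocode above
        # (note: statistics.stdev needs at least two samples)
        return statistics.mean(times), statistics.stdev(times)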

@ajbouh

ajbouh commented Jul 15, 2018

It sounds like you might be measuring multiple things: time to be able to download any file, and then time to download the 10MB test file. In my experience, the ipfs daemon takes varying amounts of time to finish starting up. I wonder what would happen if you "warmed up" each daemon by downloading a tiny 1-byte file, and then downloaded the 10MB file.

I'd be interested in knowing each of these numbers separately: the time to boot and download the tiniest file, and the time to download the 10MB file.

I'd also be interested in knowing the other statistics for measurements, like median, 90th percentile, etc.
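For what it's worth, those summary statistics are easy to compute with a small self-contained helper (pure Python, nearest-rank percentiles):

    import statistics

    def percentile(values, pct):
        # Nearest-rank percentile over a sorted copy of the samples.
        ordered = sorted(values)
        k = int(round(pct / 100.0 * (len(ordered) - 1)))
        return ordered[k]

    def summarize(times):
        # Mean, median, and 90th percentile of a list of download times.
        return {
            "mean": statistics.mean(times),
            "median": statistics.median(times),
            "p90": percentile(times, 90),
        }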

@davinci26
Author

I'll try to do the above, and I'll post my findings here!!

@ajbouh

ajbouh commented Jul 17, 2018

Awesome, @davinci26. Looking forward to it!

@Stebalien
Member

Also note, bitswap is currently very dumb when it comes to asking for chunks. It'll ask everyone, so more nodes having the file can actually slow things down.
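(For anyone measuring this: each node's duplicate traffic shows up in `ipfs bitswap stat`. A rough scrape — the exact label text of the human-readable output is an assumption on my part:)

    import subprocess

    def dup_blocks(api_addr):
        # Parse the duplicate-block counter out of `ipfs bitswap stat`.
        out = subprocess.run(
            ["ipfs", "--api", api_addr, "bitswap", "stat"],
            check=True, capture_output=True, text=True,
        ).stdout
        for line in out.splitlines():
            if "dup blocks received" in line:  # assumed label text
                return int(line.split(":")[1])
        return None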

@davinci26
Author

@Stebalien Thanks for the info.

I changed my approach and forked ipfs-network-test, hoping that I would be able to get better results.

Some observations that I made so far (I will post again when I finalize my simulation):

  • I observed the problem mentioned in #4588, which I think is related to your comment (correct me if I am wrong).
  • IPFS as a whole program is a bit heavy, and I have trouble running multiple IPFS instances on the same machine. I want to push my simulation to around ~500 IPFS nodes (ideally it would be 5k nodes, but that would require a lot of horizontal scaling), and I am using a Google Cloud machine with 20 vCPUs and 27 GB of memory. Currently, 100 IPFS clients is my limit.

My end goal is to compare the IPFS distribution model against the Client-Server model for distributing medium-size files. From the results I have so far, I don't expect IPFS to outperform the Client-Server model, but I expect to push the system to its limit and possibly discover some useful findings, which I will post here.

@Stebalien
Member

We're working on reducing the memory usage. This will involve:

  1. Closing streams when we aren't using them. We currently don't in many cases for technical reasons but we're working on those.
  2. Storing the peerstore on-disk. See: Implement datastore-backed peerstore libp2p/go-libp2p-peerstore#28

@davinci26
Author

Here are the results in a more presentable way:

[Figure: ipfs-perf — average download time vs. number of nodes]

IPFS performance deteriorates as the number of nodes increases. This is due to the increasing number of duplicate blocks, which is a consequence of the current Bitswap implementation.

[Figure: ipfs_duplicates — duplicate blocks vs. number of nodes]

The good thing is that IPFS performs better than BitTorrent, according to my simulation:

[Figure: ipfs-vs-bittorrent — IPFS vs. BitTorrent download time]

But BitTorrent will probably outperform IPFS when the number of nodes increases sufficiently.

@ajbouh

ajbouh commented Jul 29, 2018

@davinci26 great investigation!

A few questions:

  • What do you mean by "due to duplicate blocks"?
  • Did you try a much smaller file?
  • Did you try waiting until the daemon had fully booted and fully connected to other peers?

@davinci26
Author

@ajbouh thanks!!

  • What do you mean by "due to duplicate blocks"?
    As I understand it, bitswap currently works as follows (maybe someone can confirm this):
    Alice publishes a want list of blocks to the network. Then other users participating in the exchange start transmitting data (bytes) to Alice. When a transmission finishes, Alice calculates the hash of the data received and checks whether it matches any of the blocks in her want list. The problem is that as the number of users increases, multiple users send Alice the same block. As a result, Alice spends her bandwidth downloading the same block multiple times, which causes the average time required to download the file to increase. I understand that this is not a trivial problem to fix. (See the toy model after this list.)

  • Did you try a much smaller file?
    I think the trend would be similar, although I haven't really simulated it rigorously. For a smaller file the number of blocks would be smaller, and as a result you would have fewer duplicate blocks. The issue would remain, but it would be less visible.

  • Did you try waiting until the daemon had fully booted and fully connected to other peers?
    All nodes are fully loaded and connected, and the time it takes to connect and boot is not included in the graphs above. The simulation code uses the IPTB framework and can be found here. The function distCmd implements my simulation and all the calculations.
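As a toy illustration of the duplicate-block effect described in the first bullet above (my own back-of-the-envelope model, not the actual bitswap code):

    def wasted_bandwidth(num_holders, num_blocks, block_size):
        # Toy model: every peer holding a block answers the want, so each
        # block arrives num_holders times; all but one copy is wasted.
        duplicates_per_block = max(0, num_holders - 1)
        return duplicates_per_block * num_blocks * block_size

    # e.g. a 10MB file in 256KB blocks with 50 peers already holding it:
    print(wasted_bandwidth(50, 40, 256 * 1024) / 2**20, "MB of duplicate data")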

@ajbouh

ajbouh commented Jul 29, 2018 via email

@whyrusleeping
Member

@davinci26 very nice graphs, could you share how you made them? I would love to be able to use them as a way to benchmark improvements.

On the duplicate blocks issue, we've actually laid the groundwork for fixing this with 'bitswap sessions', where we track which peers are likely to have a given piece of data based on information about who sends us other related pieces. The next step is to more precisely limit which peers we send those wants to, and to track our hit rate (higher hit rate == more certainty, can send one want per peer; lower hit rate == maybe we should still send wants to multiple peers), response times, bandwidth estimators, and so on. We can implement a pretty simple 'want scheduler' (or better-named thing) that will likely make a very noticeable impact.
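To make that concrete, here is a very rough sketch of such a 'want scheduler'; every name and threshold below is illustrative, not go-ipfs code:

    class WantScheduler:
        # Illustrative only: decide how many peers receive a want based on
        # how often each peer has answered our wants before.

        def __init__(self, fanout_when_uncertain=3, hit_rate_threshold=0.8):
            self.hits = {}   # peer -> wants answered
            self.asked = {}  # peer -> wants sent
            self.fanout = fanout_when_uncertain
            self.threshold = hit_rate_threshold

        def record(self, peer, answered):
            self.asked[peer] = self.asked.get(peer, 0) + 1
            if answered:
                self.hits[peer] = self.hits.get(peer, 0) + 1

        def hit_rate(self, peer):
            asked = self.asked.get(peer, 0)
            return self.hits.get(peer, 0) / asked if asked else 0.0

        def peers_for_want(self, session_peers):
            ranked = sorted(session_peers, key=self.hit_rate, reverse=True)
            # High confidence in the best peer: one want is enough.
            if ranked and self.hit_rate(ranked[0]) >= self.threshold:
                return ranked[:1]
            # Low confidence: hedge by asking several peers at once.
            return ranked[:self.fanout]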

@ivan386
Contributor

ivan386 commented Jul 29, 2018

@whyrusleeping Just add one more step (have and request messages, as in BitTorrent) between the want and the block sending. Then you would not need so many crutches.

@davinci26
Author

@whyrusleeping thanks for the bitswap explanation.

I produced the data with my forked version of IPTB to run the file distribution simulation.
The graphs are then produced with Python and Matplotlib using this script, which parses the simulation output and produces the graphs.

I can prepare a pull request to IPTB to add the graphs and the simulation to it.
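For reference, the plotting side is a plain Matplotlib error-bar chart; a sketch with made-up placeholder numbers standing in for the parsed simulation output:

    import matplotlib.pyplot as plt

    # Placeholder data: one entry per swarm size (NOT real measurements).
    nodes = [1, 5, 10, 20, 40, 80]
    mean_minutes = [0.2, 0.5, 0.9, 1.8, 3.5, 7.0]
    std_minutes = [0.05, 0.1, 0.2, 0.4, 0.9, 1.6]

    plt.errorbar(nodes, mean_minutes, yerr=std_minutes, fmt="o-", capsize=3)
    plt.xlabel("Number of IPFS nodes")
    plt.ylabel("Average download time (minutes)")
    plt.savefig("ipfs-perf.png")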

@whyrusleeping
Member

@ivan386 you could do that, but doing so adds an extra round trip to every request and results in non-trivial overhead for smaller blocks. There is room for some amount of 'have'-style messages, though: we could definitely add them for other related blocks as part of the message, or, even more interestingly, return a bitfield along with each block indicating which of that block's children we have (at some non-zero expense to the side sending the blocks).

@davinci26 that's really cool, I'd love to see what changes are needed in IPTB to support that. We're actively working on improvements to it; this is the sort of thing we want :)

@ajbouh

ajbouh commented Aug 2, 2018

@vmx

@songjinli

songjinli commented Dec 27, 2018

> [quotes @davinci26's earlier reply above, about duplicate blocks, smaller files, and daemon warm-up, in full]

Amazing work! But I didn't find the distCmd function in the fork of IPTB here. Would you please point out how to implement the simulation and get the data?

@davinci26
Author

  1. Sorry about that, I moved the code to a different branch at some point; the link is this. There is also a readme file there with more information.
  2. This code was valid at the time of writing (Aug 2018). I know for a fact that IPTB has undergone some heavy refactoring since then, so the code may not be compatible with the current version.

Hope the above helps.

@badkk

badkk commented Mar 28, 2019

@whyrusleeping
Can I ask, is this problem solved by using bitswap sessions?

@Stebalien
Member

The situation has significantly improved, but there's still some work to be done. Sessions don't currently rank peers within the session to try to pick the fastest one.

@whyrusleeping
Member

Could we get the tests that generated these graphs re-run?

@davinci26
Author

@whyrusleeping I have the code for the graphs in this fork of IPTB. This code is more than 6 months old.

When I developed it, I tried to make a PR to add it to IPTB. I agree with the IPTB folks that IPTB is not the best place for this code. With that said, I still believe there is value in these graphs/simulations for perf-monitoring purposes.

Feel free to include me in any discussion if you want to add this into your codebase. I would be happy to help :)

@momack2
Contributor

momack2 commented Feb 29, 2020

Hey folks - I believe more up-to-date metrics for this are here: #6782 (comment). We've made a lot of improvements to Bitswap over the past 6 months that are rolling out in our next release -- addressing this exact "many seeders/leechers" performance challenge -- so I'm closing this issue to redirect the conversation/evaluation there. =]

@momack2 momack2 closed this as completed Feb 29, 2020