Performance test - Add/Cat on a single IPFS instance #3131
Comments
A few different things to try that might affect performance:
Thanks, I'll try it out. What about the 5.7GB of disk space taken by 18 thousand 1K items? That's only about 18MB of payload, so roughly a 300x blowup. Is this normal?
@alikic no, that's not normal (or at least, not expected). I'm curious to see how the
OK, I think I see that go-ipfs-api add() doesn't support this option. I modified shell.go in $GOPATH/src to hardcode this option for this test, and then reran the test.
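The exact patch isn't shown here, but the option it hardcodes is the HTTP API's documented pin=false parameter on /api/v0/add. As a reference point, here is a self-contained sketch that gets the same effect by calling the HTTP API directly (the addNoPin name and the sample payload are illustrative, not part of the test suite):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"mime/multipart"
	"net/http"
)

// addNoPin adds a block through /api/v0/add?pin=false, so the daemon
// stores it without pinning it, and returns the resulting hash.
func addNoPin(apiURL string, data []byte) (string, error) {
	var body bytes.Buffer
	w := multipart.NewWriter(&body)
	part, err := w.CreateFormFile("file", "data")
	if err != nil {
		return "", err
	}
	if _, err := part.Write(data); err != nil {
		return "", err
	}
	w.Close()

	req, err := http.NewRequest("POST", apiURL+"/api/v0/add?pin=false", &body)
	if err != nil {
		return "", err
	}
	req.Header.Set("Content-Type", w.FormDataContentType())

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("add failed: %s", resp.Status)
	}

	// The daemon answers with one JSON object per added file.
	var out struct{ Hash string }
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Hash, nil
}

func main() {
	hash, err := addNoPin("http://localhost:5001", []byte("hello, unpinned"))
	if err != nil {
		panic(err)
	}
	fmt.Println(hash)
}
```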
It looks like it did disable pinning. We survived adding 20K items without "freezing". Also, disk usage is much lower: below 180MB for 20K items of 1K each. ADD is consistently around 6ms on average, while CAT starts at about 12ms and goes up to 130ms on average for the last batch (19K-20K). CAT performance is a concern, as it shows a tendency of constant growth, and we need to store much more than 20K items. Here are the results: I will get the latest from master and try it out.
@alikic thank you for trying these out. If disabling pinning on 0.4.3 helped, I'm very interested to see how the latest master works (with and without pinning, if you have the time). Also curious to see what sort of effect
@whyrusleeping
0.4.3-rc3 still performs a conditional GC on cat; it was disabled in recent master (115bee5). If you are able to build master yourself, it would be great to start from there. If you don't want to build it yourself, I can send you a binary; you just have to tell me which arch and system you are using.
I took the latest from master. A single-node test is so much better that I had to debug to see whether the previously added data was actually being retrieved. Add with pinning still "freezes" the node, but the read times are now around 600us, and they don't appear to go up. Without pinning, Add flies too, and doesn't "freeze" the node. Data size on disk is just a little bit better than with 0.4.3.

Now I moved on to testing with two peers: adding to one, and reading from the other. ~~In this setup, the other node usually cannot see data added to the first.~~ UPDATE: wrong test setup, of course. Here are the results:

On a side note, is there a setting that controls whether a peer blocks on hash lookup (hoping that the data might become available) or just returns with a NOT FOUND status, perhaps after traversing all peers? The latter is what we need, as we want to run a closed private network where we control all peers.

So, if we don't use pinning, performance is much better. But we need pinning so we can periodically garbage-collect data cached from other peers. Is there a way to work around this conflict? Can we disable caching of data retrieved from other peers, leave pinning disabled, and never garbage-collect? That would leave our storage with our data only.
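A client-side approximation of NOT FOUND, for what it's worth: go-ipfs-api's Shell exposes SetTimeout, so the client can put a deadline on cat and treat a timeout as "not available" instead of blocking indefinitely. A minimal sketch, assuming the daemon's default API port and an arbitrary two-second deadline:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"time"

	shell "github.com/ipfs/go-ipfs-api"
)

// catWithDeadline treats "still looking when the deadline expires" as
// not found: the shell-level timeout aborts the blocking lookup on the
// client side.
func catWithDeadline(hash string) ([]byte, error) {
	sh := shell.NewShell("localhost:5001")
	sh.SetTimeout(2 * time.Second) // arbitrary deadline

	rc, err := sh.Cat(hash)
	if err != nil {
		return nil, fmt.Errorf("not available within deadline: %v", err)
	}
	defer rc.Close()
	return ioutil.ReadAll(rc)
}

func main() {
	data, err := catWithDeadline(os.Args[1])
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Printf("got %d bytes\n", len(data))
}
```

This only unblocks the caller; whether the daemon keeps searching for the hash afterwards is up to the daemon.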
Thanks, this is very useful. Would be great to turn this into a test suite.
@jbenet Yes, I can move the code from our internal repo. Should it be under "github.com/ipfs"? Not sure about prettifying reports just yet, as we have other priorities. I have two questions:
@alikic any public repo will do fine. if we start depending on it a ton, then we may move it. thanks
Here it is: https://github.com/securekey/ipfs-performance-test. Please try it out and let me know if you have problems.
@jbenet Hi, have you been able to reproduce the issue with the provided test code? Any plans to include some performance fixes (e.g. removing the conditional GC on cat) in 0.4.3? Thanks.
We removed the conditional GC in 0.4.4-dev; 0.4.3 is already on the edge of release, so we prefer to keep changes to a minimum.
@Kubuxu Thanks. What about having to disable pinning to get a stable (non-freezing) and fast Add()? Do you think it can be fixed soon? We currently rely on pinning to separate the node's "own" data from data cached from other peers. Is there any workaround to achieve this without pinning?
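One conceivable interim workaround, purely a sketch and not project guidance: keep pinning disabled on the latency-critical add path, record our own hashes application-side, and pin them in the background with go-ipfs-api's Shell.Pin, so a later GC still distinguishes "our" data from blocks cached from other peers. The pinLazily helper below is hypothetical:

```go
package main

import (
	"log"

	shell "github.com/ipfs/go-ipfs-api"
)

// pinLazily drains a channel of hashes that were added with pinning
// disabled and pins them off the hot path, so adds stay fast while our
// own data is still protected from a later `ipfs repo gc`.
func pinLazily(sh *shell.Shell, ownHashes <-chan string) {
	for h := range ownHashes {
		if err := sh.Pin(h); err != nil {
			log.Printf("pin %s failed: %v", h, err)
		}
	}
}

func main() {
	sh := shell.NewShell("localhost:5001")
	hashes := make(chan string, 1024)

	// In the real test, hashes returned by unpinned adds would be sent
	// here by the add loop before the channel is closed.
	close(hashes)

	pinLazily(sh, hashes) // runs until the channel is drained
}
```

Whether the pinning cost that causes the "freezing" is tolerable off the latency path is exactly the open question above.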
@alikic we will have to debug why pinning causes that 'freezing'. Once we have a better idea of why that happens, we can probably figure out a workaround.
@alikic I found your add-cat perf test suite. I had compiled various perf test suites scattered across lots of issue discussions at https://github.com/rht/sfpi-benchmark, with fancy graphs, standardizing how they should be implemented (I'm inclined toward a sharness-like style for rapid writing and succinctness, and this is how most people had already written them anyway). I wonder if you could help me port https://github.com/securekey/ipfs-performance-test?
Version/Platform/Processor information (from ipfs version --all):
go-ipfs version: 0.4.3-rc3-685cd28
Repo version: 4
System version: amd64/linux
Golang version: go1.6.3
Type (bug, feature, meta, test failure, question): test failure
Area (api, commands, daemon, fuse, etc): performance
Priority (from P0: functioning, to P4: operations on fire): P3
Description:
I am running a small performance test on a single IPFS node without peers.
Steps:
The test is implemented in golang, using https://github.com/ipfs/go-ipfs-api.
Add/Cat performance is measured by starting timing just before sh.Add/sh.Cat
and stopping just after adding/reading from the stream.
The test is single-threaded; all ADD/CAT operations are sequential (a sketch of the measurement loop follows).
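For concreteness, a minimal sketch of what that measurement loop looks like with go-ipfs-api (batch size, payload size, and the localhost API address mirror the description above; the surrounding program is illustrative, not the actual test-suite code):

```go
package main

import (
	"bytes"
	"crypto/rand"
	"fmt"
	"io"
	"io/ioutil"
	"time"

	shell "github.com/ipfs/go-ipfs-api"
)

func main() {
	sh := shell.NewShell("localhost:5001") // daemon's default API port

	const batch, size = 1000, 1024
	hashes := make([]string, 0, batch)

	// ADD: start timing just before sh.Add, stop just after it returns.
	var addTotal time.Duration
	for i := 0; i < batch; i++ {
		buf := make([]byte, size)
		rand.Read(buf) // unique content so every add stores a new block

		start := time.Now()
		h, err := sh.Add(bytes.NewReader(buf))
		addTotal += time.Since(start)
		if err != nil {
			panic(err)
		}
		hashes = append(hashes, h)
	}

	// CAT: stop the clock only after the stream has been fully read.
	var catTotal time.Duration
	for _, h := range hashes {
		start := time.Now()
		rc, err := sh.Cat(h)
		if err != nil {
			panic(err)
		}
		io.Copy(ioutil.Discard, rc)
		rc.Close()
		catTotal += time.Since(start)
	}

	fmt.Printf("Add avg: %v  Cat avg: %v\n", addTotal/batch, catTotal/batch)
}
```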
IPFS configuration is here:
config.txt
Test results are further below.
We see three issues:
1. ADD periodically "freezes": it slows to a crawl and the node has to be restarted (marked in the results below).
2. We added about 18 thousand 1K items and the disk usage went up by 5.7GB. I assume this is not expected?
3. When the storage is empty, ADD takes about 13ms and CAT takes about 20ms (average for the first 1000 items). As we add data, CAT performance drops quickly: after adding about ten thousand 1K items it goes from 20ms to 200ms on average, and after another 5-6 thousand it goes over 300ms, up to 836ms just before I stopped the test.
I am using the latest release candidate, but we observed the performance
of CAT going down (as we add more data) with 0.4.2 too.
I have read claims that IPFS can be optimized for handling small data chunks at low latency (https://www.reddit.com/r/ethereum/comments/3hbqbv/ipfs_vs_swarm/), but I don't know whether the person who made that claim is associated with IPFS. So, is this possible?
Thanks
---------- results (start) -------------------
[root@localhost ipfs-test]# df .
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/centos-root 19040256 5011776 14028480 27% /
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 13.667732ms Max: 47.800133ms Min: 11.235873ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 20.596004ms Max: 47.210248ms Min: 18.017563ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 15.117807ms Max: 37.270977ms Min: 12.291369ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 35.278503ms Max: 69.662723ms Min: 30.177991ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 17.508281ms Max: 44.217647ms Min: 14.533264ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 49.930275ms Max: 84.347783ms Min: 42.924832ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 19.69559ms Max: 44.971566ms Min: 16.335943ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 65.615005ms Max: 97.144663ms Min: 55.737121ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 22.436205ms Max: 51.080924ms Min: 18.401863ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 81.402915ms Max: 139.940411ms Min: 68.97214ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 24.644547ms Max: 75.510189ms Min: 20.558274ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 95.622227ms Max: 170.023458ms Min: 81.537579ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 27.226879ms Max: 86.281592ms Min: 22.393129ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 112.275953ms Max: 232.695093ms Min: 93.186463ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 29.171508ms Max: 97.310115ms Min: 24.212177ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 128.163756ms Max: 248.911895ms Min: 105.169842ms
Adding to w1 1000 X 1024 bytes: 19 %
****** IPFS "freezed" at this point, and I had to restart it. It didn't really freeze, just
the performance of ADD slowed down to the point of crawling - it took few minutes to
add 1% (10 items).
After IPFS restart:
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 14.250202ms Max: 50.711397ms Min: 11.737141ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 187.384534ms Max: 295.008167ms Min: 172.695922ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 16.326336ms Max: 78.536979ms Min: 13.28502ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 208.796685ms Max: 332.815333ms Min: 188.348064ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 18.595265ms Max: 59.661409ms Min: 15.453035ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 220.769552ms Max: 302.246728ms Min: 203.131583ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 20.731586ms Max: 91.863613ms Min: 17.253222ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 240.298857ms Max: 383.348867ms Min: 217.021817ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 23.018644ms Max: 55.909878ms Min: 18.936139ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 260.894209ms Max: 522.349637ms Min: 234.09257ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 25.623337ms Max: 99.908365ms Min: 20.95587ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 275.623074ms Max: 390.414318ms Min: 250.79436ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 28.111817ms Max: 93.7706ms Min: 22.917624ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 296.518033ms Max: 447.364316ms Min: 267.358562ms
Adding to w1 1000 X 1024 bytes: 94 %
****** IPFS "freezed" again, same as the first time, it took several minutes to go
from 94% to 95%.
After IPFS restart:
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 15.353887ms Max: 126.766604ms Min: 11.851121ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 836.060083ms Max: 6.196891317s Min: 788.782009ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 17.793028ms Max: 135.421942ms Min: 13.608686ms
Getting from w1 1000 items: 7 %
****** Stopped the test.
[root@localhost ipfs-test]# df .
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/centos-root 19040256 10826124 8214132 57% /
[root@localhost 1]# ll
total 48
-rw-r--r--. 1 root root 23 Aug 26 13:56 api
drwxr-xr-x. 514 root root 12288 Aug 26 12:59 blocks
-rw-rw----. 1 root root 2913 Aug 26 12:58 config
drwxr-xr-x. 2 root root 4096 Aug 26 13:56 datastore
-rw-r--r--. 1 root root 0 Aug 26 13:56 repo.lock
-rw-r--r--. 1 root root 2 Aug 26 12:58 version
[root@localhost 1]# du . --summarize
5746360 .
---------- results (end) ---------------------