Performance test - Add/Cat on a single IPFS instance #3131

Open
alikic opened this issue Aug 26, 2016 · 17 comments
Labels
status/deferred (Conscious decision to pause or backlog) · topic/test failure (Topic test failure)

Comments

@alikic

alikic commented Aug 26, 2016

Version/Platform/Processor information (from ipfs version --all):
go-ipfs version: 0.4.3-rc3-685cd28
Repo version: 4
System version: amd64/linux
Golang version: go1.6.3

Type (bug, feature, meta, test failure, question): test failure
Area (api, commands, daemon, fuse, etc): performance
Priority (from P0: functioning, to P4: operations on fire): P3

Description:

I am running a small performance test on a single IPFS node without peers.
Steps:

  1. Add N unique strings, B bytes each, save produced hashes
  2. Cat each hash from step 1.
  3. Repeat steps 1,2 R times

The test is implemented in golang, using https://github.com/ipfs/go-ipfs-api.

Add/Cat performance is measured by starting timing just before sh.Add/sh.Cat
and stopping just after adding/reading from the stream.

The test is single-threaded, all ADD/CAT operations are sequential.
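For illustration, here is a minimal sketch of that loop, assuming the go-ipfs-api Shell type (NewShell/Add/Cat) and a local daemon API at localhost:5001; the constants N, B, R and the averaging output are simplified stand-ins for the actual test code:

    package main

    import (
        "fmt"
        "io"
        "io/ioutil"
        "strings"
        "time"

        shell "github.com/ipfs/go-ipfs-api"
    )

    func main() {
        sh := shell.NewShell("localhost:5001")
        const N, B, R = 1000, 1024, 10 // items per batch, bytes per item, rounds

        for r := 0; r < R; r++ {
            hashes := make([]string, 0, N)

            // Step 1: add N unique strings of B bytes each, timing every sh.Add call.
            var addTotal time.Duration
            for i := 0; i < N; i++ {
                prefix := fmt.Sprintf("round-%d-item-%d-", r, i)
                payload := prefix + strings.Repeat("x", B-len(prefix))

                start := time.Now()
                h, err := sh.Add(strings.NewReader(payload))
                addTotal += time.Since(start)
                if err != nil {
                    panic(err)
                }
                hashes = append(hashes, h)
            }

            // Step 2: cat every hash back, timing from the call until the
            // response stream has been read to the end.
            var catTotal time.Duration
            for _, h := range hashes {
                start := time.Now()
                rc, err := sh.Cat(h)
                if err != nil {
                    panic(err)
                }
                if _, err := io.Copy(ioutil.Discard, rc); err != nil {
                    panic(err)
                }
                rc.Close()
                catTotal += time.Since(start)
            }

            fmt.Printf("round %d: avg add %v, avg cat %v\n",
                r, addTotal/time.Duration(N), catTotal/time.Duration(N))
        }
    }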

IPFS configuration is here:
config.txt

Test results are further below.

We see three issues:

  1. IPFS "freezes" - At some point, ADD just crawls, and IPFS has to be restarted.
  2. Enormous disk space usage. The test stored about 18 thousand items, 1K each,
    and the disk usage went up by 5.7GB. I assume this is not expected?
  3. CAT performance drops significantly as we ADD more items. As you can see,
    when the storage is empty, ADD takes about 13ms, and CAT takes about 20ms
    (average for the first 1000 items). As we add data, CAT performance
    drops quickly. After adding about ten thousand 1K items it goes from 20ms to 200ms
    on average, and after another 5-6 thousand it goes over 300ms, up to 836ms just
    before I stopped the test.

I am using the latest release candidate, but we observed the performance
of CAT going down (as we add more data) with 0.4.2 too.

I have read claims that IPFS can be optimized for handling small data chunks at low
latency (https://www.reddit.com/r/ethereum/comments/3hbqbv/ipfs_vs_swarm/),
but I don't know whether the person who made that claim is associated with IPFS.
So, is this possible?

Thanks

---------- results (start) -------------------
[root@localhost ipfs-test]# df .
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/centos-root 19040256 5011776 14028480 27% /

Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 13.667732ms Max: 47.800133ms Min: 11.235873ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 20.596004ms Max: 47.210248ms Min: 18.017563ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 15.117807ms Max: 37.270977ms Min: 12.291369ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 35.278503ms Max: 69.662723ms Min: 30.177991ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 17.508281ms Max: 44.217647ms Min: 14.533264ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 49.930275ms Max: 84.347783ms Min: 42.924832ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 19.69559ms Max: 44.971566ms Min: 16.335943ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 65.615005ms Max: 97.144663ms Min: 55.737121ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 22.436205ms Max: 51.080924ms Min: 18.401863ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 81.402915ms Max: 139.940411ms Min: 68.97214ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 24.644547ms Max: 75.510189ms Min: 20.558274ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 95.622227ms Max: 170.023458ms Min: 81.537579ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 27.226879ms Max: 86.281592ms Min: 22.393129ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 112.275953ms Max: 232.695093ms Min: 93.186463ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 29.171508ms Max: 97.310115ms Min: 24.212177ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 128.163756ms Max: 248.911895ms Min: 105.169842ms
Adding to w1 1000 X 1024 bytes: 19 %

****** IPFS "freezed" at this point, and I had to restart it. It didn't really freeze, just
the performance of ADD slowed down to the point of crawling - it took few minutes to
add 1% (10 items).

After IPFS restart:

Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 14.250202ms Max: 50.711397ms Min: 11.737141ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 187.384534ms Max: 295.008167ms Min: 172.695922ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 16.326336ms Max: 78.536979ms Min: 13.28502ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 208.796685ms Max: 332.815333ms Min: 188.348064ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 18.595265ms Max: 59.661409ms Min: 15.453035ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 220.769552ms Max: 302.246728ms Min: 203.131583ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 20.731586ms Max: 91.863613ms Min: 17.253222ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 240.298857ms Max: 383.348867ms Min: 217.021817ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 23.018644ms Max: 55.909878ms Min: 18.936139ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 260.894209ms Max: 522.349637ms Min: 234.09257ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 25.623337ms Max: 99.908365ms Min: 20.95587ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 275.623074ms Max: 390.414318ms Min: 250.79436ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 28.111817ms Max: 93.7706ms Min: 22.917624ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 296.518033ms Max: 447.364316ms Min: 267.358562ms
Adding to w1 1000 X 1024 bytes: 94 %

****** IPFS "freezed" again, same as the first time, it took several minutes to go
from 94% to 95%.

After IPFS restart:

Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 15.353887ms Max: 126.766604ms Min: 11.851121ms
Getting from w1 1000 items: 100 %
Agent: w1 Operation Get 1000 X 1024 bytes. Average: 836.060083ms Max: 6.196891317s Min: 788.782009ms
Adding to w1 1000 X 1024 bytes: 100 %
Agent: w1 Operation Add 1000 X 1024 bytes. Average: 17.793028ms Max: 135.421942ms Min: 13.608686ms
Getting from w1 1000 items: 7 %

****** Stopped the test.

[root@localhost ipfs-test]# df .
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/centos-root 19040256 10826124 8214132 57% /

[root@localhost 1]# ll
total 48
-rw-r--r--. 1 root root 23 Aug 26 13:56 api
drwxr-xr-x. 514 root root 12288 Aug 26 12:59 blocks
-rw-rw----. 1 root root 2913 Aug 26 12:58 config
drwxr-xr-x. 2 root root 4096 Aug 26 13:56 datastore
-rw-r--r--. 1 root root 0 Aug 26 13:56 repo.lock
-rw-r--r--. 1 root root 2 Aug 26 12:58 version
[root@localhost 1]# du . --summarize
5746360 .

---------- results (end) ---------------------

@alikic changed the title from "Parformance test - Add/Cat on a single IPFS instance" to "Performance test - Add/Cat on a single IPFS instance" on Aug 26, 2016
@whyrusleeping
Member

A few different things to try that might affect performance:

  • Try out latest master. There are a few changes there that should help fight the 'slowing down' of adds over time. The issue is that your node is buffering up all of these entries so it can announce them to the network; as that buffer gets larger, everything else starts to slow down.
  • Try setting ipfs config Datastore.NoSync --json true. This will make the datastore perform fewer fsync calls when writing data to disk.
  • Run the daemon in offline mode (ipfs daemon --offline). This is a good way to test the raw file performance of ipfs.
  • Run your adds with the --pin=false flag. This will tell ipfs not to pin the files you're creating, and should improve the speed quite a bit.

@alikic
Author

alikic commented Aug 26, 2016

Thanks, I'll try it out. What about 5.7GB of disk space taken by 18 thousand 1K items? Is this normal?

@whyrusleeping
Member

@alikic no, that's not normal (or at least, not expected). I'm curious to see how the --pin=false option affects that.

@alikic
Author

alikic commented Aug 26, 2016

OK, I think --pin=false is not useful to us, as we always want to pin when adding, but I'll try it anyway to see if it provides any clues.

I see that go-ipfs-api add() doesn't support this option. I modified shell.go in $GOPATH/src to hardcode this option for this test:

    req := NewRequest(s.url, "add")
    req.Body = fileReader
    req.Opts["progress"] = "false"
    req.Opts["pin"] = "false" // <--- this line added

and then

go install github.com/ipfs/go-ipfs-api
go install <our test code>

It looks like it did disable pinning. We survived adding 20K items without "freezing". Also, disk usage is much lower: below 180MB for 20K items of 1K each. ADD is consistently around 6ms on average, while CAT starts at about 12ms and climbs to 130ms on average for the last batch (19k-20k). CAT performance is a concern, as it keeps growing steadily and we need to store far more than 20K items.

Here are the results:
test_results.txt

I will get the latest from master and try it out.
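For reference, roughly the same add-with-pin=false can be done without patching shell.go by posting directly to the daemon's /api/v0/add HTTP endpoint; this is only a sketch, assuming the default API address 127.0.0.1:5001 (the helper name addUnpinned is just for illustration):

    package main

    import (
        "bytes"
        "fmt"
        "io/ioutil"
        "mime/multipart"
        "net/http"
    )

    // addUnpinned sends one item to the add endpoint with pin=false.
    func addUnpinned(data []byte) (string, error) {
        var body bytes.Buffer
        w := multipart.NewWriter(&body)
        part, err := w.CreateFormFile("file", "data")
        if err != nil {
            return "", err
        }
        part.Write(data)
        w.Close()

        url := "http://127.0.0.1:5001/api/v0/add?pin=false&progress=false"
        req, err := http.NewRequest("POST", url, &body)
        if err != nil {
            return "", err
        }
        req.Header.Set("Content-Type", w.FormDataContentType())

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()

        // Response is JSON of the form {"Name":...,"Hash":...,"Size":...}.
        out, err := ioutil.ReadAll(resp.Body)
        return string(out), err
    }

    func main() {
        res, err := addUnpinned([]byte("hello"))
        if err != nil {
            panic(err)
        }
        fmt.Println(res)
    }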

@whyrusleeping
Member

@alikic thank you for trying these out. If disabling pinning on 0.4.3 helped, I'm very interested to see how latest master works (with and without pinning, if you have the time).

Also curious to see what sort of effect ipfs config Datastore.NoSync --json true has.

@alikic
Author

alikic commented Aug 27, 2016

@whyrusleeping ipfs config Datastore.NoSync --json true might improve write performance, but writes are already relatively fast and don't slow down as the amount of stored data grows (at least with pinning disabled). Also, I am not sure our use case allows for NoSync=true; we cannot afford to lose data. Read performance is more problematic, as the time to read increases as data is added and reaches unacceptable levels very quickly. Is there anything we can do to improve it?

@Kubuxu
Member

Kubuxu commented Aug 27, 2016

0.4.3-rc3 still performs a conditional GC on cat; it was disabled in recent master (115bee5). If you are able to build master yourself, it would be great to start from there. If you don't want to build it yourself, I can send you a binary; just tell me which arch and system you are using.

@alikic
Author

alikic commented Aug 27, 2016

I took the latest from master:
go-ipfs version: 0.4.4-dev-28b01dd
Repo version: 4
System version: amd64/linux
Golang version: go1.7

A single-node test is so much better that I had to debug to confirm that previously added data was actually being retrieved. Add with pinning still "freezes" the node, but the read times are now around 600us, and they don't appear to go up:
0.4.4-dev.txt

Without pinning, Add flies too, and doesn't "freeze" the node. Data size on disk is just a little bit better than with 0.4.3:
0.4.4-dev_pin_false.txt

Now I moved to testing with two peers - adding to one, and reading from the other. ~~In this setup, the other node usually cannot see data added to the first.~~ UPDATE: wrong test setup, of course. Here are the results:
0.4.4-dev_pin_false_read_from_another_peer.txt

On a side note, is there a setting that controls whether the peer blocks on a hash lookup (hoping that the data might become available) or just returns a NOT FOUND status, perhaps after traversing all peers? The latter is what we need, as we want to run a closed private network where we control all peers.
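For what it's worth, a purely client-side workaround sketch (not a daemon setting) is to stop waiting after a deadline, assuming go-ipfs-api's Shell.Cat; the hash in main is a hypothetical placeholder:

    package main

    import (
        "errors"
        "fmt"
        "io/ioutil"
        "time"

        shell "github.com/ipfs/go-ipfs-api"
    )

    var errNotFound = errors.New("content not found within timeout")

    // catWithTimeout reads a hash but gives up after the given timeout.
    func catWithTimeout(sh *shell.Shell, hash string, timeout time.Duration) ([]byte, error) {
        type result struct {
            data []byte
            err  error
        }
        ch := make(chan result, 1)
        go func() {
            rc, err := sh.Cat(hash)
            if err != nil {
                ch <- result{nil, err}
                return
            }
            defer rc.Close()
            data, err := ioutil.ReadAll(rc)
            ch <- result{data, err}
        }()

        select {
        case res := <-ch:
            return res.data, res.err
        case <-time.After(timeout):
            // The underlying request keeps running; we just stop waiting for it.
            return nil, errNotFound
        }
    }

    func main() {
        sh := shell.NewShell("localhost:5001")
        data, err := catWithTimeout(sh, "QmMissingHashExample", 2*time.Second)
        if err != nil {
            fmt.Println("cat failed:", err)
            return
        }
        fmt.Println(len(data), "bytes")
    }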

So, if we don't use pinning, performance is much better. But we need pinning so we can periodically garbage-collect data cached from other peers. Is there a way to work around this conflict? Can we disable caching of data retrieved from other peers, leave pinning disabled and never garbage collect? That would leave our storage with our data only.

@jbenet
Member

jbenet commented Aug 28, 2016

Thanks, this is very useful. It would be great to turn this into a test suite. Want to create a new repo and make some scripts to repro this? It would be awesome to have graphs of the relevant things to optimize (disk usage, speed, op time, etc.).

@em-ly added the topic/test failure label on Aug 29, 2016
@alikic
Author

alikic commented Aug 29, 2016

@jbenet Yes, I can move the code from our internal repo. Should it be under "github.com/ipfs"? Not sure about prettifying reports just yet, as we have other priorities.

I have two questions:

  1. I see the direct dependency between cat() time and the amount of data in the store, and the fact that it grows relatively fast, as a critical issue. Could you include the fix (removing the conditional GC on cat) in 0.4.3?
  2. Having to disable pinning to achieve good add() performance is a problem for us, for two reasons:
    2.a) In our model, an entity which hosts a peer keeps its own data pinned and periodically GCs others' data that gets cached as the entity cats hashes residing on other nodes. Is there another way to separate the entity's own data from data cached from other peers?
    2.b) When we want to remove an item from storage, we unpin it and then call GC (see the sketch after this list). Is there a way to remove an unpinned item without running GC (so that we don't remove all items, since now nothing will be pinned)?
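For context, a minimal sketch of the unpin-then-GC flow from 2.b, assuming go-ipfs-api's Shell.Unpin and the daemon's /api/v0/repo/gc HTTP endpoint on the default API address; the hash in main is a placeholder:

    package main

    import (
        "fmt"
        "net/http"

        shell "github.com/ipfs/go-ipfs-api"
    )

    // removeItem unpins one hash and then triggers a repo GC.
    func removeItem(sh *shell.Shell, hash string) error {
        // Drop the pin so the block becomes garbage-collectable.
        if err := sh.Unpin(hash); err != nil {
            return err
        }
        // Trigger a repo GC; note this sweeps *all* unpinned blocks,
        // which is exactly the concern raised in 2.b.
        resp, err := http.Post("http://127.0.0.1:5001/api/v0/repo/gc", "", nil)
        if err != nil {
            return err
        }
        return resp.Body.Close()
    }

    func main() {
        sh := shell.NewShell("localhost:5001")
        if err := removeItem(sh, "QmExampleHashHere"); err != nil {
            fmt.Println("remove failed:", err)
        }
    }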

@jbenet
Member

jbenet commented Aug 30, 2016

@alikic Any public repo will do fine. If we start depending on it a ton, then we may move it. Thanks.

@alikic
Author

alikic commented Sep 2, 2016

Here it is: https://github.com/securekey/ipfs-performance-test

Please try it out and let me know if you have problems.

@alikic
Author

alikic commented Sep 7, 2016

@jbenet Hi, have you been able to reproduce the issue with the provided test code? Any plans to include some performance fixes (e.g. removing the conditional GC on cat) in 0.4.3? Thanks.

@Kubuxu
Member

Kubuxu commented Sep 7, 2016

We removed the conditional GC in 0.4.4-dev; 0.4.3 is already on the edge of release, so we prefer to keep changes to a minimum.

@alikic
Author

alikic commented Sep 8, 2016

@Kubuxu Thanks. What about having to disable pinning to get stable (non-freezing) and fast Add()? Do you think it can be fixed soon? We currently rely on pinning to separate the node's "own" data from data cached from other peers. Is there any workaround to achieve this without pinning?

@whyrusleeping
Member

@alikic We will have to debug why pinning causes that 'freezing'. Once we have a better idea of why that happens, we can probably figure out a workaround.

@whyrusleeping added the status/deferred label on Sep 14, 2016
@rht
Contributor

rht commented Jan 31, 2017

@alikic I found your add-cat perf test suite. I have compiled various perf test suites scattered across many issue discussions at https://github.com/rht/sfpi-benchmark, with fancy graphs, standardizing how they should be implemented (I'm inclined toward a sharness-like format for rapid writing and succinctness, which is how most people had already written them anyway). I wonder if you could help me port https://github.com/securekey/ipfs-performance-test ?
