This repository has been archived by the owner on Feb 24, 2021. It is now read-only.

Make js-libp2p-secio faster so that js-ipfs is faster 🚀 #96

Closed
daviddias opened this issue Jan 7, 2018 · 11 comments

Comments

@daviddias
Member

I've expanded secio's benchmarks (https://github.com/libp2p/js-libp2p-secio/blob/master/benchmarks/send.js). The results (on my machine) are:

» npm run benchmark

> libp2p-secio@0.9.0 benchmark /Users/imp/code/js-libp2p-secio
> node benchmarks/send.js

create peers for test x 782 ops/sec ±1.72% (77 runs sampled)
establish an encrypted channel x 130 ops/sec ±1.83% (75 runs sampled)
send plaintext 10 x 262144 bytes x 2,529 ops/sec ±2.02% (34 runs sampled)
send encrypted 10 x 262144 bytes x 33.95 ops/sec ±1.76% (79 runs sampled)
send plaintext 100 x 262144 bytes x 309 ops/sec ±1.56% (18 runs sampled)
send encrypted 100 x 262144 bytes x 4.29 ops/sec ±1.64% (25 runs sampled)
send plaintext 1000 x 262144 bytes x 31.03 ops/sec ±1.68% (71 runs sampled)
send encrypted 1000 x 262144 bytes x 0.42 ops/sec ±3.56% (7 runs sampled)

secio is currently one of the biggest consumers of memory and time during the execution of a js-ipfs node. A performance boost in this module (or in libp2p-crypto) will bring a significant boost to js-ipfs as well.

@dignifiedquire
Member

@diasdavid in your experience, is this more of an issue in Node.js or in browser land?

@pgte
Contributor

pgte commented May 29, 2018

Perhaps you'd like to capture some flamegraphs? If so, you could try running it through 0x.
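For reference, a minimal 0x invocation against the existing benchmark might look like this (a sketch; the exact profile directory name is generated per run):

```shell
# Capture a flamegraph of the send benchmark with 0x.
# Using npx avoids a global install.
npx 0x benchmarks/send.js

# 0x writes an HTML flamegraph into a profile directory
# (e.g. <pid>.0x/flamegraph.html); open it in a browser
# to see which stacks dominate CPU time.
```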

@mkg20001
Member

mkg20001 commented Nov 7, 2019

I did some benchmarks.

master (my machine):

create peers for test x 303 ops/sec ±22.23% (74 runs sampled)
establish an encrypted channel x 103 ops/sec ±1.26% (77 runs sampled)
send plaintext 10 x 262144 bytes x 3,575 ops/sec ±2.81% (71 runs sampled)
send encrypted 10 x 262144 bytes x 27.70 ops/sec ±1.87% (65 runs sampled)
send plaintext 100 x 262144 bytes x 482 ops/sec ±3.55% (46 runs sampled)
send encrypted 100 x 262144 bytes x 4.05 ops/sec ±1.95% (24 runs sampled)
send plaintext 1000 x 262144 bytes x 49.43 ops/sec ±1.83% (75 runs sampled)
send encrypted 1000 x 262144 bytes x 0.45 ops/sec ±1.36% (7 runs sampled)

async/await & async iterators (my machine):

馃 drumroll

create peers for test x 4,966 ops/sec ±2.52% (61 runs sampled)
establish an encrypted channel x 6,196 ops/sec ±4.04% (74 runs sampled)
send plaintext 10 x 262144 bytes x 21,165 ops/sec ±1.96% (77 runs sampled)
send encrypted 10 x 262144 bytes x 6,195 ops/sec ±3.47% (72 runs sampled)
send plaintext 100 x 262144 bytes x 21,107 ops/sec ±28.60% (5 runs sampled)
send encrypted 100 x 262144 bytes x 6,268 ops/sec ±3.02% (76 runs sampled)

@mkg20001
Member

mkg20001 commented Nov 7, 2019

Seems like this problem should magically go away once we use async iterators
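For context, the async-iterator style replaces stream plumbing with plain async generators. A minimal sketch of such a pipeline (illustrative only, not the actual secio code; the transform here is a stand-in for an encrypt step):

```javascript
// A source: yields chunks as an async iterable
async function * source (chunks) {
  for (const chunk of chunks) yield chunk
}

// A transform: a stand-in for a per-chunk encrypt step
async function * toUpper (input) {
  for await (const chunk of input) yield chunk.toUpperCase()
}

// A sink: drains the iterable and collects the result
async function collect (input) {
  const out = []
  for await (const chunk of input) out.push(chunk)
  return out
}

async function main () {
  const result = await collect(toUpper(source(['hello', 'secio'])))
  console.log(result) // [ 'HELLO', 'SECIO' ]
}

main()
```

Because each stage is just a generator pulling from the previous one, there is no per-chunk callback or listener machinery, which is part of why this style benchmarks so differently from the old stream-based code.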

@daviddias
Member Author

Readable Streams have indeed historically been slow.

@mkg20001 make sure to run tests all the way to ipfs/interop to not miss anything

@alanshaw
Member

send encrypted 100 x 262144 bytes

...from 4/sec to 6,000/sec? No way...I don't believe it.

@mkg20001
Member

...from 4/sec to 6,000/sec? No way...I don't believe it.

There could be some trouble with the way benchmark.js handles async testing, as it will often launch stuff in parallel.

Maybe v8 applies too much magic and skews the results.

Maybe async overload also skews some timeouts.

Or... maybe it's really that fast.

We'll see once it's in master. But faster it is, that's for sure! 🎉

@daviddias
Member Author

@mkg20001 have you npm link'ed this new version into js-ipfs to run its tests and also run the ipfs/interop tests?
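For reference, the npm link workflow being suggested looks roughly like this (a sketch; adjust paths to the local checkouts):

```shell
# In the local js-libp2p-secio checkout: register it globally
cd js-libp2p-secio
npm link

# In the js-ipfs checkout: point its libp2p-secio dependency
# at the local copy, then run the test suite against it
cd ../js-ipfs
npm link libp2p-secio
npm test
```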

@jacobheun
Contributor

@mkg20001 have you npm link'ed this new version into js-ipfs to run its tests and also run the ipfs/interop tests?

That won't be able to happen until we start getting libp2p integrated into js-ipfs. The async changes will need to be incorporated before we can do the full interop tests, as the interfaces have changed.

@daviddias
Member Author

That's scary. Is there a way that js-libp2p is testing itself against the other implementations that we can safely rely on (e.g. that would replace the need to run the interop tests)?

@jacobheun
Contributor

We have interop tests for js and go libp2p (https://github.com/libp2p/interop) that we will be running earlier. We'll mitigate any risk and test things thoroughly before we go live, including running the full gamut of tests for ipfs.
