The decision engine buffers blocks in memory from the moment it tells the message queue that there is one more block to send. This means that if thousands of transfers are in flight, every block stays in memory until its message actually gets sent. A better approach would be to only read from disk when the message is about to be sent.

With this, we might lose some performance by having to go read from disk, but that is why I created ipfs/js-ipfs-repo#110 to add an LRU cache to the repo.
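For illustration, here is a minimal sketch of the lazy-read idea: the per-peer queue holds only CIDs, and the block bytes are read from the store at send time. The `PeerMessageQueue` name and the `blockstore.get(cid)` call are assumptions for the sketch, not the actual bitswap internals:

```js
// Sketch: queue CIDs instead of full blocks, so per-peer memory stays
// small and block data is only read when the message is being built.
class PeerMessageQueue {
  constructor (blockstore) {
    this.blockstore = blockstore // assumed to expose get(cid) -> Promise<block>
    this.cids = [] // pending CIDs only, not block data
  }

  addBlock (cid) {
    this.cids.push(cid) // O(1) memory per queued entry
  }

  async buildMessage () {
    // Blocks are read from the store only now, just before sending
    const blocks = await Promise.all(
      this.cids.map(cid => this.blockstore.get(cid))
    )
    this.cids = []
    return blocks
  }
}
```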
With the perf PR, reads from the store are much more efficient and only happen when preparing a message for sending. I don't believe the approach suggested here would help performance in any way.
Ok, this might be new then, because it wasn't like that before the perf PR: the full blocks were being passed around and put in the MessageQueue of each peer, waiting for async to pick them up.
> I don't believe the approach suggested here would help performance in any way.
You don't think that putting an LRU cache in front of reads in IPFS-Repo can help performance? Could you elaborate? Think about all the disk reads, or even worse, the case where the store is actually behind a network (e.g. S3).
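For context, a minimal sketch of what such a cache could look like: a small LRU wrapper in front of the blockstore, so repeated reads of hot blocks skip the disk or S3 round-trip. `cachedBlockstore` and the `blockstore.get(cid)` signature are hypothetical, not the js-ipfs-repo API; the LRU here is a plain Map-based one to keep the sketch dependency-free:

```js
// Sketch: wrap a blockstore so recently read blocks are served from
// an in-memory LRU cache instead of hitting the backing store.
function cachedBlockstore (blockstore, maxEntries = 1024) {
  const cache = new Map() // Map keeps insertion order, giving us LRU eviction

  return {
    async get (cid) {
      const key = cid.toString()
      if (cache.has(key)) {
        const block = cache.get(key)
        cache.delete(key) // re-insert to mark as most recently used
        cache.set(key, block)
        return block
      }
      const block = await blockstore.get(cid) // disk / network read
      cache.set(key, block)
      if (cache.size > maxEntries) {
        cache.delete(cache.keys().next().value) // evict the oldest entry
      }
      return block
    }
  }
}
```

The win would be biggest for remote stores, where each `get` is a network round-trip rather than a local disk read.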
As noted in #76:

> With this, we might lose some performance by having to go read from disk, but that is why I created ipfs/js-ipfs-repo#110 to add an LRU cache to the repo.