When processing many blocks that are already finalized and beyond a given threshold M, some blocks could be squashed together or otherwise omitted so that they do not incur as much IO.
Potential solutions:
change the flusher in Blockchain so that when it notices there are more than N blocks (N = history depth), it squashes N blocks, applying them in the same transaction. This would greatly reduce the number of writes without changing the way blocks are flushed.
make the flusher use an approach like Paprika.Importer, where the memory-mapped file is not written during processing and is flushed only at the end
others...
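The first option could be sketched roughly as follows. This is a minimal illustration in Python, not the actual Paprika code: Block, Flusher, and the db/transaction API are hypothetical stand-ins, assuming later writes to the same key simply overwrite earlier ones.

```python
from dataclasses import dataclass

# Hypothetical types for illustration only; not the real Paprika API.

@dataclass
class Block:
    number: int
    writes: dict  # key -> value pairs produced by this block

class Flusher:
    def __init__(self, history_depth: int, db):
        self.n = history_depth
        self.db = db
        self.pending: list[Block] = []

    def on_finalized(self, block: Block) -> None:
        self.pending.append(block)
        # Once more than N blocks are queued, squash the oldest N of them.
        if len(self.pending) > self.n:
            batch, self.pending = self.pending[: self.n], self.pending[self.n :]
            self._squash_and_flush(batch)

    def _squash_and_flush(self, batch: list[Block]) -> None:
        # Later writes to the same key overwrite earlier ones, so the
        # squashed set holds at most one write per key.
        squashed: dict = {}
        for block in batch:
            squashed.update(block.writes)
        # Apply all surviving writes in a single transaction -> one flush
        # instead of one per block.
        with self.db.begin() as tx:
            for key, value in squashed.items():
                tx.put(key, value)
```

The IO saving comes from two places: intermediate values of a key rewritten across the batch never hit disk, and the batch commits once rather than N times.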
I think that with the 1st option, we can go safe and fast, especially if the marking as finalized happens in large chunks. Squash every kN + 1 blocks (the +1 to move to another slot) and write only then.
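The kN + 1 chunking suggested above could look like this. Again a hypothetical Python sketch, not Paprika code: the accumulator buffers finalized blocks' writes and squashes to a single write set only once a full chunk of k*N + 1 blocks has arrived.

```python
class ChunkedSquasher:
    """Accumulate finalized blocks; squash and write once per k*N + 1 blocks.

    Illustrative only: n plays the role of the history depth, k the chunk
    multiplier, and the +1 stands for moving the head to another slot
    before writing.
    """

    def __init__(self, n: int, k: int):
        self.batch_size = k * n + 1
        self.pending: list[dict] = []
        self.flushes: list[dict] = []  # each entry = one squashed write set

    def on_finalized(self, writes: dict) -> None:
        self.pending.append(writes)
        if len(self.pending) == self.batch_size:
            squashed: dict = {}
            for w in self.pending:
                # Later blocks win on key conflicts.
                squashed.update(w)
            self.flushes.append(squashed)
            self.pending.clear()
```

With a large k, most finalized blocks never trigger a write at all, which matches the "write only then" idea: the cost is paid once per chunk instead of once per block.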