[node] memory problems due to large BCH blocks #2375
Comments
Wow, thanks for the detailed analysis. I'm having the exact same problem. Running Bitcore on a beefy desktop server, but it's crashing when trying to process BCH testnet.
Thanks for digging into this, man. I haven't had a chance, so this really helps! I think we definitely need to harden all usages of `$in`. Your solution sounds good; thanks again for looking into it.
I tried out PR #2376, but it didn't seem to fix anything.
@christroutner the PR only implements the fix for the second problem. If you're hitting out-of-heap errors, make sure you first increase the node's default heap size as I mentioned above. Check this commit cwcrypto@3e9af7e to see how to increase the heap size. I'm currently using an 8GB heap, so make sure your server can afford that, or try something smaller (the default is 1GB).
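If you want to confirm which heap limit is actually in effect, a small sketch like the following can help (the file name is illustrative; run it with Node's real `--max-old-space-size` flag, whose value is in MB):

```typescript
// check-heap.ts: print the V8 heap limit currently in effect.
// Run with e.g. `node --max-old-space-size=8192 check-heap.js` to raise
// the limit to ~8GB. Without the flag, the default is much lower
// (roughly 1 to 4 GB depending on Node version and system memory).
import * as v8 from 'v8';

const limitMB = Math.round(v8.getHeapStatistics().heap_size_limit / (1024 * 1024));
console.log(`heap limit: ~${limitMB} MB`);
```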
I spent some time today working on this issue. I'm able to sync past the big blocks now without increasing the heap size or memory. I haven't yet verified that the wallets are still being tagged correctly, though.
This problem is different from #1475 since it's not caused by any sort of memory leak, but rather by BCH's large blocks. My bitcore node has recently been failing to sync BCH on testnet starting from block #1326451 (a full 32MB block). The failure is mostly due to the large number of transactions this block holds, and more specifically to the way the `mintOps` are processed per block in the node. There are actually two problems that I noticed; both happen in the function `getMintOps`:
bitcore/packages/bitcore-node/src/models/transaction.ts (line 352 in b37e0d6)
The first one is that the array of `mintOps`, defined here:

bitcore/packages/bitcore-node/src/models/transaction.ts (line 363 in b37e0d6)
gets really large when handling a full 32MB block (since we add a `mintOp` per output per transaction in the block). I've seen the number of items reach about 550K ops, which causes the node to run out of heap and crash with the following error (reported in #1475):

A quick fix for this problem is to override the node's default heap size (about 1GB) with a larger one (I've tested with 8GB). If that's not an option, then the `mintOps` logic needs to be refactored so that it doesn't have to hold all the mint ops in memory at once.

Once I got this problem out of the way, the node no longer ran out of heap, but instead threw the following exception (without crashing the node):
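The refactoring suggested above (not holding all mint ops in memory at once) could look something like the sketch below. All names here are illustrative, not the actual bitcore code; the idea is simply to flush mint ops to the database in fixed-size batches instead of accumulating ~550K of them in one array:

```typescript
// Illustrative sketch: process mint ops in fixed-size batches so peak
// memory is bounded by `batchSize` rather than by the block size.
interface MintOp {
  txid: string;
  vout: number;
  address: string;
}

async function processMintOpsInBatches(
  ops: Iterable<MintOp>,
  batchSize: number,
  write: (batch: MintOp[]) => Promise<void>
): Promise<void> {
  let batch: MintOp[] = [];
  for (const op of ops) {
    batch.push(op);
    if (batch.length >= batchSize) {
      await write(batch); // flush and release the batch
      batch = [];
    }
  }
  if (batch.length > 0) {
    await write(batch); // flush the final partial batch
  }
}
```

This trades one large bulk write for several smaller ones, which may change write-atomicity characteristics, so error handling around partially written blocks would need care.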
This error is thrown every time the p2p worker for BCH tries to sync that large block (the whole sync process for BCH gets stuck because of it). After some investigation, it turned out that this error is a side effect of the large `mintOps` list and the way this db command is constructed:
list and the way this db command is constructedbitcore/packages/bitcore-node/src/models/transaction.ts
Line 436 in 61ddc54
Due to the large array of `mintOps`, the set of unique addresses `mintOpsAddresses` is large too, and this seems to cause mongo to hit `ERR_BUFFER_OUT_OF_BOUNDS` when it tries to serialize the list of addresses down the stack.

So the fix I've tried is splitting the list of addresses according to `maxPoolSize`, similar to how it's done here:
, similar to how its done herebitcore/packages/bitcore-node/src/models/transaction.ts
Lines 142 to 143 in 61ddc54
and it seems to fix the problem, since it limits the number of addresses per query and avoids the buffer issue.
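A minimal sketch of that splitting approach (the `chunk` helper and the query callback are hypothetical stand-ins, not the actual bitcore implementation):

```typescript
// Illustrative sketch: split a large address list into chunks so no
// single `$in` query has to serialize ~550K addresses at once.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Hypothetical usage: one wallet lookup per chunk instead of one giant
// query. `maxChunkSize` would be derived from maxPoolSize in practice.
async function findWalletsForAddresses(
  addresses: string[],
  maxChunkSize: number,
  query: (batch: string[]) => Promise<string[]>
): Promise<string[]> {
  const results: string[] = [];
  for (const batch of chunk(addresses, maxChunkSize)) {
    results.push(...(await query(batch)));
  }
  return results;
}
```

Since the union of per-chunk results equals the result of the single large query for an `$in` lookup, this split should be behavior-preserving apart from ordering.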
Any thoughts on whether there's a better way to handle the second problem?