This is more of a mental note to remind me to fix these things later.
The mempool needs to be topologically sorted before being processed. Otherwise the Bloom filter updating logic doesn't work right: if transaction A pays to the client's wallet and its output is then spent by transaction B, and mempool.queryHashes returns them in the order B, A, the filter will match A but miss B, because A's outpoint hasn't been inserted into the filter yet when B is checked.
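A minimal sketch of the parent-before-child ordering meant here, assuming a simplified `Tx` struct with string txids rather than the real CTransaction/uint256 types:

```cpp
// Sketch: topologically sort a set of mempool transactions so that parents
// come before children. Tx and its string hashes are simplified stand-ins.
#include <map>
#include <set>
#include <string>
#include <vector>

struct Tx {
    std::string hash;                 // stand-in for the txid
    std::vector<std::string> inputs;  // txids this transaction spends
};

// Depth-first traversal: emit a transaction only after every in-pool parent
// has been emitted, so a matched parent is processed (and the filter updated)
// before any child that spends it is checked.
static void Visit(const std::string& hash,
                  const std::map<std::string, Tx>& pool,
                  std::set<std::string>& visited,
                  std::vector<std::string>& out)
{
    if (!visited.insert(hash).second) return;  // already emitted
    auto it = pool.find(hash);
    if (it == pool.end()) return;              // parent not in the mempool
    for (const std::string& parent : it->second.inputs)
        Visit(parent, pool, visited, out);
    out.push_back(hash);
}

std::vector<std::string> TopoSort(const std::map<std::string, Tx>& pool)
{
    std::set<std::string> visited;
    std::vector<std::string> out;
    for (const auto& entry : pool)
        Visit(entry.first, pool, visited, out);
    return out;
}
```

Processing the hashes in this order means that when B is reached, A has already been seen and the filter has already been updated with A's matching outpoints.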
It will refuse to send more invs than MAX_INV_SZ. That limit is 50,000 entries, so it shouldn't be a practical problem anytime soon.
It does not update setInventoryKnown. If you issue a mempool command at the start of a connection and download all the resulting transactions, a subsequent filtered block will resend the same transactions even though you have already seen them.
The PushInventory() call already solves these problems, so it probably makes sense to reuse it here (see the sketch below).
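A rough sketch of the PushInventory()-style pattern for the last two points: a per-peer known-inventory set plus a send queue drained in MAX_INV_SZ-sized chunks. `Peer`, the string hashes, and `BuildInvMessages` are simplified stand-ins for the real CNode/CInv machinery, not the actual implementation:

```cpp
// Sketch: per-peer inventory announcement with dedup and batching.
#include <cstddef>
#include <set>
#include <string>
#include <utility>
#include <vector>

static const std::size_t MAX_INV_SZ = 50000;  // protocol cap on entries per inv message

struct Peer {
    std::set<std::string> setInventoryKnown;    // hashes this peer already has
    std::vector<std::string> vInventoryToSend;  // hashes queued for the next inv

    // Queue a hash only if the peer has not already seen it.
    void PushInventory(const std::string& hash) {
        if (!setInventoryKnown.count(hash))
            vInventoryToSend.push_back(hash);
    }

    // Drain the queue, marking each hash as known and splitting the output
    // into inv messages no larger than MAX_INV_SZ.
    std::vector<std::vector<std::string>> BuildInvMessages() {
        std::vector<std::vector<std::string>> msgs;
        std::vector<std::string> current;
        for (const std::string& hash : vInventoryToSend) {
            if (!setInventoryKnown.insert(hash).second)
                continue;                        // became known since it was queued
            current.push_back(hash);
            if (current.size() == MAX_INV_SZ) {  // stay under the protocol cap
                msgs.push_back(std::move(current));
                current.clear();
            }
        }
        if (!current.empty()) msgs.push_back(std::move(current));
        vInventoryToSend.clear();
        return msgs;
    }
};
```

Because the known set and the size cap live on the peer, a mempool reply and a later filtered block go through the same bookkeeping, so neither can resend transactions the peer has already been given.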