Prevent peer flooding inv request queue (redux) (redux) #7079
Currently it's possible for peers to send duplicate INVs for the same object and push the inv request time far back.
PR #4547 sought to address this, but was closed with the expectation that it would be replaced by the more comprehensive #4831; the latter hasn't yet reached escape velocity, though. I agree a comprehensive rework is better, but I think we should not let a simple improvement wait for it.
PR #6306 tried a simpler version, but by reusing setInventoryKnown it gets the logic somewhat wrong, e.g. not allowing duplicate INVs even for objects our mempool had retired but would now accept again.
Here I bring back #4547, add size limiting, and make an important change to the logic (which I believe was originally flawed, though no one seems to have commented on that in the original PR):
The earlier PR would remove the suppression as soon as the getdata was requested, rather than when the response was returned. This one removes it only when we decide not to getdata (because we already have the object) or when the getdata response returns. This means a peer can have only one in-flight INV for a particular tx at a time, which is what we want.
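The behavior described above can be sketched as a size-limited per-peer set of in-flight txids. This is a minimal illustration, not the actual Bitcoin Core code: the class and method names are assumptions, and std::string stands in for the real txid type.

```cpp
#include <cstddef>
#include <set>
#include <string>

// Hypothetical sketch: txids with an INV currently in flight for one peer.
// A duplicate INV is suppressed while its txid is in the set; the entry is
// erased only when we decide not to getdata, or when the getdata returns.
class InFlightInvFilter {
public:
    explicit InFlightInvFilter(std::size_t max_size) : m_max_size(max_size) {}

    // Returns true if the INV should be queued (and marks it in flight);
    // false if a request for this txid is already outstanding or the
    // size limit is hit.
    bool TryAddInv(const std::string& txid) {
        if (m_in_flight.count(txid)) return false;          // duplicate INV: suppress
        if (m_in_flight.size() >= m_max_size) return false; // size limit: drop
        m_in_flight.insert(txid);
        return true;
    }

    // Called when we decide not to getdata (we already have the object),
    // or when the getdata response arrives.
    void Release(const std::string& txid) { m_in_flight.erase(txid); }

private:
    std::size_t m_max_size;
    std::set<std::string> m_in_flight;
};
```

Erasing only on completion (rather than on request) is what enforces the one-in-flight-INV-per-tx invariant.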
Referenced this pull request on Nov 24, 2015.
That's a bug in CTxMemPool::Expire. It should clean the tx out of all peers' setInventoryKnown sets anyway; otherwise we won't try to re-inv it to peers when it comes in again.
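The suggested fix amounts to the following sketch. The Peer struct and ExpireTx helper are assumptions for illustration, not the real CTxMemPool::Expire; the point is only that expiry must touch every peer's known-inventory set.

```cpp
#include <set>
#include <string>
#include <vector>

// Hypothetical per-peer state: txids already announced to/by this peer.
struct Peer {
    std::set<std::string> setInventoryKnown;
};

// On mempool expiry, erase the txid from every peer's setInventoryKnown so
// that a later re-broadcast of the same tx is re-inv'd rather than skipped.
void ExpireTx(const std::string& txid, std::vector<Peer>& peers) {
    for (Peer& peer : peers) {
        peer.setInventoryKnown.erase(txid);
    }
}
```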
The timeouts must be set very conservatively; otherwise a congested node (e.g. one under DoS attack) can exhaust itself with repeated transmissions. You cannot count on the average case measured on a node with good connectivity.
I'm also not particularly interested if someone, with a bunch of work, can delay a transaction by a couple of minutes; it's not a very interesting attack compared to self-congestion collapse (which can happen absent any attacker at all). 2 minutes is the appropriate scaling from the 20-minute value used elsewhere, given that the maximum-size standard transaction is 1/10th the block size.
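The scaling argument above is just arithmetic; here it is spelled out as constants (the names are illustrative, not from the codebase, and the 20-minute and 1/10 figures are taken from the discussion):

```cpp
#include <chrono>

// The 20-minute timeout used elsewhere, scaled down by the ratio of the
// block size to the maximum-size standard transaction (~1/10th of a block),
// gives the 2-minute per-tx value.
constexpr std::chrono::minutes kBlockTimeout{20};
constexpr int kMaxStdTxFractionOfBlock = 10; // max standard tx ~ block/10
constexpr std::chrono::minutes kTxInvTimeout =
    kBlockTimeout / kMaxStdTxFractionOfBlock; // 20 min / 10 = 2 min
```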