p2p: make block download logic aware of limited peers threshold #28120

Merged (3 commits) on Mar 11, 2024
49 changes: 33 additions & 16 deletions src/net_processing.cpp
@@ -1451,6 +1451,7 @@ void PeerManagerImpl::FindNextBlocks(std::vector<const CBlockIndex*>& vBlocks, c
 {
     std::vector<const CBlockIndex*> vToFetch;
     int nMaxHeight = std::min<int>(state->pindexBestKnownBlock->nHeight, nWindowEnd + 1);
+    bool is_limited_peer = IsLimitedPeer(peer);
Contributor:

nit: `const bool is_limited_peer{IsLimitedPeer(peer)};`

     NodeId waitingfor = -1;
     while (pindexWalk->nHeight < nMaxHeight) {
         // Read up to 128 (or more, if more blocks than that are needed) successors of pindexWalk (towards
@@ -1473,30 +1474,46 @@ void PeerManagerImpl::FindNextBlocks(std::vector<const CBlockIndex*>& vBlocks, c
                 // We consider the chain that this peer is on invalid.
                 return;
             }

             if (!CanServeWitnesses(peer) && DeploymentActiveAt(*pindex, m_chainman, Consensus::DEPLOYMENT_SEGWIT)) {
                 // We wouldn't download this block or its descendants from this peer.
                 return;
             }

             if (pindex->nStatus & BLOCK_HAVE_DATA || (activeChain && activeChain->Contains(pindex))) {
-                if (activeChain && pindex->HaveNumChainTxs())
+                if (activeChain && pindex->HaveNumChainTxs()) {
                     state->pindexLastCommonBlock = pindex;
+                }
+                continue;
+            }
-            } else if (!IsBlockRequested(pindex->GetBlockHash())) {
-                // The block is not already downloaded, and not yet in flight.
-                if (pindex->nHeight > nWindowEnd) {
-                    // We reached the end of the window.
-                    if (vBlocks.size() == 0 && waitingfor != peer.m_id) {
-                        // We aren't able to fetch anything, but we would be if the download window was one larger.
-                        if (nodeStaller) *nodeStaller = waitingfor;
-                    }
-                    return;
-                }
-                vBlocks.push_back(pindex);
-                if (vBlocks.size() == count) {
-                    return;
-                }
-            } else if (waitingfor == -1) {
-                // This is the first already-in-flight block.
-                waitingfor = mapBlocksInFlight.lower_bound(pindex->GetBlockHash())->second.first;
-            }
+
+            // Is block in-flight?
+            if (IsBlockRequested(pindex->GetBlockHash())) {
+                if (waitingfor == -1) {
+                    // This is the first already-in-flight block.
+                    waitingfor = mapBlocksInFlight.lower_bound(pindex->GetBlockHash())->second.first;
+                }
+                continue;
+            }
+
+            // The block is not already downloaded, and not yet in flight.
+            if (pindex->nHeight > nWindowEnd) {
+                // We reached the end of the window.
+                if (vBlocks.size() == 0 && waitingfor != peer.m_id) {
+                    // We aren't able to fetch anything, but we would be if the download window was one larger.
+                    if (nodeStaller) *nodeStaller = waitingfor;
+                }
+                return;
+            }
+
+            // Don't request blocks that go further than what limited peers can provide
+            if (is_limited_peer && (state->pindexBestKnownBlock->nHeight - pindex->nHeight >= static_cast<int>(NODE_NETWORK_LIMITED_MIN_BLOCKS) - 2 /* two blocks buffer for possible races */)) {
Member (@naumenkogs) commented on ea3138b:

might as well jump pindex forward here, so that the second condition is satisfied?

furszy (Member, Author) commented on Dec 29, 2023:

I'm not sure what you are referring to. Could you rephrase it, please?
Do you want to update pindexLastCommonBlock to pindex?

Contributor:

I think what @naumenkogs is referring to is that if this condition is true, then pindex will hit this condition on the first pindex in the loop and on every pindex up until the condition is no longer true. This can be made more efficient by just bumping pindex to the first pindex that is past the network-limited blocks threshold.

So, we can either change the loop from range-based to a regular for-loop and advance the index, or move this check out of this loop and into the loop on line 1464 and break early if the condition is met.
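The optimization being discussed can be sketched in a few lines of standalone Python (hypothetical helpers, not actual Core code): instead of testing every candidate block against the threshold, compute the first height the limited peer can still serve and jump straight to it. Both approaches select the same candidates when the height list is sorted.

```python
import bisect

NODE_NETWORK_LIMITED_MIN_BLOCKS = 288
RACE_BUFFER = 2  # the "two blocks buffer for possible races" from the patch

def first_servable_height(peer_best_height: int) -> int:
    """Lowest height a limited peer is expected to still serve."""
    return peer_best_height - (NODE_NETWORK_LIMITED_MIN_BLOCKS - RACE_BUFFER) + 1

def skip_loop(heights, peer_best):
    # Current approach: test the threshold on every candidate.
    return [h for h in heights
            if peer_best - h < NODE_NETWORK_LIMITED_MIN_BLOCKS - RACE_BUFFER]

def jump_forward(heights, peer_best):
    # Proposed follow-up: bump straight to the first servable height.
    start = bisect.bisect_left(heights, first_servable_height(peer_best))
    return heights[start:]
```

Whether the jump actually wins depends on how often the guard fires, which is exactly the benchmarking question raised below.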

furszy (Member, Author):

Ok, we are on the same page now. Thanks. Let's leave it for a follow-up. It would be nice to re-think this function and benchmark it properly. At first glance, I think we would be optimizing the function for the less likely case and might degrade performance for the likely ones.

+                continue;
+            }
+
+            vBlocks.push_back(pindex);
+            if (vBlocks.size() == count) {
+                return;
+            }
         }
     }
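The new guard can be modeled in a few lines of Python (a sketch with made-up names, not the Core implementation): a NODE_NETWORK_LIMITED peer only serves its most recent 288 blocks, so the downloader must treat anything deeper than that, minus a two-block race buffer, as unavailable from that peer.

```python
NODE_NETWORK_LIMITED_MIN_BLOCKS = 288
RACE_BUFFER = 2  # "two blocks buffer for possible races"

def is_historical(block_height: int, peer_best_height: int) -> bool:
    """True when a limited peer should not be asked for this block."""
    return peer_best_height - block_height >= NODE_NETWORK_LIMITED_MIN_BLOCKS - RACE_BUFFER

def fetchable_heights(heights, peer_best_height, is_limited_peer):
    """Mimic the loop's new behavior: skip heights a limited peer cannot serve."""
    return [h for h in heights
            if not (is_limited_peer and is_historical(h, peer_best_height))]
```

For a limited peer at height 1000, heights up to 714 are skipped (1000 - 714 = 286 = 288 - 2), while 715 and above are still requested; for a full peer nothing is skipped.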
63 changes: 63 additions & 0 deletions test/functional/p2p_node_network_limited.py
@@ -21,8 +21,12 @@
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import (
    assert_equal,
    assert_raises_rpc_error,
    try_rpc
)

# Minimum blocks required to signal NODE_NETWORK_LIMITED #
NODE_NETWORK_LIMITED_MIN_BLOCKS = 288

class P2PIgnoreInv(P2PInterface):
    firstAddrnServices = 0
@@ -54,6 +58,63 @@ def setup_network(self):
        self.add_nodes(self.num_nodes, self.extra_args)
        self.start_nodes()
    def test_avoid_requesting_historical_blocks(self):
        self.log.info("Test full node does not request blocks beyond the limited peer threshold")
        pruned_node = self.nodes[0]
        miner = self.nodes[1]
        full_node = self.nodes[2]

        # Connect and generate a block to ensure IBD=false
        self.connect_nodes(1, 0)
        self.connect_nodes(1, 2)
        self.generate(miner, 1)

        # Verify peers are out of IBD
        for node in self.nodes:
            assert not node.getblockchaininfo()['initialblockdownload']

        # Isolate full_node (the node will remain out of IBD)
        full_node.setnetworkactive(False)
        self.wait_until(lambda: len(full_node.getpeerinfo()) == 0)

        # Mine blocks and sync the pruned node. Surpass the NODE_NETWORK_LIMITED threshold.
        # Blocks deeper than the threshold are considered "historical blocks"
        num_historical_blocks = 12
        self.generate(miner, NODE_NETWORK_LIMITED_MIN_BLOCKS + num_historical_blocks, sync_fun=self.no_op)
        self.sync_blocks([miner, pruned_node])

        # Connect full_node to pruned_node and check that the peers don't disconnect right away.
        # (they would disconnect if full_node, which is chain-wise behind, requested blocks
        # older than NODE_NETWORK_LIMITED_MIN_BLOCKS)
        start_height_full_node = full_node.getblockcount()
        full_node.setnetworkactive(True)
        self.connect_nodes(2, 0)
        assert_equal(len(full_node.getpeerinfo()), 1)

        # Wait until the full_node is synced headers-wise
        best_block_hash = pruned_node.getbestblockhash()
        self.wait_until(lambda: next(filter(lambda x: x['hash'] == best_block_hash, full_node.getchaintips()))['status'] == "headers-only")

        # Now, since the node aims to download a window of 1024 blocks,
        # ensure it requests only the blocks below the threshold (with a
        # 2-block buffer) and does not request any historical block.
        tip_height = pruned_node.getblockcount()
        limit_buffer = 2
        # Prevent races by waiting for the tip to arrive first
        self.wait_until(lambda: not try_rpc(-1, "Block not found", full_node.getblock, pruned_node.getbestblockhash()))
        for height in range(start_height_full_node + 1, tip_height + 1):
            if height <= tip_height - (NODE_NETWORK_LIMITED_MIN_BLOCKS - limit_buffer):
                assert_raises_rpc_error(-1, "Block not found on disk", full_node.getblock, pruned_node.getblockhash(height))
            else:
                full_node.getblock(pruned_node.getblockhash(height))  # just assert it does not throw an exception

        # Lastly, ensure the full_node is not synced and verify it can get synced by
        # establishing a connection with another full node capable of providing the blocks.
        assert_equal(full_node.getblockcount(), start_height_full_node)
        self.connect_nodes(2, 1)
        self.sync_blocks([miner, full_node])
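The assertion loop above hinges on one piece of arithmetic: with a 288-block threshold and a 2-block buffer, exactly the last 286 blocks of the pruned peer's chain are expected on the full node's disk, and everything older must be absent. A standalone sketch of that partition, using the heights this test produces (1 block mined before isolation, then 288 + 12 more, so the tip ends at height 301):

```python
NODE_NETWORK_LIMITED_MIN_BLOCKS = 288
LIMIT_BUFFER = 2

def partition_heights(start_height, tip_height):
    """Split the full node's missing range into blocks it must NOT have
    fetched (historical) and blocks it is expected to have fetched."""
    boundary = tip_height - (NODE_NETWORK_LIMITED_MIN_BLOCKS - LIMIT_BUFFER)
    missing = range(start_height + 1, tip_height + 1)
    historical = [h for h in missing if h <= boundary]
    expected = [h for h in missing if h > boundary]
    return historical, expected
```

With start_height=1 and tip_height=301 the boundary is 301 - 286 = 15, so heights 2 through 15 must raise "Block not found on disk" while heights 16 through 301 (286 blocks) must be served.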

    def run_test(self):
        node = self.nodes[0].add_p2p_connection(P2PIgnoreInv())

@@ -118,5 +179,7 @@ def run_test(self):
        # sync must be possible, node 1 is no longer in IBD and should therefore connect to node 0 (NODE_NETWORK_LIMITED)
        self.sync_blocks([self.nodes[0], self.nodes[1]])

        self.test_avoid_requesting_historical_blocks()
if __name__ == '__main__':
NodeNetworkLimitedTest().main()