
Conversation

sipa
Member

@sipa sipa commented Sep 21, 2025

Change the number of transactions sent per transaction INV trickle to:

INVENTORY_BROADCAST_TARGET + queue.size() * (1 - exp(-time_in_queue / INVENTORY_AVG_TIME_IN_QUEUE))

where:

  • INVENTORY_BROADCAST_TARGET is the current INVENTORY_BROADCAST_PER_SECOND-derived constant 70.
  • INVENTORY_AVG_TIME_IN_QUEUE is a new constant (60s).
  • time_in_queue is the time since the last INV trickle.

The second term here implements an exponentially-decaying queue size with time constant INVENTORY_AVG_TIME_IN_QUEUE, so under high constant load, the queue size will converge to however many transactions arrived within that time (60s).

To account for the fact that only an integral number of inventories can be sent, time_in_queue is adjusted so that it corresponds to the time at which the actual (integer) number of transactions sent in the previous trickle would have matched the formula.
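The formula and the integer adjustment described above can be sketched in Python (a hypothetical illustration, not the PR's actual C++ code; both helper names are invented):

```python
import math

# Constants from the PR description.
INVENTORY_BROADCAST_TARGET = 70     # base invs per trickle
INVENTORY_AVG_TIME_IN_QUEUE = 60.0  # decay time constant, seconds

def invs_to_send(queue_size, time_in_queue):
    """Inventory entries the formula yields for one trickle (sketch)."""
    raw = INVENTORY_BROADCAST_TARGET + queue_size * (
        1.0 - math.exp(-time_in_queue / INVENTORY_AVG_TIME_IN_QUEUE))
    # Only whole invs can be sent, and never more than are queued.
    return min(queue_size, int(raw))

def adjusted_time(sent, queue_size):
    """Invert the formula for time_in_queue given the integer count actually
    sent, so the rounding remainder carries into the next trickle.
    Requires 0 < sent - INVENTORY_BROADCAST_TARGET < queue_size."""
    return -INVENTORY_AVG_TIME_IN_QUEUE * math.log(
        1.0 - (sent - INVENTORY_BROADCAST_TARGET) / queue_size)
```

For example, with 1000 queued invs and a 60 s gap, the formula yields 70 + 1000·(1 − e⁻¹) ≈ 702.1, so 702 invs are sent and the adjusted time works out slightly under 60 s, carrying the ~0.1 remainder forward.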

@DrahtBot
Contributor

DrahtBot commented Sep 21, 2025

The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Code Coverage & Benchmarks

For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/33449.

Reviews

See the guideline for information on the review process.
A summary of reviews will appear here.

Conflicts

Reviewers, this pull request conflicts with the following ones:

  • #30277 ([DO NOT MERGE] Erlay: bandwidth-efficient transaction relay protocol (Full implementation) by sr-gi)
  • #30116 (p2p: Fill reconciliation sets (Erlay) attempt 2 by sr-gi)

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

@sipa sipa changed the title txgraph: exponentially-decaying tx inventory queue Exponentially-decaying tx inventory queue Sep 21, 2025
@sipa sipa force-pushed the 202509_exp_decay_inv_queue branch from 95dc0a6 to d1af551 Compare September 21, 2025 12:03
@ajtowns
Contributor

ajtowns commented Sep 22, 2025

It's not the queue that's exponentially decaying, but the inv queue drain rate tracking the relay rate with an exponential decay? shrug

The previous PR, #33448, may be helpful for observing the behaviour of this PR in the wild. I use the following command occasionally:

bitcoin-cli getpeerinfo | jq -j '.[] | (.inv_to_send, " ", .last_inv_sequence, " ", .connection_type, " ", .id, " ", .relaytxes, " ", .synced_blocks, " ", (.network|sub("not_publicly_routable";"onion")), " inv-rx:", ((.bytesrecv_per_msg.inv//0)/1024|floor), "kB inv-tx:", ((.bytessent_per_msg.inv//0)/1024|floor), "kB gd-rx:", ((.bytesrecv_per_msg.getdata//0)/1024|floor), "kB nf-tx:", ((.bytessent_per_msg.notfound//0)/1024|floor), "kB blktxn:", ((.bytessent_per_msg.blocktxn//0)/1024|floor), "kB minfee:", (.minfeefilter*1e6|floor)/10, "s/vb ", (.subver|gsub(" |(^$)"; "_")), " ", .addr, "\n")' | sort -rn | column -t

I believe the behaviour we would ideally like here is for peers' inv queues to be fairly constrained (peaking at, say, 500-2000; values closer to 20,000 or 50,000 are likely to cause observable performance problems). The inv queue is already constrained by the size of the mempool, so extreme cases will only be observable with large, non-empty mempools.

But equally we'd like peers' INV messages to be fairly stable in size; since the network can only confirm 7-15 txs/second, there's not much value in relaying more txs than that on an ongoing basis, as many of the txs will simply expire or be rbf'ed before being confirmed. So having a non-zero queue to smooth out the number of txs we relay per second is helpful -- even delaying relay by a short period may be enough time for that tx to be rbf'ed and never need relaying by most of the network (eg, when a public auction is occurring and people are submitting bids by rbf'ing their own bids or other bids, eg for airdrop mints and Babylon staking).

I think for monitoring the latter, it might be good to add a per-peer map from txs-per-inv to number-of-invs and see how that behaves on mainnet over an extended period?

@sipa
Member Author

sipa commented Sep 22, 2025

@ajtowns What I meant is that it is actually the queue itself whose size decays exponentially, though only in a fairly constrained setting.

I believe the following statement holds:

If at time $T$ the size of the queue is $N$, and after that, no more than $B$ inv elements get added to the queue per trickle, then after any trickle occurring at time $T+t$, the size of the queue is bounded by $\left\lceil N \exp(-t/A) \right\rceil$, regardless of how many trickles happened in between, or when those happened.

(with $B$ = INVENTORY_BROADCAST_TARGET, $A$ = INVENTORY_AVG_TIME_IN_QUEUE.)
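Ignoring integer rounding (which the ceiling in the bound absorbs), this can be checked numerically: a real-valued version of the queue update, with $B$ arrivals per trickle and the formula's send amount, leaves at most $N e^{-t/A}$ after total elapsed time $t$, however the trickles are spaced. A small Python sketch (names invented for illustration):

```python
import math
import random

B = 70    # INVENTORY_BROADCAST_TARGET
A = 60.0  # INVENTORY_AVG_TIME_IN_QUEUE, seconds

def run(N, intervals):
    """Real-valued queue size after trickles at the given spacings, with B
    arrivals per trickle and the formula's (unrounded) send amount."""
    q = float(N)
    for dt in intervals:
        q += B  # at most B new inv elements per trickle
        sent = min(q, B + q * (1.0 - math.exp(-dt / A)))
        q -= sent
    return q

# Any random partition of the same total time obeys the bound N * exp(-t/A).
random.seed(42)
N, total = 10000, 120.0
for _ in range(100):
    cuts = sorted(random.uniform(0, total) for _ in range(random.randint(0, 49)))
    spacings = [b - a for a, b in zip([0.0] + cuts, cuts + [total])]
    assert run(N, spacings) <= N * math.exp(-total / A) + 1e-9
```

The key step is that each trickle leaves $(q+B)e^{-dt/A} - B \le q\,e^{-dt/A}$ in the queue, so the decay factors compose multiplicatively across trickles and only the total elapsed time matters for the bound.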

@ajtowns
Contributor

ajtowns commented Sep 22, 2025

Oh! I hadn't noticed this was a different formula to what (I thought) we were discussing earlier.

I think our exponential-based inv timing implies that 1% of invs will be at least 23s after the previous inv, and the formula above will mean we immediately send 32% of our current queue at that point, which seems excessive to me? Compared to the previous sim I ran, I get 45% of the max queue size (2045 vs 4500), but 270%-580% of the max inv size.
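The numbers quoted can be reproduced assuming exponentially distributed trickle timing with a 5 s mean interval (an assumption here; Bitcoin Core uses different averages for inbound and outbound peers):

```python
import math

MEAN_TRICKLE_INTERVAL = 5.0  # seconds; assumed average, not stated in the thread

# With exponentially distributed trickle delays, the chance of a gap of at
# least 23 s since the previous inv:
p_long_gap = math.exp(-23.0 / MEAN_TRICKLE_INTERVAL)  # about 1%

# Fraction of the current queue the proposed formula drains after such a gap:
drain_fraction = 1.0 - math.exp(-23.0 / 60.0)         # about 32%
```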

@sipa
Member Author

sipa commented Sep 22, 2025

Needs more thought, closing for now.

