
Proposal for a new mempool design #27677

Open
sdaftuar opened this issue May 16, 2023 · 13 comments
@sdaftuar
Member

sdaftuar commented May 16, 2023

@sipa and I gave a presentation to some other developers recently about some work we've been doing to redesign how the mempool works, and I wanted to share a writeup outlining our thoughts with a broader audience to further the discussion. I've also attached a PDF of slides that go along with this: Reinventing the Mempool.pdf.


Summary

The current mempool is primarily designed around maintaining two different sort orders: for each transaction we track its position in an ancestor-feerate sort and in a descendant-feerate sort. We then use those two cached sort orders to implement eviction and mining.

Those algorithms are deficient in several ways. In particular, the eviction algorithm (which removes the transaction with lowest descendant feerate, along with its descendants) does not guarantee that the first transaction evicted would be the last transaction mined, and there exist situations where we could evict a transaction that would otherwise actually be mined in the next block.

Furthermore, the mining algorithm is more complex than the ancestor feerate sort we maintain, and it is insufficient to merely do lookups into our ancestor feerate index to determine which of two transactions will be mined from the mempool first. This lack of a total ordering on mempool transactions makes RBF calculations difficult to align with miner incentives, and creates distortions where we sometimes replace transactions via RBF that would actually be better for a miner than their replacement.

To solve all these problems, we propose maintaining a total ordering on mempool transactions, which we use to implement mining and eviction algorithms that are symmetric opposites, and to provide better RBF rules for determining when to allow a replacement to occur.

Background

Eviction and mining are not opposite

Sometimes our eviction criteria might remove something that would be mined in the next block from our mempool. Consider the following transaction graph (with (ancestor, descendant) feerates in sat/byte in parentheses):

graph TD;
    A["Tx A: size 250, fee 250 (1, 2)"]-->B;
    A-->C;
    A-->E["Tx E: size 250, fee 49750 (100, 199)"];
    B["Tx B: size 50000, fee 50000 (1, 2.02)"]-->D["Tx D: size 500, fee 52000 (1.51, 104)"];
    C["Tx C: size 50000, fee 50000 (1, 2.02)"]-->D;

Note that the ancestor feerate of E is 100 sat/byte, while the descendant feerate of A is 2 sat/byte. In particular, the descendant feerate of A is lowest among all transactions in this diagram, so if it were also the lowest descendant feerate package in the mempool, then we might evict A, and therefore E, even though (A, E) might otherwise be selected for the next block.
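To make the example concrete, the (ancestor, descendant) feerates quoted in the graph can be recomputed from its sizes and fees. The following Python sketch is purely illustrative (the names `closure` and `feerate` are mine, not Bitcoin Core's):

```python
# Recompute the (ancestor, descendant) feerates from the example graph above.
size = {"A": 250, "B": 50000, "C": 50000, "D": 500, "E": 250}
fee = {"A": 250, "B": 50000, "C": 50000, "D": 52000, "E": 49750}
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["A"]}
children = {tx: [c for c, ps in parents.items() if tx in ps] for tx in parents}

def closure(tx, rel):
    """Transitive closure of tx under rel (parents or children), including tx."""
    out, stack = set(), [tx]
    while stack:
        cur = stack.pop()
        if cur not in out:
            out.add(cur)
            stack.extend(rel[cur])
    return out

def feerate(txs):
    return sum(fee[t] for t in txs) / sum(size[t] for t in txs)

anc_feerate = {tx: feerate(closure(tx, parents)) for tx in size}
desc_feerate = {tx: feerate(closure(tx, children)) for tx in size}

print(anc_feerate["E"])   # 100.0: E plus its only ancestor A
print(desc_feerate["A"])  # 2.0: A plus every descendant in the graph
```

Running this confirms that A has the lowest descendant feerate (2 sat/byte) even though the package (A, E) has an ancestor feerate of 100 sat/byte, which is exactly the eviction/mining mismatch described above.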

RBF is not incentive compatible

There are many forms of incentive incompatibility baked into the BIP 125 replacement rules:

  • The "no-new-parents" rule doesn't consider the replacement transaction's desirability to a miner -- it might be better than all the to-be-replaced transactions, despite the new unconfirmed parent.
  • The rule requiring that a new transaction pays a higher feerate than its direct conflicts is insufficient to ensure that the new transaction is more desirable than what is being evicted. One of the direct conflicts could have a smaller set of unconfirmed parents than the new transaction, resulting in a higher ancestor feerate. Moreover, because we don't compare feerates of any indirect conflicts -- that is, descendants of direct conflicts -- with the feerate of the new transaction, it's possible that we're evicting a child transaction that would otherwise be mined in the next block with something that will not be mined for a long time.

See #26451 for more discussion of the second point.

Proposed solution

If we're able to maintain a total ordering on the mempool, then we can design efficient algorithms that will solve these problems.

How to maintain a total ordering

With the current algorithm and rules, it is infeasible to continuously maintain a fully-ordered mempool at all times. Ordering an $n$-transaction mempool from scratch scales with $O(d n \log n)$, where $d$ is the descendant limit (default 25), which quickly becomes too much when the mempool grows beyond a trivial size. And unfortunately, doing this incrementally (just re-ordering affected transactions whenever updates happen) does not help, because even a single added transaction can change the ordering of all remaining ones.

[Edited to use a better example] As an example of this effect, consider the following transaction graph (all transactions are same size, with ancestor feerates of children in parentheses):

graph TD;
    A[Tx A: fee 16]-->D["Tx D: fee 17 (16.5)"];
    A-->E["Tx E: fee 19 (16)"];
    B[Tx B: fee 13]-->E;
    B-->F["Tx F: fee 23 (14.67)"];
    C[Tx C: fee 8]-->F;

With these ancestor feerates, Tx D should be included first along with its parent Tx A. Once Tx A is included, Tx E will have the next highest ancestor feerate and it is included next with its parent, and so on to produce the ordering: [A, D, B, E, C, F].

Now consider a single transaction being added, Tx G:

graph TD;
    A[Tx A: fee 16]-->D["Tx D: fee 17 (16.5)"];
    A-->E["Tx E: fee 19 (16)"];
    B[Tx B: fee 13]-->E;
    B-->F["Tx F: fee 23 (14.67)"];
    C[Tx C: fee 8]-->F;
    C-->G["Tx G: fee 29 (18.5)"]

Transaction G now has the highest ancestor feerate, and so it would be chosen first in our ancestor-feerate-based ordering. Once Tx G and its parent Tx C are included, Tx F will see its ancestor feerate increase (as it only has one remaining ancestor) to 18, making it the next best to include (along with its parent Tx B). Once Tx B is in, Tx E has the next highest ancestor feerate and we can see that the final ordering of transactions is: [C, G, B, F, A, E, D]. Note that the order in which the child transactions appear has been reversed, and moreover, the number of transactions in this graph could be extended arbitrarily while retaining this behavior.
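The selection process walked through above can be sketched as follows (illustrative Python, not the actual mining code; all transactions are unit size, so a package's feerate is just its average fee):

```python
# Greedy ancestor-feerate ordering: repeatedly include the remaining
# transaction whose remaining-ancestor package has the highest feerate.

def package(tx, parents, remaining):
    """tx plus all of its not-yet-included ancestors."""
    out, stack = set(), [tx]
    while stack:
        cur = stack.pop()
        if cur in remaining and cur not in out:
            out.add(cur)
            stack.extend(parents[cur])
    return out

def ancestor_sort(fee, parents):
    remaining, order = set(fee), []
    while remaining:
        best = max(sorted(remaining),
                   key=lambda t: sum(fee[a] for a in package(t, parents, remaining))
                                 / len(package(t, parents, remaining)))
        pkg = package(best, parents, remaining)
        while pkg:  # emit the chosen package parents-first
            tx = next(t for t in sorted(pkg) if not set(parents[t]) & pkg)
            order.append(tx)
            pkg.remove(tx)
            remaining.remove(tx)
    return order

fee = {"A": 16, "B": 13, "C": 8, "D": 17, "E": 19, "F": 23}
parents = {"A": [], "B": [], "C": [], "D": ["A"], "E": ["A", "B"], "F": ["B", "C"]}
print(ancestor_sort(fee, parents))
# ['A', 'D', 'B', 'E', 'C', 'F']
print(ancestor_sort({**fee, "G": 29}, {**parents, "G": ["C"]}))
# ['C', 'G', 'B', 'F', 'A', 'E', 'D']
```

Adding the single transaction G reverses the order of the original six transactions, which is the point of the example: incremental re-ordering cannot be bounded without bounding the connected component being re-ordered.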

Fundamentally this means that we need to bound the number of transactions whose ordering can be affected by individual changes to the mempool. Our main insight is that this number is exactly the size of the largest "cluster" of transactions that we might have in the mempool, where a "cluster" is a maximal set of transactions that are connected via ancestor/descendant relationships in the transaction graph.

Our proposal is thus to introduce a limit on the size of clusters in the mempool. It then becomes feasible to maintain an ordering for all the individual clusters at all times, and perform a quick merge sort over all of them whenever a total ordering of the entire mempool is needed.

Definitions

Clusters

We associate the transactions in the mempool with a graph, where each transaction corresponds to a vertex and two transactions are adjacent (ie have an edge between them) if one spends any outputs of the other. We say that two transactions are in the same cluster if there exists a path of edges between them. The overall graph of transactions is partitioned into disjoint clusters, where each transaction is in exactly one cluster.
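A minimal sketch of this partitioning (names are mine, not from the proposal): clusters are just the connected components of the spends-an-output-of graph with edges treated as undirected.

```python
def clusters(parents):
    """Partition txids into clusters: connected components of the
    transaction graph, treating parent/child edges as undirected."""
    adj = {tx: set(ps) for tx, ps in parents.items()}
    for tx, ps in parents.items():
        for p in ps:
            adj[p].add(tx)          # make parent edges bidirectional
    seen, components = set(), []
    for start in sorted(parents):
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            cur = stack.pop()
            if cur not in seen:
                seen.add(cur)
                comp.add(cur)
                stack.extend(adj[cur])
        components.append(comp)
    return components

# B spends A, D spends C, E stands alone: three disjoint clusters.
print([sorted(c) for c in clusters({"A": [], "B": ["A"], "C": [], "D": ["C"], "E": []})])
# [['A', 'B'], ['C', 'D'], ['E']]
```

Each transaction lands in exactly one component, matching the definition that the mempool graph is partitioned into disjoint clusters.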

Linearizations

We introduce the term "linearization" to refer to any topologically valid ordering of transactions, that is, any ordering of transactions that would be valid in a block under Bitcoin's consensus rules. This could be an ancestor-feerate-based ordering of transactions (like we use in the mining code today), or it could be the optimal ordering (one where we choose amongst all possible subsets of transactions the topologically valid subset that maximizes feerate, and repeat until all transactions are selected). Or it could be something in between; for the purposes of discussion we abstract away the details of the cluster ordering algorithm, or "linearization", that we might settle on.

Chunks

Our intuition for cluster linearization algorithms is that we will often choose transactions with lower feerate before transactions with higher feerate, because of topology constraints. This is a consequence of behaviors like child-pays-for-parent. So when thinking about how we might compare transactions from two different clusters to determine which one is "better" (eg for mining) or "worse" (eg for eviction), we don't just want to look at the first or last transaction in the cluster. Instead, our intuition is that we want to look at the same package of transactions that our ordering algorithm would have used in coming up with the ordering in the first place.

This concept, that transactions should be looked at in some aggregate with other transactions, can be generalized in a way that is indifferent to the underlying linearization algorithm that is used. For a given linearized cluster $[tx_1, ..., tx_n]$, we can partition the cluster into a series of disjoint "chunks" $[c_1, c_2, ..., c_j]$ where $c_1 = [tx_1, ..., tx_{i_1}]$, $c_2 = [tx_{i_1+1}, ..., tx_{i_2}]$ ,..., $c_j = [tx_{i_{j-1}+1}, ..., tx_n]$, such that the feerate of each chunk is no greater than the feerate of the chunk that precedes it.

Another way of putting it: chunks are the partitioning of a cluster that respects the linearization in a way that ensures that feerates from one chunk to the next are monotonically decreasing.

(Note that due to ties there can be more than one chunking that is compatible with the given definition; generally we'll prefer smaller chunks and therefore not merge together chunks with the same feerate.)

Given any cluster linearization, it turns out that it's possible to construct the chunks for the cluster in linear time, and this operation is independent of the linearization algorithm that may have been used.
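That linear-time construction can be sketched as a single stack pass over the linearization (a hypothetical helper, not the draft implementation): each transaction starts a new chunk, and a chunk that beats the one before it is merged backwards until feerates are monotonically decreasing.

```python
def chunk(linearization):
    """Split a linearized cluster into chunks with monotonically decreasing
    feerate, in one left-to-right pass.  `linearization` is a list of
    (txid, fee, size) tuples in linearized order."""
    chunks = []  # each chunk is [txids, total_fee, total_size]
    for txid, fee, size in linearization:
        chunks.append([[txid], fee, size])
        # While the newest chunk has a higher feerate than the one before it,
        # merge them.  Feerates are compared by cross-multiplication to avoid
        # floating point; strict ">" keeps equal-feerate chunks separate,
        # matching the "prefer smaller chunks" note above.
        while len(chunks) > 1 and chunks[-1][1] * chunks[-2][2] > chunks[-2][1] * chunks[-1][2]:
            txs, f, s = chunks.pop()
            chunks[-1][0] += txs
            chunks[-1][1] += f
            chunks[-1][2] += s
    return chunks

# A low-feerate parent followed by a high-feerate child (CPFP) collapses
# into a single chunk at the combined feerate:
print(chunk([("parent", 100, 1000), ("child", 3000, 1000)]))
# [[['parent', 'child'], 3100, 2000]]
```

Each transaction is pushed once and merged at most once, so the pass is linear in the cluster size regardless of which linearization algorithm produced the input.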

Implementing fundamental mempool operations

For the sake of discussion below, let's now imagine that our mempool is grouped into its different clusters, and each cluster is linearized using some algorithm (perhaps the ancestor-feerate-based mining algorithm in use today, or perhaps an optimal ordering), and those linearized clusters are each chunked into their monotonically decreasing feerate order. Given such a mempool, we can design algorithms to implement the fundamental mempool operations.

Mining

Our goal with mining is to maximize the fees in a block template, and we do so by applying a greedy algorithm.

  1. Make a highest-feerate-sorted heap consisting of the first chunk from each cluster. Note that these will be each cluster's highest feerate chunk.
  2. While the candidate block is not yet full:
    i. Remove the top of the heap and add to the block candidate.
    ii. If the cluster from which the selection was made has more chunks left, insert the next best chunk from that cluster into the heap.

Note that when we approach the end of a block, we run into a knapsack problem where chunks may not fit. How we might optimize the end of a block is deferred for later discussion.
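The two steps above can be sketched in Python (illustrative only; `build_block` and its inputs are my own names, and the end-of-block knapsack handling is simply skipped, as the text defers it):

```python
import heapq

def build_block(cluster_chunks, max_size):
    """Greedy template construction over pre-chunked clusters.
    `cluster_chunks` maps a cluster id to its list of (fee, size, txids)
    chunks, already in decreasing-feerate order."""
    heap = []
    for cid, chunks in cluster_chunks.items():
        if chunks:
            fee, size, _ = chunks[0]          # each cluster's best chunk
            heapq.heappush(heap, (-fee / size, cid, 0))
    block, total = [], 0
    while heap:
        _, cid, idx = heapq.heappop(heap)
        fee, size, txs = cluster_chunks[cid][idx]
        if total + size > max_size:
            continue  # chunk doesn't fit; knapsack optimization deferred
        block += txs
        total += size
        if idx + 1 < len(cluster_chunks[cid]):
            nfee, nsize, _ = cluster_chunks[cid][idx + 1]
            heapq.heappush(heap, (-nfee / nsize, cid, idx + 1))
    return block, total

clusters = {"c1": [(3000, 1000, ["A", "B"]), (500, 1000, ["C"])],
            "c2": [(2000, 1000, ["X"])]}
print(build_block(clusters, 3000))  # (['A', 'B', 'X', 'C'], 3000)
```

Because chunks within a cluster already have monotonically decreasing feerates, pulling chunk `idx + 1` only after chunk `idx` preserves topological validity while the heap always exposes the globally best available chunk.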

Eviction

Our goal with eviction is to remove the transactions that would be the last ones selected by our mining algorithm. With a fully ordered mempool, this calculation is straightforward and mirrors the mining algorithm.

  1. Make a lowest-feerate-sorted heap consisting of the last chunk from each cluster. Note that these will be each cluster's lowest feerate chunk.
  2. While the mempool is over its size limit:
    i. Remove the top of the heap and evict from the mempool.
    ii. If the cluster from which the selection was made has more chunks left, insert the next lowest feerate chunk from that cluster into the heap.

As with our current eviction algorithm, we will also bump up the mempool's minimum fee to be an increment above the highest feerate chunk that was evicted.
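The mirror-image structure is easiest to see in code. This sketch (hypothetical names, same chunk representation as the mining example would use) pops the worst chunk across all clusters until the mempool fits:

```python
import heapq

def evict(cluster_chunks, mempool_size, size_limit):
    """Evict worst chunks until the mempool fits.  `cluster_chunks` maps a
    cluster id to its list of (fee, size, txids) chunks in
    decreasing-feerate order; we walk each cluster from the back."""
    heap = []
    for cid, chunks in cluster_chunks.items():
        if chunks:
            fee, size, _ = chunks[-1]          # each cluster's worst chunk
            heapq.heappush(heap, (fee / size, cid, len(chunks) - 1))
    evicted = []
    while mempool_size > size_limit and heap:
        _, cid, idx = heapq.heappop(heap)
        fee, size, txs = cluster_chunks[cid][idx]
        evicted += txs
        mempool_size -= size
        if idx > 0:  # expose this cluster's next-worst chunk
            pfee, psize, _ = cluster_chunks[cid][idx - 1]
            heapq.heappush(heap, (pfee / psize, cid, idx - 1))
    # A real implementation would also bump the mempool minimum fee to an
    # increment above the highest-feerate chunk evicted here.
    return evicted, mempool_size

clusters = {"c1": [(3000, 1000, ["A", "B"]), (500, 1000, ["C"])],
            "c2": [(2000, 1000, ["X"])]}
print(evict(clusters, 4000, 2000))  # (['C', 'X'], 2000)
```

Note that on the same clusters, eviction removes transactions in exactly the reverse of the order in which the mining sketch would have selected them, which is the symmetry the proposal is after.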

RBF

Our goal with RBF is to allow a "better" transaction (something a miner would prefer to have) to evict some existing, conflicting transactions in a way that is DoS-resistant but is otherwise as permissive as practical.

Given a mempool where all clusters are ordered and chunked, we can assign a score to a transaction which is the chunk feerate of the chunk to which it belongs. Then our main RBF incentive compatibility rule just becomes:

  1. A replacement transaction must have a chunk feerate greater than that of all transactions which it would evict.

This would eliminate the BIP 125 rule against replacements introducing new unconfirmed parents, and replace the BIP 125 feerate rule requiring a replacement transaction to have a higher individual feerate than all direct conflicts.

However, at least for now, we would retain some of the additional anti-DoS rules:

  1. A replacement transaction must pay more total fees than the sum of fees paid by all evicted transactions.
  2. A replacement transaction must not evict more than (100?) existing transactions.

The first of those rules prevents free-relay attacks, while some version of the second rule is necessary to prevent us from needing to re-cluster/re-order too much of the mempool while processing a single transaction.
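Taken together, the incentive-compatibility rule and the two retained anti-DoS rules could be sketched as follows (a hypothetical helper; the eviction cap is a parameter because its value is explicitly an open question):

```python
def rbf_allowed(replacement_fee, replacement_chunk_feerate, evicted,
                max_evictions=100):
    """Sketch of the proposed replacement checks.  `evicted` is a list of
    (chunk_feerate, fee) pairs, one per transaction the replacement would
    remove from the mempool."""
    if len(evicted) > max_evictions:
        return False   # bound how much of the mempool must be re-clustered
    if any(replacement_chunk_feerate <= cf for cf, _ in evicted):
        return False   # must beat the chunk feerate of everything evicted
    if replacement_fee <= sum(f for _, f in evicted):
        return False   # must pay strictly more total fees (no free relay)
    return True

# Beats both evicted chunk feerates and the summed fees: accepted.
print(rbf_allowed(5000, 2.5, [(1.0, 1000), (2.0, 2000)]))  # True
# Pays less than the 3000 sats being evicted: rejected.
print(rbf_allowed(2500, 2.5, [(1.0, 1000), (2.0, 2000)]))  # False
```

The key difference from BIP 125 is that the comparison uses chunk feerates, which already account for CPFP topology, rather than individual feerates of direct conflicts.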

Some open questions

  1. Philosophically, is it problematic for RBF rules to be even more of a "black box" than the BIP 125 algorithm?
  2. Are there usage patterns for wallets or other protocols that would be fundamentally harmed by cluster size limits? Do we know what values would be acceptable or what values would be insufficient?
  3. What can we do if a high feerate transaction arrives that would violate the cluster limit? Can we design some sort of "cluster-sibling eviction" algorithm that is DoS-resistant and would help reduce the downsides of cluster size limits?
  4. Is it problematic if linearization algorithms used on the network are non-deterministic?
    • We could imagine using a variety of linearization algorithms at any given moment -- perhaps for small clusters we use an exponential run-time optimal ordering algorithm, while for medium sized clusters we do something different, like an ancestor-feerate-based ordering, or an optimal ordering with some bounded computation limits.
    • Are there situations where different nodes having different orders causes meaningful user-facing issues with transactions not relaying effectively, eg because of the use of chunk feerates in the RBF algorithm?
  5. Chunk sizes should be bounded in vbytes, so that we don't exacerbate the knapsack problem during mining. However, what is the best way to do that?
    • We could just bound the vbytes allowed in a single cluster, which would guarantee that chunks are not too big.
    • Or we could design linearization algorithms and adapt our chunking logic to never produce chunks bigger than a certain size -- but this would come at the expense of chunks within a cluster no longer having monotonically decreasing feerates. This may be possible to do, but is the need for large clusters (measured in vbytes) great enough to warrant the complexity that this would introduce in our algorithms?

Current status and next steps

  1. First draft implementation of this logic is done (link to be added).
  2. Analyzing effects of this logic on historical transaction data is in process, to try to answer questions like:
    • How big are the clusters we've historically seen / what is the cluster size distribution on the network today
    • How would the network have been affected if we'd had cluster sizes bounded at various different values in the past
  3. Performance analysis of all the mempool operations (particularly as a function of cluster size) to ensure that mempool operations are optimized with this approach.
  4. The information gathered will (eventually) inform a concrete proposal for what cluster size limits we might consider (along with chunk vbyte and/or cluster vbyte limits).
@Sjors
Member

Sjors commented May 17, 2023

Nice! Here's the transcript of the above mentioned presentation: https://btctranscripts.com/bitcoin-core-dev-tech/2023-04-25-mempool-clustering/

It's also worth noting that although this proposal introduces a new limit on cluster size, it could potentially supersede some other limits (e.g. -limitancestorcount, -limitancestorsize, -limitdescendantcount, -limitdescendantsize) if the maximum cluster size is large enough. Though if the limit is in vbytes, then someone assuming a limit in count might still hit a non-backwards-compatible policy limit.

@t-bast
Contributor

t-bast commented May 17, 2023

Thanks for sharing all those details, this is great!
I can only comment as someone who relies a lot on mempool/transaction relay for lightning: my opinion may thus be biased.
I believe that lightning's security will always depend on the ability to get specific transactions confirmed before a given block, so anything that can help transaction propagation is important for us (and I believe this will be true for most L2 contracts).

Philosophically, is it problematic for RBF rules to be even more of a "black box" than the BIP 125 algorithm?

It isn't for me, as long as we can somehow RBF once we're ready to pay the price.
The way lightning implementations currently RBF is via trial-and-error, not by trying to abide by the BIP 125 rules.
We simply lower the change output of the transaction we're trying to RBF and/or add another input, try broadcasting, and try again with more fees if bitcoind rejects it (or if we still don't see our transaction confirming).
So we're already treating RBF as a black box, and don't care about making it "more" of a black box.

Are there usage patterns for wallets or other protocols that would be fundamentally harmed by cluster size limits? Do we know what values would be acceptable or what values would be insufficient?

Most L2 contracts will only start once the "contract transaction" (channel funding transaction in the lightning case) is confirmed.
In the lightning case, the cluster we generate currently consists of:

  • a commitment transaction (no unconfirmed ancestors)
  • a CPFP child for that commitment transaction (which may bring its set of unconfirmed ancestors)

The commitment transaction may be as large as MAX_STANDARD_TX_WEIGHT, so we want the cluster size to allow that (but it's obvious that the limit needs to be higher than that anyway). Since L2 contracts generate trees of transactions, I believe they should always be able to get away with confirming each level of that tree independently, which means they only ever need one transaction and one CPFP child, nothing more (but we can probably make those contracts more fee-efficient if the cluster size allows more than that). @instagibbs does that fundamentally change with Eltoo? I guess not.

The CPFP child brings its own cluster with it, so the larger the cluster size limit is, the more of our utxos we're able to use to create our CPFP transaction. The limits chosen here will have to be arbitrary, and wallets will simply have to make sure their utxos confirm to be able to use them for CPFP. This is already the case because of the descendant limits, so it shouldn't be fundamentally more harmful?

@instagibbs
Member

does that fundamentally change with Eltoo? I guess not.

No, all pin-avoiding designs I've thought of are 0-fee parent, CPFP-ing child.

@Sjors
Member

Sjors commented May 17, 2023

The commitment transaction may be as large as MAX_STANDARD_TX_WEIGHT, so we want the cluster size to allow that (but it's obvious that the limit needs to be higher than that anyway).

MAX_STANDARD_TX_WEIGHT is 400,000 weight units. If 65-byte transactions are (made) standard, and assuming those are at least 100 weight units, that's a maximum of ~4000 transactions per cluster. My understanding was that doing an optimal sort for a given cluster, requires it to have a few dozen transactions. Though a less-than-perfect sort may be fine. 1/10th of a block sounds fairly big though.

@sipa
Member

sipa commented May 17, 2023

My understanding was that doing an optimal sort for a given cluster, requires it to have a few dozen transactions.

It's the opposite really. A cluster with 1 transaction is always optimally ordered (because only one order exists). The bigger a cluster is (in number of transactions), the harder it is to find the optimal ordering. Up to 15-20 transactions we may be able to find it using an exponential-time algorithm; above that we need to either just use the ancestor-feerate based order, or some heuristic algorithm that may approach optimality.
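To illustrate why the optimal search is exponential: one brute-force formulation repeatedly picks the highest-feerate topologically-valid (ancestor-closed) subset of the remaining transactions, enumerating all subsets. This sketch is mine, not the actual algorithm under development, and is only feasible for small clusters:

```python
from itertools import combinations

def optimal_linearization(fee, size, parents):
    """Exponential-time optimal cluster ordering: repeatedly select the
    ancestor-closed subset of remaining txs with the highest feerate."""
    def anc_closed(subset, remaining):
        # Every in-mempool parent of a member must also be a member.
        return all(p in subset
                   for t in subset for p in parents[t] if p in remaining)
    order, remaining = [], set(fee)
    while remaining:
        best, best_rate = None, None
        for r in range(1, len(remaining) + 1):
            for subset in combinations(sorted(remaining), r):
                s = set(subset)
                if not anc_closed(s, remaining):
                    continue
                rate = sum(fee[t] for t in s) / sum(size[t] for t in s)
                if best_rate is None or rate > best_rate:
                    best, best_rate = s, rate
        while best:  # emit the chosen subset parents-first
            tx = next(t for t in sorted(best) if not set(parents[t]) & best)
            order.append(tx)
            best.discard(tx)
            remaining.discard(tx)
    return order

# The six-transaction example from the writeup above (unit sizes):
fee = {"A": 16, "B": 13, "C": 8, "D": 17, "E": 19, "F": 23}
parents = {"A": [], "B": [], "C": [], "D": ["A"], "E": ["A", "B"], "F": ["B", "C"]}
print(optimal_linearization(fee, {t: 1 for t in fee}, parents))
# ['A', 'D', 'B', 'E', 'C', 'F'] -- matches the ordering derived in the writeup
```

The subset enumeration is O(2^n) per selection round, which is why sipa's 15-20 transaction bound for exact search is plausible and larger clusters need the ancestor-feerate or heuristic fallbacks.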

@sdaftuar
Member Author

@t-bast Thanks for your comments!

The commitment transaction may be as large as MAX_STANDARD_TX_WEIGHT, so we want the cluster size to allow that (but it's obvious that the limit needs to be higher than that anyway).

FYI -- I was thinking that the easiest approach here would be if we can cap cluster vbytes to the ancestor/descendant size limits of 101kvB, or just above the max standard transaction weight. This would have the benefit of not making block construction any less efficient (due to the knapsack problem that occurs as we approach the end of a block).

The limits chosen here will have to be arbitrary, and wallets will simply have to make sure their utxos confirm to be able to use them for CPFP. This is already the case because of the descendant limits, so it shouldn't be fundamentally more harmful?

Yes, the comparison to descendant limits is appropriate I think -- my belief is that any use case that would be impacted by a limit on the number of transactions in the same cluster would also have been affected by the descendant count limit, which is outside a single user's control (ie if you're trying to spend the output of an unconfirmed transaction, unless the transaction's outputs are all under your control, you can't be sure that someone else hasn't chained a bunch of transactions to bump up against the descendant limit and prevent your spend from being relayed). So fundamentally I don't think a limit on number of transactions in a cluster ought to be raising any issues that don't already exist...

@instagibbs
Member

instagibbs commented May 18, 2023

Having the number of allowed evictions be a function of the descendant limit makes replacements easier to reason about from a pinning perspective. e.g., a limit of 25/100 means that you can batch CPFP 4 transactions safely, ignoring other pinning vectors (we'll get around to those!). Another smaller point: currently, people opting into larger descendant limits for whatever reason can cause this to fall to possibly 0, i.e. a chain of 101 txns whose eldest tx can't be evicted.

"cluster-sibling eviction" algorithm that is DoS-resistant and would help reduce the downsides of cluster size limits?

going to call this "cluster limit pinning" to parallel package limit pinning, which is already a pinning problem today, which is what Ephemeral Anchors are intended to solve. Will think on this...

@sdaftuar
Member Author

Having the number of allowed evictions be a function of the descendant limit makes replacements easier to reason about from a pinning perspective. e.g., limit of 25/100 means that you can batch CPFP 4 transactions safely, ignoring other pinning vectors(we'll get around to those!). Another smaller point is currently people opting into larger descendant limits for whatever reason can cause this to fall to possibly 0. i.e. a chain of 101 txns that can't have eldest tx evicted.

Thanks for mentioning this. If indeed the only concern with a lot of evictions is due to the number of clusters that would need to be re-sorted, then we could either have the limit be a maximum number of direct conflicts (since indirect conflicts are always in the same cluster as one of the direct conflicts), or we could just have the limit explicitly be in terms of the number of clusters that would be affected, rather than the number of transactions that are in conflict.

Right now I'm imagining that the descendant limits (on count and vbytes) would go away entirely. I tend to think that we should continue to enforce that no transaction have more than 101kvb of size-with-ancestors (either explicitly, or as a consequence of a maximum cluster size if that gets set to the same value). It's unclear to me whether we should retain the ancestor count limit, because I think it might have an impact on the runtime of the ancestor-feerate-based linearization algorithm that I suspect will be part of the initial implementation, so this will be something to revisit in the future.

@instagibbs
Member

instagibbs commented May 19, 2023

Either conflict limit sounds good to me. N direct conflicts or N clusters has the same result: Someone trying to avoid "rule#5" type pinning will only try to CPFP up to N transactions, I believe.

It also means there's a fundamental tension here between CPFP/RBF strategies and how large we allow clusters to be. Smaller clusters means we can allow more clusters to be modified, likely?

@Sjors
Member

Sjors commented May 20, 2023

My understanding was that doing an optimal sort for a given cluster, requires it to have a few dozen transactions.

It's the opposite really. [...] Up to 15-20 transactions we may be able to find it using an exponential-time algorithm; above that [...]

I meant to say "no more than a few dozen transactions".

So I guess we're ok with falling back to less than optimal sorting within each cluster?

@ariard
Contributor

ariard commented Jun 1, 2023

From my understanding of the proposal, there is an intuition of aligning mining (i.e. maximizing the fees in a block template) and the transactions evicted from our local mempools. The goal of the proposed changes would be to make the transactions that would be the last ones selected by our mining algorithm the first ones to be removed from the mempool, based e.g. on an ancestor-feerate-based mining algorithm.

While this symmetrization of block template construction and mempool eviction should work in light of today's mining algorithms, I don't know if this will hold up in consideration of other changes in the ecosystem. In the past, there have been discussions on the mailing list about whether our RBF algorithm is the most income-maximizing in the face of different transaction traffic. Namely, high-feerate evicted/replaced chunks of transactions could be kept in a buffer of transactions ordered by their last descendants' outputs. When a new package comes in, in case of missing parent outputs, a lookup is performed in the buffer to see if the package attached to the evicted/replaced chunks' ancestor feerate is above the cluster spending the same utxo, or above the bottom mempool chunks.

While this type of broadcast pattern can seem odd today, as new chunks should have been broadcast by the same transaction issuer, and therefore the fee bidding should be monotonically increasing, this assumption might not hold with wider deployment of multi-party applications or contracting protocols, where counterparties are competing for the spend of utxos with packages offering different absolute fees/feerates (e.g. Lightning commitment transactions). Further, you might have new flows where the first component of the package is shared (e.g. a dual-funding/splicing between N parties), though where each party attaches a different CPFP due to asymmetries in liquidity preferences.

Based on cluster pattern analysis, future mining algorithms might attach a prediction weight to each chunk among the evicted/replaced chunks and decide which to keep in the space-limited buffer in case eviction is needed there too. Note that while the issue can be alleviated by bumping the size of both your mempool (i.e. -maxmempool) and any hypothetical future replacement buffers, there will still be a need to efficiently traverse them, as you need to provide one block template to your mining devices due to the probabilistic nature of mining.

So do we have a separation of concerns between the proposed total ordering and a future hypothetical replacement cache optimizing block template construction? Or, to turn the question around: is the proposed total ordering approach going to work well for mining algorithm optimizations that buffer replaced transactions?

There is a second line of thinking about this proposal, namely the workload in terms of CPU cycles, and what worst-case incoming transaction-relay traffic we're designing for. I think the main risk we're aiming to hedge against is huge payloads of transactions being injected into a miner's mempool, forcing computationally expensive replacement or eviction traversals of the clusters and therefore delaying the construction of a block template, a latency-sensitive operation in the mining arms race.

I think we're certainly bounded by the transaction-relay download limits here, i.e. MAX_PEER_TX_ANNOUNCEMENTS, MAX_PEER_TX_REQUEST_IN_FLIGHT and the default number of transaction-relay peers (though we can probably reason by reduction to a single peer for the purpose of DoS analysis). Should we adopt them for the worst-case performance analysis of all the mempool operations?

Beyond these performance-heavy transaction-relay events, I think we also have to encompass block-relay events such as shallow reorgs.

@sdaftuar
Member Author

sdaftuar commented Jun 1, 2023

Namely, high-feerate evicted/replaced chunks of transactions could be kept in a buffer of transactions ordered by their last descendants' outputs. When a new package comes in, in case of missing parent outputs, a lookup is performed in the buffer to see if the package attached to the evicted/replaced chunks' ancestor feerate is above the cluster spending the same utxo, or above the bottom mempool chunks.

[snip additional comments related to this]

If I'm understanding your post correctly, you're asking whether the mempool design I'm proposing is consistent with having some sort of external-to-the-mempool cache of transactions? That really sounds like a p2p optimization and not a mempool issue, and I think there are a lot of questions you'd have to answer in designing such a cache, including how to make it DoS resistant so that an adversary can't overrun it, but fundamentally I don't think anything I'm proposing would make designing such a thing any harder than our current mempool design does. Really I think what you're describing is orthogonal to how the mempool operates.

Overall the new mempool design should make it easier to reason about RBF, so that in situations where an RBF attempt is competing with a CPFP attempt we'll do the right thing more often. Moreover, I think a total mempool ordering will allow for new policy rules that could make this even better in the future, as (unlike today) we might be able to consider rules that would use where a transaction sorts in the global mempool as a condition for transaction acceptance -- something which is computationally impractical in today's mempool design. (Such a rule has been proposed in the past as a way to potentially eliminate RBF-pinning while still permitting legitimate CPFP attempts.)

There is a second line of thinking about this proposal, namely the workload in term of CPU cycles and what is the worst-case workload we're designing for in term of incoming transaction-relay traffic.

Yes, I'm definitely concerned with CPU usage required to sort a single cluster, and coming up with policy limits that ensure this is a fast-enough operation. I don't think the concern will be with transaction acceptance or block construction, which should be very fast in this design, but instead the concern will be around how expensive it is to update the mempool after a block is found (as that can cause thousands of clusters to be re-sorted, in the worst case). However, as our primary goal when a block is found is to update the mempool in such a way that a new block template can be generated as quickly as possible, I think there are optimizations we can make that take advantage of this new design, even before we've finished fully updating the mempool and re-sorting all the affected clusters. (One possible idea: we might pre-compute 2 or 3 blocks worth of transactions, instead of just 1, when getblocktemplate is called. Then, when a block is found, we only need to remove conflicts and confirmed transactions from our precomputed blocks in order to be able to return a reasonably good block template while the mempool finishes re-sorting.)

I think similar optimizations are possible after a re-org; in theory we could defer fully re-sorting clusters when transactions are added back from disconnected blocks and instead do a simple topological sort or merge-sort of newly created clusters in order to optimize for producing a valid block as quickly as possible. Note also that the current mempool is already pretty slow to update during a reorg, due to the caching of ancestor and descendant state, which I believe we'll be able to completely remove in the new design (resulting in improvements to CPU usage, memory, and -- perhaps best of all -- code simplification!).
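To make the reorg point concrete: a plain topological sort (Kahn's algorithm) of the transactions re-added from disconnected blocks is enough to produce *some* valid ordering quickly, deferring the feerate-optimal cluster sort until later. A minimal sketch, with hypothetical inputs (a txid list plus a parent map):

```python
from collections import deque

def topo_sort(txs, parents):
    """txs: iterable of txids; parents: txid -> set of in-set parent txids.
    Returns the txids in an order where every parent precedes its children."""
    indeg = {t: len(parents.get(t, set())) for t in txs}
    children = {t: set() for t in txs}
    for t, ps in parents.items():
        for p in ps:
            children[p].add(t)
    queue = deque(t for t in txs if indeg[t] == 0)
    order = []
    while queue:
        t = queue.popleft()
        order.append(t)
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)
    return order
```

This runs in linear time in the number of transactions and edges, which is what makes it attractive as a stopgap ordering right after a reorg.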

@ariard
Copy link
Contributor

ariard commented Jun 14, 2023

If I'm understanding your post correctly, you're asking whether the mempool design I'm proposing is consistent with having some sort of external-to-the-mempool cache of transactions? That really sounds like a p2p optimization and not a mempool issue, and I think there are a lot of questions you'd have to answer in designing such a cache, including how to make it DoS resistant so that an adversary can't overrun it, but fundamentally I don't think anything I'm proposing would make designing such a thing any harder than our current mempool design does. Really I think what you're describing is orthogonal to how the mempool operates.

Yes, the idea is to have some external-to-the-mempool cache of transactions to a) connect a high-fee child to an evicted/replaced parent and b) resurrect this chain of transactions in the mempool if it constitutes a better ancestor feerate package than the current chain of transactions spending a UTXO/TXO. In light of collaborative transaction construction between LN nodes, where participants can either RBF or CPFP in an asynchronous fashion, I think this propagation pattern will become more likely. An optimizing miner will seek to maintain such external-to-the-mempool caches to build the highest income-return block template.

I agree that such a cache design will have to be DoS resistant (in the same way the orphan buffer has to be), though I maintain it's a mempool issue: for each high-fee child discovered, you'll have to traverse some mempool subgraphs to evaluate whether it's a better candidate to "resurrect" in the mempool. This resurrection could trigger the generation of a new block template for downstream miners if the template feerate is more compelling. As long as the performance is roughly the same for all historic classes of workload there shouldn't be detrimental impacts on miner operations; discussions in #27854 and related to Stratum V2 can be informative in that sense.
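The comparison at the heart of this "resurrection" check can be sketched simply. The function names and the fee/vsize tuples below are hypothetical, purely to illustrate the package-feerate comparison being described:

```python
def package_feerate(txs):
    """txs: list of (fee_sats, vsize_vb) pairs; returns sats/vB for the
    package as a whole (total fee over total vsize)."""
    fee = sum(f for f, _ in txs)
    size = sum(s for _, s in txs)
    return fee / size

def should_resurrect(cached_chain, mempool_chain):
    """True if the cached (evicted parent + new child) chain pays a better
    package feerate than the in-mempool chain spending the same output."""
    return package_feerate(cached_chain) > package_feerate(mempool_chain)

# Evicted zero-fee parent (200 vB) plus a new 5000-sat child (150 vB),
# vs. an in-mempool spender paying 1000 sats over 250 vB:
should_resurrect([(0, 200), (5000, 150)], [(1000, 250)])  # → True
```

Note the DoS angle mentioned above: each such evaluation touches mempool state, so an adversary cheaply producing candidate children forces repeated comparisons, which is why the cache needs its own limits.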

Overall the new mempool design should make it easier to reason about RBF, so that in situations where an RBF attempt is competing with a CPFP attempt we'll do the right thing more often. Moreover, I think a total mempool ordering will allow for new policy rules that could make this even better, as (unlike today) we might be able to consider rules that take into account where a transaction sorts in the global mempool as a condition for transaction acceptance -- something that is computationally impractical in today's mempool design. (Such a rule has been proposed in the past as a way to potentially eliminate RBF-pinning while still permitting legitimate CPFP attempts.)

Yes, if such a total mempool ordering makes it possible to expose where a transaction sorts among the global mempool's feerate groups, this information could be yielded by the fee-estimator directly (and consumed by L2 nodes), even outside of any broadcast/replacement attempt. If the evolution of such feerate groups can be tracked over time, this would be very beneficial for Lightning liquidity management engines (where the question is not "what is the feerate to include a transaction now" but rather "what will be the feerate group for confirmation X blocks from now, with some indication of prediction accuracy", in order to open/close channels efficiently). If this direction is followed, I think it is okay for the RBF rules to become more of a "black box", as long as the mempool interface can yield the "chunk" feerate when given the non-finalized package we aim to broadcast. This would greatly simplify Lightning transaction broadcast backends.
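A minimal sketch of the kind of query being asked for here (the interface is hypothetical; no such API exists today): with a total ordering, the mempool is effectively a feerate-sorted list of chunks, so a wallet or L2 node could ask how many vbytes of higher-feerate chunks sit in front of a candidate package, and from that estimate its confirmation depth.

```python
def vbytes_ahead(chunks, candidate_feerate):
    """chunks: list of (feerate_sat_per_vb, vsize_vb) for the mempool's
    chunks.  Returns the total vsize that would be mined before a package
    at candidate_feerate."""
    return sum(vsize for feerate, vsize in chunks if feerate > candidate_feerate)

# Three chunks: 40k vB at 50 sat/vB, 300k vB at 20 sat/vB, 600k vB at 5 sat/vB.
# A 10 sat/vB package has 340,000 vB ahead of it -- roughly the second block.
vbytes_ahead([(50.0, 40_000), (20.0, 300_000), (5.0, 600_000)], 10.0)
```

Tracking how this answer drifts over time is exactly the "feerate group evolution" signal described above.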

One possible idea: we might pre-compute 2 or 3 blocks worth of transactions, instead of just 1, when getblocktemplate is called. Then, when a block is found, we only need to remove conflicts and confirmed transactions from our precomputed blocks in order to be able to return a reasonably good block template while the mempool finishes re-sorting.)

Yes, I think this optimization is in line with upgrades brought by Stratum V2, where speculative non-empty blocks can be sent in advance to clients (NewMiningJob with min_ntime unset). There is still the question of how good the pre-computation will be in the face of "super" high-fee transactions provoking a re-computation of the 2 or 3 cached blocks worth of transactions. The DoS nightmare would be some combination of transaction acceptance and re-orgs, cheaply triggered by an adversary, that knocks out the topological re-sorting for a while. Generally, I think that in the future cluster computation will be a critical path not only for block template construction but also for Lightning node liquidity allocation, where you might evaluate your ~1000 open channels and pending broadcasts against pending mempool clusters, and decide whether to force-close channels or merge broadcast packages.


Looking forward to when there is a draft implementation ready to look at. This would make the design proposal review more concrete, especially for evaluating the potential impacts on Lightning/L2 transaction backends!

achow101 added a commit that referenced this issue Feb 10, 2024
29029df [doc] v3 signaling in mempool-replacements.md (glozow)
e643ea7 [fuzz] v3 transactions and sigop-adjusted vsize (glozow)
1fd16b5 [functional test] v3 transaction submission (glozow)
27c8786 test framework: Add and use option for tx-version in MiniWallet methods (MarcoFalke)
9a1fea5 [policy/validation] allow v3 transactions with certain restrictions (glozow)
eb8d5a2 [policy] add v3 policy rules (glozow)
9a29d47 [rpc] return full string for package_msg and package-error (glozow)
158623b [refactor] change Workspace::m_conflicts and adjacent funcs/structs to use Txid (glozow)

Pull request description:

  See #27463 for overall package relay tracking.

  Delving Bitcoin discussion thread: https://delvingbitcoin.org/t/v3-transaction-policy-for-anti-pinning/340
  Delving Bitcoin discussion for LN usage: https://delvingbitcoin.org/t/lightning-transactions-with-v3-and-ephemeral-anchors/418

  Rationale:
  - There are various pinning problems with RBF and our general ancestor/descendant limits. These policies help mitigate many pinning attacks and make package RBF feasible (see #28984 which implements package RBF on top of this). I would focus the most here on Rule 3 pinning. [1][2]
  - Switching to a cluster-based mempool (see #27677 and #28676) requires the removal of CPFP carve out, which applications depend on. V3 + package RBF + ephemeral anchors + 1-parent-1-child package relay provides an intermediate solution.

  V3 policy is for "Priority Transactions." [3][4] It allows users to opt in to more restrictive topological limits for shared transactions, in exchange for the more robust fee-bumping abilities that this offers. Even though we don't have cluster limits, we are able to treat these transactions as having a maximum cluster size of 2.

  Immediate benefits:

  - You can presign a transaction with 0 fees (not just 1sat/vB!) and add a fee-bump later.
  - Rule 3 pinning is reduced by a significant amount, since the attacker can only attach a maximum of 1000vB to your shared transaction.
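As a back-of-the-envelope illustration of the second benefit (the numbers below are assumed, not from the PR): BIP 125 rule 3 requires a replacement to pay at least the absolute fees of everything it evicts, so capping the attached child at 1000 vB shrinks the worst-case fee an attacker can force you to outbid.

```python
def min_replacement_fee(pinned_descendant_vsize_vb, pin_feerate_sat_vb, shared_tx_fee):
    """Rough rule-3 floor: the replacement must pay at least the shared
    transaction's fee plus the fees of the pinned descendants it evicts."""
    return shared_tx_fee + pinned_descendant_vsize_vb * pin_feerate_sat_vb

# Pre-v3: an attacker could attach ~100,000 vB of descendants at 2 sat/vB
# to a shared transaction paying 1000 sats:
min_replacement_fee(100_000, 2, 1000)  # → 201,000 sats to outbid
# With v3: descendants are capped at 1000 vB:
min_replacement_fee(1_000, 2, 1000)    # → 3,000 sats to outbid
```

This is the sense in which the 1000 vB cap reduces rule 3 pinning "by a significant amount": the attacker's leverage is bounded by the child-size limit rather than by the descendant-package limit.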

  This also enables some other cool things (again see #27463 for overall roadmap):
  - Ephemeral Anchors
  - Package RBF for these 1-parent-1-child packages. That means e.g. a commitment tx + child can replace another commitment tx using the child's fees.
  - We can transition to a "single anchor" universe without worrying about package limit pinning. So current users of CPFP carve out would have something else to use.
  - We can switch to a cluster-based mempool [5] (#27677 #28676), which removes CPFP carve out [6].

  [1]: Original mailing list post and discussion about RBF pinning problems https://gist.github.com/glozow/25d9662c52453bd08b4b4b1d3783b9ff, https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019817.html
  [2]: A FAQ is "we need this for cluster mempool, but is this still necessary afterwards?" There are some pinning issues that are fixed here and not fully fixed in cluster mempool, so we will still want this or something similar afterward.
  [3]: Mailing list post for v3 https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-September/020937.html
  [4]: Original PR #25038 also contains a lot of the discussion
  [5]: https://delvingbitcoin.org/t/an-overview-of-the-cluster-mempool-proposal/393/7
  [6]: https://delvingbitcoin.org/t/an-overview-of-the-cluster-mempool-proposal/393#the-cpfp-carveout-rule-can-no-longer-be-supported-12

ACKs for top commit:
  sdaftuar:
    ACK 29029df
  achow101:
    ACK 29029df
  instagibbs:
    ACK 29029df modulo that

Tree-SHA512: 9664b078890cfdca2a146439f8835c9d9ab483f43b30af8c7cd6962f09aa557fb1ce7689d5e130a2ec142235dbc8f21213881baa75241c5881660f9008d68450