mempool: Add option to bypass contextual timelocks in testmempoolaccept #25434

Closed · wants to merge 12 commits

Conversation

@w0xlt (Contributor) commented Jun 21, 2022

Picking up #21413 (labeled "Up for grabs").

This PR adds a new option (bypass_timelocks) to the testmempoolaccept RPC that skips CheckFinalTxAtTip and CheckSequenceLocksAtTip, i.e., the transaction's nLockTime and nSequence will not be evaluated against the current block height / time.

Script checks are still performed as-is, so if the transaction itself has a script error, it will still fail.
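For orientation, here is a minimal sketch of how such a flag could gate the two contextual checks in MemPoolAccept::PreChecks() in validation.cpp; the flag name args.m_bypass_timelocks and the simplified argument lists are illustrative assumptions, not the PR's exact diff:

    // Sketch only: skip the two contextual timelock checks when the
    // (hypothetical) flag is set; only reachable from testmempoolaccept.
    if (!args.m_bypass_timelocks) {
        // Absolute timelock: nLockTime against the current tip's height / MTP.
        if (!CheckFinalTxAtTip(*Assert(m_active_chainstate.m_chain.Tip()), tx)) {
            return state.Invalid(TxValidationResult::TX_PREMATURE_SPEND, "non-final");
        }
        // Relative timelock (BIP68): nSequence against the age of the spent inputs.
        if (!CheckSequenceLocksAtTip(m_active_chainstate.m_chain.Tip(), m_view, tx, &lp)) {
            return state.Invalid(TxValidationResult::TX_PREMATURE_SPEND, "non-BIP68-final");
        }
    }
    // Script checks (including OP_CSV / OP_CLTV) still run later, so a
    // transaction with an invalid script still fails.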

@DrahtBot (Contributor) commented Jun 21, 2022

The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Conflicts

Reviewers, this pull request conflicts with the following ones:

  • #25577 (mempool, refactor: add MemPoolBypass by w0xlt)
  • #25487 ([kernel 3b/n] Decouple {Dump,Load}Mempool from ArgsManager by dongcarl)
  • #24007 ([mempool] allow tx replacement by smaller witness by LarryRuane)
  • #23897 (refactor: Move calculation logic out from CheckSequenceLocksAtTip() by hebasto)

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

@glozow (Member) commented Jun 21, 2022

Thanks for picking this up. I just realized I probably should have left a comment summarizing what work needs to be done. Not sure if you saw #21413 (comment)? We'll want to add (1) an option to override relative timelocks between the transactions passed in, and (2) a param for what MTP/height to mock, either for the whole call or per transaction. This would also require more tests, i.e., tests that actually use the new options to ensure they work, beyond checking that nothing breaks.

@w0xlt (Contributor, Author) commented Jun 21, 2022

This greatly increases the scope.

It might be better to break each option out into its own PR.

How could testmempoolaccept simulate a specific block height, for example?

@ariard (Contributor) left a comment

Concept ACK

If you're interested in the whole context, I think there are multiple contextual checks that would be interesting to bypass in order to sanitize L2 pre-signed transactions:

  • relative timelock (CheckSequenceLocksAtTip() in validation.cpp)
  • absolute timelock (CheckFinalTxAtTip() L731 in validation.cpp)
  • feerate accuracy (CheckFeeRate() L858 in validation.cpp)
  • CSV execution (CheckSequence() L588 in interpreter.cpp)
  • CLTV execution (CheckLockTime() L554 in interpreter.cpp)

Do we have more contextual checks that would be interesting to bypass for an L2? The ones listed should be good for LN at least.

Instead of purely bypassing the timelock checks, it could be interesting to allow passing MTP and height parameters, as suggested in #21413. That would make it possible to verify that your pre-signed transactions' nLockTime and CLTV arguments have been paired with the correct values.

A further interesting feature would be to allow dummy signatures in the witness script. The intended use-case of such bypass options is to verify the consensus and policy validity of a transaction as pre-signed by your counterparty. However, to limit interference with signer policies, it would be ideal for the local L2 node not to have to produce its own signature during the verification flow.

Even further, on the LDK side we're mostly targeting mobile hosts, on which we might not be able to ship a Bitcoin Core node and access an RPC interface due to constraining OS application policies. It would be ideal to have a libstandardness in the spirit of the bitcoinconsensus library, exposing a high-level verify_transaction_standardness().

Anyway, it's good to move step by step. Indeed, it would be nice to break each option into its own PR. For passing a specific height, we could have testmempoolaccept(height=100).

{
{"maxfeerate", RPCArg::Type::AMOUNT, RPCArg::Default{FormatMoney(DEFAULT_MAX_RAW_TX_FEE_RATE.GetFeePerK())},
"Reject transactions whose fee rate is higher than the specified value, expressed in " + CURRENCY_UNIT + "/kvB\n"},
{"bypass_timelocks", RPCArg::Type::BOOL, RPCArg::Default{false}, "Don't enforce BIP68 sequence locks and timelocks. Don't use unless you know what you're doing!\n"}
Review comment:

Of course, I think this would benefit from something like a standardness-sanitization document in doc/policy explaining how to use the new API to a broader set of L2 developers, with a few use-cases.

src/validation.h Outdated
@@ -242,23 +242,26 @@ struct PackageMempoolAcceptResult
* It is also used to determine when the entry expires.
* @param[in] bypass_limits When true, don't enforce mempool fee and capacity limits.
* @param[in] test_accept When true, run validation checks but don't submit to mempool.
*
* @param[in] bypass_timelocks When true (test_accept must also be true), don't enforce timelock
* rules BIP65 and BIP112.
Review comment:

I think it should say BIP68. The consensus enforcement of relative timelocks is described there; BIP112 only describes how to restrain script-path execution as a function of spent-input maturity via CHECKSEQUENCEVERIFY. Following the same logic, it should also say that nLockTime is not enforced, without referring to BIP65.
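One possible wording along those lines (a suggestion only, not necessarily the text that was pushed):

    * @param[in] bypass_timelocks When true (test_accept must also be true), don't enforce
    *                             BIP68 relative lock-times (nSequence) or absolute
    *                             lock-times (nLockTime). OP_CSV / OP_CLTV script checks
    *                             still apply.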

@w0xlt (Contributor, Author) commented Jun 28, 2022

@ariard If I understand correctly, are you suggesting one of the parameter schemes below?

{"options", RPCArg::Type::OBJ, RPCArg::Optional::OMITTED_NAMED_ARG, "",
    {
        {"maxfeerate", RPCArg::Type::AMOUNT, RPCArg::Default{...},"..."},
        {"bypass_relative_timelocks", RPCArg::Type::BOOL, RPCArg::Default{false}, "..."},
        {"bypass_absolute_timelocks", RPCArg::Type::BOOL, RPCArg::Default{false}, "..."},
        {"bypass_feerate_accuracy", RPCArg::Type::BOOL, RPCArg::Default{false}, "..."},
        {"bypass_csv_execution", RPCArg::Type::BOOL, RPCArg::Default{false}, "..."},
        {"bypass_cltv_execution", RPCArg::Type::BOOL, RPCArg::Default{false}, "..."},
    },
     "\"options\""
},

or

{"options", RPCArg::Type::OBJ, RPCArg::Optional::OMITTED_NAMED_ARG, "",
    {
        {"maxfeerate", RPCArg::Type::AMOUNT, RPCArg::Default{...},"..."},
        {"mock_mtp", RPCArg::Type::BOOL, RPCArg::Default{false}, "..."},
        {"mock_height", RPCArg::Type::BOOL, RPCArg::Default{false}, "..."},
        {"bypass_feerate_accuracy", RPCArg::Type::BOOL, RPCArg::Default{false}, "..."},
        {"bypass_csv_execution", RPCArg::Type::BOOL, RPCArg::Default{false}, "..."},
        {"bypass_cltv_execution", RPCArg::Type::BOOL, RPCArg::Default{false}, "..."},
    },
    "\"options\""
},

@ariard (Contributor) commented Jun 29, 2022

@w0xlt

Actually, a combination of both: the first one plus mock_mtp / mock_height.
It sounds like a lot of flexibility, but I think that's what you're aiming for in order to verify Lightning HTLC-timeout transactions.
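A combined schema could then look roughly like the sketch below; the RPCArg::Default{...} placeholders and "..." help strings mirror the snippets above, and giving mock_mtp / mock_height numeric types is an assumption rather than something settled in this thread:

    {"options", RPCArg::Type::OBJ, RPCArg::Optional::OMITTED_NAMED_ARG, "",
        {
            {"maxfeerate", RPCArg::Type::AMOUNT, RPCArg::Default{...}, "..."},
            {"bypass_relative_timelocks", RPCArg::Type::BOOL, RPCArg::Default{false}, "..."},
            {"bypass_absolute_timelocks", RPCArg::Type::BOOL, RPCArg::Default{false}, "..."},
            {"bypass_feerate_accuracy", RPCArg::Type::BOOL, RPCArg::Default{false}, "..."},
            {"bypass_csv_execution", RPCArg::Type::BOOL, RPCArg::Default{false}, "..."},
            {"bypass_cltv_execution", RPCArg::Type::BOOL, RPCArg::Default{false}, "..."},
            {"mock_mtp", RPCArg::Type::NUM, RPCArg::Optional::OMITTED, "..."},
            {"mock_height", RPCArg::Type::NUM, RPCArg::Optional::OMITTED, "..."},
        },
        "\"options\""
    },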

w0xlt added 2 commits July 1, 2022 15:48
`maxfeerate` becomes a member of an "options" object rather than
a positional argument.

The idea is that any new parameters in the future will also go into
options.
@luke-jr (Member) commented Jul 1, 2022

IMO the idea for mock_{mtp,height} is a better approach. "Valid in 9999 years" seems closer to "never valid" than "valid eventually".

@w0xlt force-pushed the bypass-timelocks branch 2 times, most recently from 7ebaffb to 81d4bf7 on July 2, 2022
@w0xlt (Contributor, Author) commented Jul 2, 2022

I pushed some changes:

  • struct MempoolTestCriteria was created to hold all the parameters suggested in #25434 (review); a rough sketch of its shape follows below. (be52f8a)
  • bypass_timelocks has been split into bypass_relative_timelocks and bypass_absolute_timelocks, as suggested in the same comment. (fccce1e)
  • The tests were split into two commits: one tests that bypass_timelocks works, and the other tests that it doesn't affect OP_CSV and OP_CLTV checks. (d8e4a09 and 52a2e28)
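A rough sketch of the shape such a struct could take (the field names and helper are illustrative assumptions; the actual commits may differ):

    struct MempoolTestCriteria {
        //! Don't enforce BIP68 relative lock-times (nSequence). test_accept only.
        const bool m_bypass_relative_timelocks{false};
        //! Don't enforce absolute lock-times (nLockTime). test_accept only.
        const bool m_bypass_absolute_timelocks{false};

        //! True only when no contextual check is bypassed, i.e. when the
        //! transaction would be eligible for actual mempool submission.
        bool AllowFinalizeTransaction() const {
            return !m_bypass_relative_timelocks && !m_bypass_absolute_timelocks;
        }
    };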

@w0xlt (Contributor, Author) commented Jul 3, 2022

#25532 adds the bypass_feerate_accuracy option.

I prefer to keep each option (or group of options) in a separate PR for ease of review.

@ariard (Contributor) left a comment

This sounds like it's moving in the right direction. I think it would be good to have a "how-to" in doc/policy; I can write it if you would like.

I think it would also be good to get one more ACK on the API from someone likely to be interested in such testmempoolaccept options. cc @darosior @instagibbs

{"maxfeerate", RPCArg::Type::AMOUNT, RPCArg::Default{FormatMoney(DEFAULT_MAX_RAW_TX_FEE_RATE.GetFeePerK())},
"Reject transactions whose fee rate is higher than the specified value, expressed in " + CURRENCY_UNIT + "/kvB\n"},
{"bypass_absolute_timelocks", RPCArg::Type::BOOL, RPCArg::Default{false}, "Don't enforce nLocktime.\n"},
{"bypass_relative_timelocks", RPCArg::Type::BOOL, RPCArg::Default{false}, "Don't enforce BIP68 relative lock-time.\n"}
Review comment:

Could you make it "Do not consensus-enforce BIP68", to clarify that this is a consensus check, not a policy one?

w0xlt (author) replied:

Done in 7ec6416

true, true);

if (options.exists("maxfeerate")) {
max_raw_tx_fee_rate = CFeeRate(AmountFromValue(options["maxfeerate"]));
Review comment:

If you like ternaries, I think you could use one here.
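For illustration, assuming the variable would otherwise simply keep the DEFAULT_MAX_RAW_TX_FEE_RATE default, a ternary version might look like:

    const CFeeRate max_raw_tx_fee_rate{options.exists("maxfeerate") ?
                                       CFeeRate{AmountFromValue(options["maxfeerate"])} :
                                       DEFAULT_MAX_RAW_TX_FEE_RATE};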

w0xlt (author) replied:

Done in 7ec6416

* It represents the criteria that will be used to test the transactions
* without submitting them.
*/
const MempoolTestCriteria m_test_criteria;
Review comment:

I think this should be the set of criteria; you might have multiple ones with the proposed API?

w0xlt (author) replied:

Changed it to const MemPoolBypass m_mempool_bypass in 26ee1a4

@@ -1041,6 +1054,9 @@ bool MemPoolAccept::Finalize(const ATMPArgs& args, Workspace& ws)
{
AssertLockHeld(cs_main);
AssertLockHeld(m_pool.cs);

assert(args.m_test_criteria.AllowFinalizeTransaction());
Review comment:

As this is a belt-and-suspenders check to ensure that transactions which weren't fully consensus-enforced don't slip into the mempool, I would rather go for something more explicit, e.g. EnsureFullyValidatedTransaction() or EnsureNoBypassApplied().
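For example, the assert could move into a small helper whose name states the intent (the name and surrounding members are taken from the suggestion above, so purely hypothetical):

    // Belt-and-suspenders: a transaction for which any contextual check was
    // bypassed must never reach Finalize() and enter the mempool.
    static void EnsureNoBypassApplied(const ATMPArgs& args)
    {
        assert(args.m_test_criteria.AllowFinalizeTransaction());
    }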

w0xlt (author) replied:

Done in 12ac626

@@ -1404,16 +1420,21 @@ PackageMempoolAcceptResult MemPoolAccept::AcceptPackage(const Package& package,
} // anon namespace

MempoolAcceptResult AcceptToMemoryPool(CChainState& active_chainstate, const CTransactionRef& tx,
int64_t accept_time, bool bypass_limits, bool test_accept)
int64_t accept_time, bool bypass_limits, bool test_accept,
const std::optional<MempoolTestCriteria>& test_criteria)
Review comment:

I wonder whether now isn't a good time to factor bypass_limits, test_accept and test_criteria into a single new structure, MemPoolCriteria. The constructor could have an assert to ensure that client-provided bypass options such as the timelock ones only come from testmempoolaccept.
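An illustrative shape for such a structure, following the reviewer's naming (the refactor that was eventually opened, #25577, uses MemPoolBypass instead):

    struct MemPoolCriteria {
        const bool m_bypass_limits;
        const bool m_test_accept;
        const std::optional<MempoolTestCriteria> m_test_criteria;

        MemPoolCriteria(bool bypass_limits, bool test_accept,
                        std::optional<MempoolTestCriteria> test_criteria)
            : m_bypass_limits{bypass_limits},
              m_test_accept{test_accept},
              m_test_criteria{std::move(test_criteria)}
        {
            // Client-provided bypass options only make sense for test-only
            // submissions coming from testmempoolaccept.
            assert(!m_test_criteria || m_test_accept);
        }
    };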

w0xlt (author) replied:

Done in #25577.

test_accept and bypass_limits are used elsewhere, not just in testmempoolaccept.
#25577 creates struct MemPoolBypass that encompasses both variables.

src/validation.h Outdated
@@ -230,6 +230,15 @@ struct PackageMempoolAcceptResult
: m_tx_results{ {wtxid, result} } {}
};

struct MempoolTestCriteria {
Review comment:

I think this structure could be documented. Though see the comment above about whether it makes sense to factor things into a single struct MemPoolBypass.

w0xlt (author) replied:

Done in 9e8703c

@w0xlt (Contributor, Author) commented Jul 8, 2022

@ariard Thanks for the review. I will address your suggestions soon.

#25570 adds the bypass_{csv,cltv} options. With that, I think all the options in the list in #25434 (review) are implemented.

w0xlt and others added 6 commits July 9, 2022 20:39
This is for test_accepts only, and not allowed in an actual submission
to mempool - see assert statements.

Provide an option to bypass BIP68 nSequence and nLockTime checks. This
means clients can use testmempoolaccept to check whether L2 transaction
chains (which typically have timelock conditions) are valid without
submitting them. Note that BIP112 and BIP65 are still checked since they
are script (non-contextual) checks. This does not invalidate any
signature or script caching.

Co-authored-by: glozow <gloriajzhao@gmail.com>
Co-authored-by: glozow <gloriajzhao@gmail.com>
Test the bypass_timelock options in testmempoolaccept. This lets us
bypass BIP68 relative locktime checks (in nSequence) and absolute
locktime checks (in nLocktime).

Co-authored-by: glozow <gloriajzhao@gmail.com>
OP_CSV and OP_CLTV script checks are still done,
so setting bypass_timelocks=True doesn't mean that
bad scripts pass.

Co-authored-by: glozow <gloriajzhao@gmail.com>
@w0xlt (Contributor, Author) commented Jul 10, 2022

@ariard I addressed most of your suggestions in a new PR (#25577).

That PR does not change behavior; it just refactors the use of test_accept and bypass_limits, grouping them into a new struct MemPoolBypass.

I rebased this PR on top of #25577.

@darosior (Member) commented Jul 11, 2022

I would like to see a stronger rationale for this change, as well as for the end goal of this path toward conditionally bypassing mempool checks. In short, I'm worried we are going to introduce a lot of complexity into this critical part of the codebase for only marginal, if any, benefit.

It seems the motivation is to help test applications that use offchain contracts. Any reasonable such application has a functional test framework (similar to ours) in which many sequences of events are tested in a blackbox manner. Such a framework necessarily tests the different spending paths of a Script, but it does a lot more. Adding an option to bypass timelocks would only allow the most basic of those tests to use testmempoolaccept instead of generating a couple of blocks before using sendrawtransaction. Sure, that would reduce the runtime of those tests (by what, a few hundred milliseconds?), but I don't think it's a good tradeoff for us to make.
Furthermore, functional tests are for simulating real conditions, and you necessarily want to test the very same codepath that your application would use in production, that is, sendrawtransaction. We could assure developers that testmempoolaccept uses almost the very same codepath as sendrawtransaction (which is becoming less true as we add more complexity there), but I don't think it would be very convincing: if testing the exact codepath your application will trigger in production is a node.generate(6) away, you'll just do that.
If the motivation for these PRs, and the direction things are taking, is "it'll help L2 apps to test better, maybe, eventually", I would suggest we take a step back and examine how such an application testing framework would actually use it. For instance, here are two functional tests from two candidate projects I'm familiar with:

Both (implicitly or not) test the validity of transactions critical to the security of the protocol. I don't think testmempoolaccept with any form of bypass would help there, nor would it for a lot of other tests.

Even assuming this feature might eventually be used: in #20833 we introduced testmempoolaccept, which could be used to quickly sanity-check the "static" standardness bounds of a transaction. That was probably a good tradeoff. Now we want to push this further by also allowing timelocked transactions to be tested, and by the same logic it was suggested to then allow mocking the transaction with dummy values for Script. Where do we stop? We have a clear layer violation here. What is suggested with dummy signatures is precisely what we do internally with the DummySignatureChecker! Does that mean we should expose more of our internals via RPC? I don't think so; it would only result in cluttered code and interfaces, and it would not eliminate the need for any L2 application to have a functional test framework in which it creates keys, spins up nodes, signs transactions and broadcasts them in different sequences.

Therefore I think we need some kind of plan for this bypass-everything direction, as well as more motivation for going this way.

@ariard (Contributor) commented Jul 14, 2022

The motivation of this work is to provide tools enabling L2 applications to verify that the transactions they issue (not only pre-signed ones) will propagate well on the p2p network, at least across the majority of nodes running the default Core policy. As a reminder, for this type of application, confirming transactions in due time is critical for the security of funds, a point I believe none of us denies.

For a typical Lightning implementation, this "standardness defect" risk can manifest itself upon reception of the counterparty's signatures for commitment / HTLC transactions. For the latter, the nLockTime is set and the feerate is 0 (post option_anchors_zero_fee_htlc_tx). Therefore those transactions, whose broadcast time is unknown, would be rejected by testmempoolaccept due to non-compliance with contextual policy checks like timelocks or feerates.

As of today, if you would like to assert the standardness of those transactions (to prevent vulnerabilities like CVE-2020-26895 from slipping in), I think the solution is to rewrite all the policy checks yourself in your implementation library (e.g. in rust-bitcoin, we have a dust-threshold check). However, there is no guarantee that those checks are re-implemented correctly, or even that they are exhaustive compared to Bitcoin Core's mempool policy checks.

That the testmempoolaccept code path diverges from sendrawtransaction is an unfortunate fact that, iirc, I raised during the reviews of the mempool package-accept PRs. As sketched out above, I think the ideal interface would rather be a libstandardness with bypass options allowing direct access to AcceptToMemoryPool(), not only to make the verification codepath as similar as possible to the broadcast codepath, but also as an architectural convenience for mobile clients and a performance improvement for LN routing hops (relying on an RPC call adds latency to HTLC relay due to process context switches).

I don't think the alleged layer violation matters. In systems engineering you'll always find layer violations, e.g. using ASM snippets in your high-level source code to access CPU registers and boost computation performance, as math libraries do (even if 99.99% of userspace applications won't do that). So I think the question is more whether the layer violation is justified in the present case. About the scope, I think we should allow checking as many standardness bounds as make sense for the application's semantics.

That said, I think we should set aside the question of what the ideal interface to verify standardness bounds should be (RPC call, new shared library, new Cap'n Proto interface, etc.). In my understanding, the question where we diverge is whether every L2 implementation should rewrite Core's standardness logic to verify its transactions' validity, as far as possible given the application semantics, or whether we should expose this standardness logic in a straightforward way.

I think the first approach not only duplicates the engineering cost for every L2 implementation, instead of factoring it into Core, but is also far more error-prone.

@luke-jr (Member) left a comment

As mentioned on #25532, simply ignoring a rejection should take the approach used in #7533 and #20753.

Special-casing time locks only makes sense if it supports keeping the check but mocking a specific future/past time/height.

@DrahtBot (Contributor) commented:
🐙 This pull request conflicts with the target branch and needs rebase.

Want to unsubscribe from rebase notifications on this pull request? Just convert this pull request to a "draft".

@glozow (Member) commented Aug 1, 2022

Based on #25434 (comment) and the fact that #21413 was still at the conceptual-review stage, I'm marking this as "needs conceptual review" until there is stronger support for this change.

@ariard (Contributor) commented Oct 4, 2022

Seeing #25532 closed, I'm still renewing my interest in having an exportable Core-"compliant" policy-rules checker for second-layer toolchains. From an offline discussion with @darosior, I think we observed there is a difference in severity between two classes of L2 time-sensitive protocols:

  • "static" transactions, as in Revault, where no participant is able to propose modifications to the transactions' inputs/outputs/size/scriptPubKeys during protocol execution
  • "malleable" transactions, as in Lightning, where counterparties propose modifications to the transaction's inputs/outputs/size/scriptPubKeys (e.g. update_add_htlc to add an HTLC output to both the local and remote commitment transactions)

From my understanding, the second set of protocols is expected to grow in scope with the deployment of the interactive transaction construction protocol in the LN space, where a set of N participants might contribute to a transaction.

I'm looking forward to tackling this project myself if the current PR author has no further interest once done with full-RBF, or at least to introducing the subject at IRC meetings to collect more people's thoughts on the design space and the worthiness of such a new API.

@glozow (Member) commented Oct 4, 2022

> I'm looking forward to tackling this project myself if the current PR author has no further interest once done with full-RBF, or at least to introducing the subject at IRC meetings to collect more people's thoughts on the design space and the worthiness of such a new API.

@ariard thanks for the proactivity. I agree the concept should be discussed with more people (hence "needs conceptual review"). Pinging @w0xlt to see if there is a plan to do something like this?

Btw just to clarify, #25532 was closed because it was blocked by this PR and this PR is currently also blocked. I wasn't rejecting the concept, just attempting to focus review attention more to make progress.

@w0xlt (Contributor, Author) commented Oct 12, 2022

@glozow I'm not sure I understand your question correctly, but I'm available to discuss this PR (or any other changes).

@glozow (Member) commented Oct 13, 2022

> @glozow I'm not sure I understand your question correctly, but I'm available to discuss this PR (or any other changes).

From my perspective, this PR isn't (and #21413 wasn't) blocked on code getting written, but on the lack of strong consensus that it's a good idea. I'm wondering what the plan is to get more conceptual review, e.g. by soliciting opinions from specific people or bringing it up in a meeting or on the mailing list. My question was whether you plan on doing that advocacy work, as it seems @ariard is offering to do so. Hope this clarifies what I meant.

@ariard (Contributor) commented Oct 23, 2022

> I'm wondering what the plan is to get more conceptual review, e.g. by soliciting opinions from specific people or bringing it up in a meeting or on the mailing list.

I still think it would be nice to explain, in a post, all the standardness-malleability issues that policy raises for second layers, ideally with PoC code. In light of the recent LND bug caused by an outdated policy check in one of their dependencies, I think well-engineered API/RPC calls would minimize the odds of such critical bugs in the future (orthogonal to how watchtower infrastructure or better error handling could have prevented this specific LND bug).

Happy to expand on the subject soon, though from my understanding the "mempool-as-L2-interface" pipeline is quite busy these days between full-RBF, v3 policy, package RBF, ephemeral anchors and more. If someone would like to push this forward, please do so (though there is a lot of conceptual work here).

@DrahtBot (Contributor) commented:
There hasn't been much activity lately and the patch still needs rebase. What is the status here?

  • Is it still relevant? ➡️ Please solve the conflicts to make it ready for review and to ensure the CI passes.
  • Is it no longer relevant? ➡️ Please close.
  • Did the author lose interest or time to work on this? ➡️ Please close it and mark it 'Up for grabs' with the label, so that it can be picked up in the future.

@fanquake (Member) commented Feb 6, 2023

Closing for now. It's not clear what the status of this is: it has needed rebase for more than 6 months and had been rebased on top of a PR that is now closed.

@fanquake closed this on Feb 6, 2023
@bitcoin locked and limited conversation to collaborators on Feb 6, 2024
9 participants