Covenant tools softfork #28550
Conversation
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Code Coverage: for detailed information about the code coverage, see the test coverage report.

Reviews: see the guideline for information on the review process.

Conflicts: reviewers, this pull request conflicts with the following ones:

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
Force-pushed from a3418a7 to 62794db
It would help if we could have an end-to-end proof-of-concept implementation of each use-case brought as a justification for the proposed soft-forked opcodes. Otherwise it's quite impossible to provide a sound technical review of the primitives' robustness and trade-offs, and to state what exactly they enable. And it sounds like we're set to repeat the loop of the last 3 or 4 years of covenant discussions. As a reminder, to take just the latest example of channel factories: most of the folks who have done real research on the subject still disagree on the security model and fundamental trade-offs of the proposed channel factory designs. As one of my technical peers put it on the mailing list a few months ago: "So I think that means that part of the "evaluation phase" should involve [...]". I fully share this opinion.
Having some code that works in proof-of-concept mode is not proof that it works in the real world at all. A theoretical description of a use case that contemplates all scenarios is worth a million times more. It's also a weirdly inverted criterion to say that a proposal enabling 20+ use cases is considered worse because only ~5 of them are actually implemented (25%), while a proposal with (say) 1 use case is considered better just because that single use case is implemented (100%). It should be the opposite. By that rationale, does every new use case someone comes up with for a proposal now count against that proposal?
I think it would be worth moving most conceptual discussion to the Delving Bitcoin thread, to avoid blasting the already-burdened GitHub notifications of contributors here.
Force-pushed from 62794db to 86454de
No; politely, I think your statement is nonsense. There is no common criterion accepted among Bitcoin developers, nor in the community, on what constitutes a valid "theoretical description". For some, a sufficient description is a mathematical formalization of the game theory of the use-case (e.g. section 11, "Calculations", of the original Bitcoin paper, on miner incentives). For others, a proof-of-concept in code is the "theoretical" description in itself (e.g. Bitcoin Core's libbitcoinkernel is a definition of the consensus rules). For a few more people, a theoretical description won't be complete without a security proof as understood under some assumption (e.g. the DL assumption) or model; e.g. Taproot has a security proof: https://github.com/apoelstra/taproot. In fact I think they're all "valid" descriptions that complement each other, i.e. each describes a use-case more accurately.

Proof-of-concepts, experiments, and formalized or logical descriptions have hundreds of years of successful track records in the fields of civil, mechanical and software engineering. As a reminder, Bitcoin is a $500B ecosystem relied on as critical infrastructure in the daily life of people in emerging countries and war zones. As a technical community, if we have a sincere wish to see this system survive on a decades-long horizon and keep being relied on, we should bind ourselves to the highest engineering standards, or at the least not downgrade from the development standards set up in the past, e.g. with the Taproot design, review and implementation process. I'm still stunned when I see some parts of the community, and even experienced developers, falling back on shamanism, Twitter pow-wows and seed-startup pitch decks as a design process in matters of consensus changes.
My thanks if you can propose a demonstration that proposal XYZ enables the said 20+ use cases without a scalability bottleneck or a cheap-to-exploit security issue. As a reminder, the original designer or team of designers of Bitcoin introduced the infamous OP_VER opcode in the early versions of the client without understanding it could provoke consensus partitioning between network agents. I believe we should stay very humble about how well we understand Bitcoin or Lightning. Any use-case can introduce a coupling between layers, and this is well documented by the IETF (see RFC 3439). As a matter of personal experience, one of the reasons to disregard the original stakes certificates (cf. the 2020 lightning-dev mailing list) as a solution to channel jamming was the concern of introducing unpredictable and spurious network mempool congestion spikes. There is no free lunch. Personally, I'm fine if we don't have covenant soft-forks during the next 10 years, despite my personal interest in the numerous use-cases brought or enhanced by covenant primitives. It's not like we're lacking heavy changes to harden Bitcoin, lower the computational costs for full-nodes, or make it more usable for the end-user. All Bitcoin needs to do to succeed in the long term is just to survive, and that's already ambitious. No more. Consensus fundamentals are okay.
Answering here, as I'll figure out later how to make the connection.
Vaults can be done today with pre-signed transactions; even if the trade-offs are different, it's a practical construction. I still have the huge concern that the "process fatigue" and operational complexity of vaults are too high, even to be swallowed by professional self-custody teams with $1B+ under management. If the wish is to improve the self-custody of Bitcoin end-users in the near term, the highest-yielding fruit sounds like things such as n-of-m FROST and Taproot timelocked backup branches. The latter already introduces fee-bumping management difficulties for wallets.
I think the state of the discussion about Lightning eltoo is this one: https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-December/003788.html. I don't know if a clear comparative analysis of all the eltoo designs (ln-symmetry, Daric, original eltoo) has been done, especially on the question of watchtowers.
False; this is a space-time trade-off, as you now have to bear the extended witness space cost as a routing node (Taproot logarithmic perf) if the DLC goes on-chain, and accordingly the level of fee-bumping you have to maintain. If we as a community care about covenants and advanced contracting protocols, I really think we should put the Great Consensus Cleanup on the table (#15482) and clean up the current consensus tech debt. You don't want miners launching timewarp attacks to pwn your $$$ vaults in a post-subsidy world. I won't champion it in the near future, though I'm more willing to spend review time on a Great Consensus Cleanup softfork.
I still maintain that softforks reducing the amount of "systemic risk"-level technical debt of the Bitcoin ecosystem should be given priority over covenant-enabling softforks, especially when there is a plausible or proven interdependency between the said technical debt issue and Bitcoin second-layers.

Timewarp attacks have been known to affect cryptocurrency ecosystems since at least 2011. By manipulating the timestamps of the first and last block of a 2016-block difficulty period, miners can manipulate the difficulty adjustment and increase the block issuance frequency. This frequency increase could be used to shorten the "effective duration" of the height-based timelocks of vaults and collaborative custody wallets below the "expected duration" foreseen by users or their watchtowers for reacting on time. While timewarp attacks require miner coordination and are publicly observable, there is a known proposal in the ecosystem ("forward blocks") to actually use this bug to increase miner income. In a post-subsidy world, miners could have incentives to exploit vulnerable vaults and wallets.

There is another publicly known systemic risk that requires a softfork to be fixed in a world with limited blockspace: the "thundering herd", or "forced expiration spam". Originally pointed out in the Lightning white paper (section 9.2), mass expiration spam happens when many off-chain time-sensitive transactions are broadcast to confirm soon, and there is not enough block capacity available before a subset of the timelocks expire. In today's Lightning Network, this would happen if the total weight of pending HTLC-timeouts (assuming the same nLocktime value N) across all channels exceeds the blockspace available until height N is reached. This "thundering herd" issue has been discussed more recently in the context of channel factories. Miner attacks on Bitcoin second-layers, and miners' incentives to carry them out, have been an area of research for years for Gleb and myself (see "On massive channel closing and fee bumping" and "Costless bribes against time-sensitive protocols").

While an answer has been sketched out on the unsuitability of using pre-signed transactions for vaults, it's hard to assert whether the proposed vault designs themselves will be practical enough from a "process fatigue" viewpoint (once you add the complexity of witness backup management, host and watchtower configuration, fee-bumping reserve provisioning, and key initialization ceremonies). "Process fatigue" is a clear design standard in the field of secure system design. Fairly enough, it's hard to evaluate "process fatigue" on a whiteboard. All I can do to demonstrate this point, if James has a robust enough proof-of-concept implementation, is to find time during the next 18 months to do a full execution of his vault design under real-world conditions and come up with a public write-up, in the image of what Peter has historically done for the Zcash setup ceremony. Not paid, for full independence; open to doing it in a pure hacker ethos, as I'll learn a lot. I'll let James speak to whether he's game for the demonstration. I've already reviewed his vault paper and, from LN infrastructure experience, I have a good idea of the operational constraints.

Zooming out, I still deeply believe we're stuck on the covenant discussions somehow because we have not taken the time to build and nurture the communication channels, tooling and spaces in which to discuss and build consensus in matters of consensus changes.
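For reference, here is a minimal standalone sketch of the retarget calculation at the heart of the timewarp attack described above. It is a simplified paraphrase of CalculateNextWorkRequired in Bitcoin Core's src/pow.cpp: the function name RetargetFactor and the use of a plain double ratio instead of 256-bit target arithmetic are illustrative simplifications.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>

// Simplified model of Bitcoin's difficulty retarget: only the timestamps of
// the first and last block of the 2016-block window enter the calculation,
// which is exactly what a timewarp attacker manipulates.
double RetargetFactor(int64_t first_time, int64_t last_time)
{
    const int64_t kTargetTimespan = 14 * 24 * 60 * 60; // two weeks, in seconds
    int64_t actual = last_time - first_time;
    // Consensus clamps the measured timespan to [target/4, target*4].
    actual = std::clamp(actual, kTargetTimespan / 4, kTargetTimespan * 4);
    // new_target = old_target * actual / kTargetTimespan; a factor > 1
    // raises the target, i.e. lowers the difficulty.
    return double(actual) / double(kTargetTimespan);
}

int main()
{
    // Honest timestamps yield a factor near 1. A forged last-block timestamp
    // far in the future inflates the measured timespan and drops difficulty
    // by up to 4x per period.
    std::printf("honest: %.2f, warped: %.2f\n",
                RetargetFactor(0, 14 * 24 * 60 * 60),
                RetargetFactor(0, 8 * 14 * 24 * 60 * 60));
}
```

Because the 4x clamp can be hit period after period, the effect compounds: block issuance accelerates, and height-based timelocks expire sooner in wall-clock terms than users or watchtowers expect.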
There have been attempts with the contracting primitives WG on IRC and the bitcoin-inquisition fork, though at this stage it's more about finding reliable people willing to do the maintenance. From experience, it is hard to permanently switch between the hats of maintainer and deep technical reviewer while ensuring substantial neutrality is observed. I'm certain that if we can find reliable people ready to commit themselves to animating those community-wide efforts (IRC meetings, bitcoin-inquisition review, or writing historical content for archiving purposes), OpenSats or another funder will be open to assigning some long-term support grants (OpenSats funded last month's CoreDev, thanks to them). As another alternative, I'm down to hold one of the keys of an n-of-m bitcoin wallet to fund future consensus change efforts, if other reliable and trustworthy folks can be found.

Taproot and Schnorr set a great engineering standard in matters of consensus development process thanks to a number of different initiatives, e.g. the Taproot open review over a few weeks at the end of 2019. I think the meatspace Optech workshops played a good role too in assessing industry and community appetite for the changes. Somehow there has been a break in this organic "tradition" with the pandemic, and I believe we're still paying the price of it as a community today. If anyone has more concrete and constructive suggestions on rolling the ball forward on covenant consensus changes and second-layers, and actually building consensus with proof-of-work in a responsible fashion, my thanks for expressing a respectful and polite opinion.

PS: I could say more, though it's Saturday evening in my timezone and I'll otherwise be late to go enjoy live music. PPS: Pardon my rough English if there's any confusion; I'm not a native speaker.
Force-pushed from 86454de to 4a481a1
… was conserved. This fuzz target copies the SignatureHashSchnorr function from Bitcoin Core 23.0 and checks the output of the APO-ready SignatureHashSchnorr from this branch against it. This is to make sure the behaviour of the function was not changed for non-ANYPREVOUT keys, which would make some previously valid signatures invalid and, even worse, some previously invalid signatures valid. Co-authored-by: James O'Beirne <github@au92.org>
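The differential check described in this commit message could look roughly like the sketch below. The names LegacySignatureHashSchnorr23, BuildSighashInputs and SighashInputs are hypothetical condensations for illustration only; the real SignatureHashSchnorr takes the transaction, input index, hash type, sig version and precomputed data as separate arguments, and the actual target in this branch may be structured differently.

```cpp
#include <test/fuzz/FuzzedDataProvider.h>
#include <test/fuzz/fuzz.h>
#include <uint256.h>

#include <cassert>

FUZZ_TARGET(apo_sighash_differential)
{
    FuzzedDataProvider fdp(buffer.data(), buffer.size());

    SighashInputs in;                         // hypothetical bundle of sighash arguments
    if (!BuildSighashInputs(fdp, in)) return; // hypothetical: derive tx, in_pos, hash_type, ...

    uint256 old_hash, new_hash;
    const bool old_ok = LegacySignatureHashSchnorr23(old_hash, in); // frozen 23.0 copy
    const bool new_ok = SignatureHashSchnorr(new_hash, in);         // APO-ready version

    // For non-ANYPREVOUT keys the two implementations must agree exactly:
    // neither validity nor the digest itself may change.
    assert(old_ok == new_ok);
    if (old_ok) assert(old_hash == new_hash);
}
```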
Implements a framework for accumulating "deferred checks" during vin script execution, and then executing those checks after all inputs have been validated individually. This facilitates new forms of validation that involve gathering information during vin script execution and then using that information to perform "aggregate" checks on the transaction as a whole. For vaults in particular, this allows us to conjoin operations across vaults with incompatible parameters into the same transaction, as well as to include unrelated inputs and outputs, which enables more flexible fee management. There are also applications for batch validation and cross-input signature aggregation. (A sketch of the pattern follows.)
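A minimal standalone sketch of the deferred-checks pattern this commit describes; the names DeferredCheckQueue, Defer and RunAll are illustrative, not the PR's actual API. Per-input script execution appends closures to an accumulator owned by the transaction-level validation; once every input has passed its individual checks, the aggregate checks run over the whole transaction.

```cpp
#include <functional>
#include <vector>

struct TxValidationState {};  // stand-in for the real validation context

class DeferredCheckQueue {
public:
    using Check = std::function<bool(const TxValidationState&)>;

    // Called from within per-input script execution, e.g. by an opcode that
    // needs a transaction-wide view (like OP_VAULT's cross-input checks).
    void Defer(Check c) { m_checks.push_back(std::move(c)); }

    // Called once, after all inputs have been validated individually.
    bool RunAll(const TxValidationState& state) const {
        for (const auto& check : m_checks) {
            if (!check(state)) return false;
        }
        return true;
    }

private:
    std::vector<Check> m_checks;
};
```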
OP_VAULT adds the ability to encumber coins in such a way that they must pass through a delay period before they can be spent, except via a predetermined path (the "recovery" path) to which they can be swept at any time during the life of the vault. Thanks to feedback from John Moffett, for explicating the recursive-eval attack and providing a test case. Co-authored-by: Greg Sanders <gsanders87@gmail.com> Co-authored-by: Anthony Towns <aj@erisian.com.au> Co-authored-by: Sanket Kanjalkar <sanket1729@gmail.com>
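The spend paths just described can be summarized with the following toy predicate. All names here (VaultSpendPath, VaultParams, VaultSpendAllowed) are hypothetical illustrations of the lifecycle, not the opcode's actual validation logic; see the OP_VAULT BIP draft for the real rules.

```cpp
#include <cstdint>

enum class VaultSpendPath { Trigger, Withdraw, Recover };

struct VaultParams {
    uint32_t spend_delay;  // blocks the coin must wait after a trigger
    // The recovery destination is fixed at vault creation (elided here).
};

// Hypothetical summary of when each path is valid.
bool VaultSpendAllowed(VaultSpendPath path, const VaultParams& params,
                       bool triggered, uint32_t blocks_since_trigger)
{
    switch (path) {
    case VaultSpendPath::Recover:  // sweep to the recovery path: any time
        return true;
    case VaultSpendPath::Trigger:  // start the withdrawal delay
        return !triggered;
    case VaultSpendPath::Withdraw: // only after the delay has elapsed
        return triggered && blocks_since_trigger >= params.spend_delay;
    }
    return false;
}
```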
Add a benchmark to help determine how to cost OP_VAULT operations.
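Such a benchmark might follow the usual src/bench/ shape, as in the sketch below. BuildVaultSpend and VerifyVaultSpend are hypothetical fixture helpers, and the single-argument BENCHMARK form matches the 23.x-era harness; the commit's real benchmark may measure different quantities.

```cpp
#include <bench/bench.h>

static void OpVaultVerify(benchmark::Bench& bench)
{
    // Hypothetical: build a transaction exercising the OP_VAULT paths once,
    // outside the measured loop.
    const auto [tx, spent_outputs] = BuildVaultSpend();

    bench.unit("verify").run([&] {
        // Measured body: full script verification of the prepared spend.
        VerifyVaultSpend(tx, spent_outputs); // hypothetical wrapper around VerifyScript
    });
}

BENCHMARK(OpVaultVerify);
```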
Force-pushed from 9c9fd93 to 5e59c07
Conceptual discussion here: https://delvingbitcoin.org/t/covenant-tools-softfork/98
This draft is a patch that activates the consensus changes outlined in BIP-118 (SIGHASH_ANYPREVOUT), BIP-119 (OP_CHECKTEMPLATEVERIFY), and OP_VAULT.
These changes make possible a number of use-cases that are broadly beneficial to users of Bitcoin, including vaults, more efficient DLCs, and eltoo-style Lightning channels (LN-Symmetry).
We also see that many speculative scaling solutions (e.g. Ark, Spacechains) require the ability to lock coins so that they can only be spent to a particular set of outputs, without any additional authorization (i.e. CTV, or APO's emulation of it).
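That shared primitive, sketched in isolation: the locking script commits to a hash of the permitted outputs, and validation just recomputes and compares, with no signature required. The snippet below is a toy model of the idea (BIP-119's real template hash commits to more fields, e.g. version, locktime and input data, and uses SHA256 rather than the toy hash here).

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

struct TxOut { int64_t value; std::string script_pubkey; };

// Toy stand-in for consensus serialization + SHA256.
uint64_t ToyOutputsHash(const std::vector<TxOut>& outputs)
{
    std::string ser;
    for (const auto& out : outputs) {
        ser += std::to_string(out.value) + ':' + out.script_pubkey + '|';
    }
    return std::hash<std::string>{}(ser);
}

// A spend is valid iff the spending transaction's outputs match the hash
// committed in the scriptPubKey -- no signature, no other authorization.
bool CheckOutputsCommitment(uint64_t committed_hash,
                            const std::vector<TxOut>& spending_outputs)
{
    return ToyOutputsHash(spending_outputs) == committed_hash;
}
```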
Scope of discussion
To prevent this thread from becoming overrun, please keep high-level, conceptual discussion to the related Delving Bitcoin thread. At this point in time, code nits probably aren't going to be useful either.
This draft has been posted to provide a tangible example of a softfork we might pursue, and a representation of the necessary code changes.
Activation method and parameters
Specific activation parameters have not been specified here (or are marked FIXME) to avoid putting the consensus cart before the horse. The activation method defaults to the one used for BIP-0341 and may change pending discussion.
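For context, defaulting to BIP-0341's method would mean versionbits deployment parameters along these lines inside the chainparams construction. The deployment name DEPLOYMENT_COVTOOLS and every value below are placeholders, consistent with the FIXMEs in the draft; only the NEVER_ACTIVE/NO_TIMEOUT constants and the min_activation_height field (used for Taproot's speedy trial) are real Bitcoin Core machinery.

```cpp
// Hypothetical sketch, as it might appear in the CChainParams constructor:
consensus.vDeployments[Consensus::DEPLOYMENT_COVTOOLS].bit = 4;  // FIXME: pick a bit
consensus.vDeployments[Consensus::DEPLOYMENT_COVTOOLS].nStartTime =
    Consensus::BIP9Deployment::NEVER_ACTIVE;                     // FIXME: set start time
consensus.vDeployments[Consensus::DEPLOYMENT_COVTOOLS].nTimeout =
    Consensus::BIP9Deployment::NO_TIMEOUT;                       // FIXME: set timeout
consensus.vDeployments[Consensus::DEPLOYMENT_COVTOOLS].min_activation_height = 0;
    // FIXME: cf. Taproot's min_activation_height of 709632
```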