Consider dropping `non_exhaustive` on `Network`, etc #2225
Comments
Specifically with For other types, I tend to agree with you. I think we should leave
`rust-bitcoin` 0.30 added `#[non_exhaustive]` to the `Network` enum, allowing them to "add support" for a new network type without a major version change in the future. When upgrading, we added a simple `unreachable` for the general match arm, which would break in a minor version change of `rust-bitcoin`. While it seems [possible rust-bitcoin will change this](rust-bitcoin/rust-bitcoin#2225), we still shouldn't be panicking, which we drop here in favor of a `debug_assert`ion and a default value.
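The pattern that commit describes might be sketched as follows. This is a minimal, self-contained sketch: the `Network` enum here is a local stand-in for rust-bitcoin's type, and `currency_prefix` is a hypothetical helper, not LDK's actual code.

```rust
// Local stand-in for rust-bitcoin 0.30's `Network`, defined here only so
// the sketch compiles on its own.
#[non_exhaustive]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Network {
    Bitcoin,
    Testnet,
    Signet,
    Regtest,
}

// Instead of `unreachable!()` in the wildcard arm (which would panic if a
// minor release of rust-bitcoin added a variant), fire a debug_assert in
// debug builds and fall back to a default value in release builds.
pub fn currency_prefix(network: Network) -> &'static str {
    match network {
        Network::Bitcoin => "bc",
        Network::Testnet | Network::Signet => "tb",
        Network::Regtest => "bcrt",
        #[allow(unreachable_patterns)]
        _ => {
            debug_assert!(false, "unknown `Network` variant");
            "bc" // arbitrary default rather than a panic
        }
    }
}
```

The trade-off is visible here: the wildcard arm is dead code today, but it is what `#[non_exhaustive]` forces every downstream match to carry.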
As a side note, I want to add that the addition of FWIW, as it's generally a bit of a nuisance that dependent crates currently need to create incompatible
Recently we added the `non_exhaustive` attribute to `Network` in an attempt to make the type forward compatible. This has proven to be an annoyance to downstream users because there is no obvious recourse for the unknown variant and matching on `Network` is common. Note that if/when a new network is added by Core it is a big deal so we can and should bump our version when we add support for it. Remove `#[non_exhaustive]` from the `Network` type. Fix: rust-bitcoin#2225
Agreed on removal where it makes sense. I'm not a huge fan of
NACK, I believe having the enum As such I believe the parameter is application-specific and cannot be universal unless the application is specifically built to support all possible networks. (This is possible; I did this for Firefish by using the magic as the network identifier when serialized, and native parsing methods for stringly inputs.) And if an application is not made universal, then it has to have its own type to represent the networks it supports, with conversions and such. I'm aware this is more annoying but, like many other things in Rust, it prevents problems by dealing with these things upfront.

I'm still interested in making the API easier. For instance I think using

Another approach we could take for selected applications is to add methods that convert values to and from their representation. E.g. we could add methods to return the LN currency string. But the conditions for inclusion would be very strict: we would have to be convinced that whenever Core releases support for a new network, the appropriate teams will define how it maps to their domain ASAP. So e.g. if Core adds

We could also provide a macro that makes it easier to define your own network types, if it helps.
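The "own type with conversions" approach suggested above might look like the following sketch. Everything here is hypothetical: `Network` is a local stand-in for rust-bitcoin's enum, and `AppNetwork`/`UnsupportedNetwork` are illustrative names, not a real API.

```rust
use std::convert::TryFrom;

// Local stand-in for rust-bitcoin's `Network`.
#[non_exhaustive]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Network {
    Bitcoin,
    Testnet,
    Signet,
    Regtest,
}

// The application's own type, covering only the networks it supports.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum AppNetwork {
    Mainnet,
    Regtest,
}

#[derive(Debug, PartialEq, Eq)]
pub struct UnsupportedNetwork(pub Network);

impl TryFrom<Network> for AppNetwork {
    type Error = UnsupportedNetwork;

    // Fallible conversion at the boundary: unsupported networks are
    // rejected upfront rather than deep inside business logic.
    fn try_from(network: Network) -> Result<Self, Self::Error> {
        match network {
            Network::Bitcoin => Ok(AppNetwork::Mainnet),
            Network::Regtest => Ok(AppNetwork::Regtest),
            other => Err(UnsupportedNetwork(other)),
        }
    }
}
```

With this shape, all code past the boundary matches exhaustively on `AppNetwork` and never sees a wildcard arm.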
IIUC, you're suggesting that dependent crates should be kept from using the
They already suffer the issue; they just wouldn't need to convert between networks inside business rules but at the boundary, which is good. I don't see how it's related to strings, since the crates can (and should) define their own strong types. Also, if there's a collection of crates that is guaranteed to support some specific set of networks, they should share a type.
That's not accurate - it doesn't force users to deal with a new network up front, it forces users to deal with an unknown network up front. If we remove
Another way to look at rust-bitcoin's
It doesn't force users to do that. It's entirely possible to design an application that uses no

The fact that you perceive it as forcing stems from you happening to work on a library where the protocol itself forces you to do something different. And because you're forced to use a specific set of networks you should define your own - which you did with the

For the opposite conversion, I have verified that we never return

```rust
let currency = match Network::from_chain_hash(chain_hash) {
    Some(Network::Bitcoin) => Currency::Bitcoin,
    Some(Network::Regtest) => Currency::Regtest,
    Some(Network::Testnet) => Currency::Testnet,
    Some(Network::Signet) => Currency::Signet,
    _ => return Err(UnknownChainHash(chain_hash)),
};
```

The code is pretty elegant IMO.
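The opposite, infallible direction alluded to here could be sketched like this. Both enums are simplified local stand-ins for the real types (lightning-invoice's actual `Currency` uses different variant names), defined locally so the sketch is self-contained.

```rust
// Simplified stand-ins for the two enums in the snippet above.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Network {
    Bitcoin,
    Testnet,
    Signet,
    Regtest,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Currency {
    Bitcoin,
    Testnet,
    Signet,
    Regtest,
}

// Every `Currency` maps to a known `Network`, so the conversion is total:
// no wildcard arm, no error type, and the compiler flags a missing arm if
// `Currency` ever gains a variant.
impl From<Currency> for Network {
    fn from(currency: Currency) -> Self {
        match currency {
            Currency::Bitcoin => Network::Bitcoin,
            Currency::Testnet => Network::Testnet,
            Currency::Signet => Network::Signet,
            Currency::Regtest => Network::Regtest,
        }
    }
}
```

The asymmetry is the point of the argument: only the `Network -> Currency` direction needs a fallible wildcard, and only because `Network` is `#[non_exhaustive]`.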
It's an unknown network for the protocol that doesn't define it. But what's worse, it will force breakage across the entire ecosystem and cause coordination nightmares whenever a new network is added. We're trying to bring the crate to the state where each crate can rely on 1.0 primitives and never worry about breakages again. Isn't it worth a few more lines in your code?
I disagree with this view in two ways. First, I personally see very little value in supporting anything other than mainnet and regtest for most applications. Regtest is a million times easier to work with than everything else when it comes to testing. So I don't understand why the type should have other variants if this is its definition. Second, the type really is "whatever Core supports"; this can be clearly seen from having
As you note, this is incredibly common. Sadly Bitcoin is filled with people defining their own network enum. Ensuring convertibility between them is a critical part of the
For example, given the BOLT 11 set is congruous with the rust-bitcoin set, I'd like to drop the
Fair enough, I'd be fine with that too. But that also doesn't need
Really? Can you point to specific examples? I haven't seen any other than
It already is, simply by the applications and, more importantly, protocols not being designed to be network-agnostic. You can't change it by making the enum not
I'm willing to accept it for many other things. But after fighting so hard to get this crate to stabilize one day being told that's not going to happen is absolutely horrible to hear. If that's the position of other maintainers I might be better off forking the crate.
Only today. It's perfectly possible it won't be in the future. E.g. I have an example of a network that I believe could be interesting: signed like signet but zero difficulty like regtest. And when that happens we get ecosystem-wide breakage forcing everyone to update or be incompatible.
Yeah, that's the incorrect thing. You should store |
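The suggestion here is to store consensus parameters rather than the enum itself. A sketch with a simplified, local `Params` (not rust-bitcoin's actual `Params` type; the mainnet/regtest magic bytes and RPC ports are Core's real values):

```rust
// Simplified stand-in for a consensus-parameters struct: code that stores
// parameters instead of matching on a Network enum keeps working unchanged
// when a new network appears -- it's just another Params value.
#[derive(Debug, Clone, PartialEq)]
pub struct Params {
    pub network_magic: [u8; 4],
    pub bech32_hrp: &'static str,
    pub rpc_port: u16,
}

impl Params {
    pub const MAINNET: Params = Params {
        network_magic: [0xf9, 0xbe, 0xb4, 0xd9],
        bech32_hrp: "bc",
        rpc_port: 8332,
    };
    pub const REGTEST: Params = Params {
        network_magic: [0xfa, 0xbf, 0xb5, 0xda],
        bech32_hrp: "bcrt",
        rpc_port: 18443,
    };
}

// Hypothetical consumer: no match on a network enum anywhere.
pub struct NodeConfig {
    pub params: Params,
}

impl NodeConfig {
    pub fn rpc_url(&self) -> String {
        format!("http://127.0.0.1:{}", self.params.rpc_port)
    }
}

fn main() {
    assert_eq!(Params::MAINNET.network_magic[0], 0xf9);
    let cfg = NodeConfig { params: Params::REGTEST };
    assert_eq!(cfg.rpc_url(), "http://127.0.0.1:18443");
    println!("ok");
}
```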
I'm really confused why
If that becomes commonly used enough that it merits inclusion in Bitcoin Core and rust-bitcoin then it probably is!
The whole point of that paragraph was me pointing out that it would be totally sensible to drop |
One note about the desire to drop |
Can this debate be reduced to the following question:
It's a trade-off, right? Upgrade to |
@TheBlueMatt I took a look at LDK and I found that nothing is broken, just I've searched for other instances of matching on @tcharding yes, that summary is accurate, though maybe |
I think that's generally the case, yes. I think it's totally reasonable to expect some amount of pain to use a whole new network, because it won't just impact the |
Great, so now we've pushed the dead-code error handling from rust-bitcoin down into LDK and then pushed it further into the code of our users, rather than just not having to handle the non-case at all :) |
I think there's one secondary concern beyond just "should supporting a new network require a new rust-bitcoin release" - in my previous comment I noted that
If we think this is a reasonable use of the |
Those are entirely contained within
They already have to handle invalid inputs when they parse and if they use
I don't think so. Either the library is written to work with any network in which case they can be updated with no code change at all or the library for some reason cannot support other networks in which case it will just return errors for the unknown ones and the applications using them will propagate those errors. Nothing really has to be broken. I was thinking of putting the network into a separate crate (with whatever other things are likely to break for stupid reasons like someone else releasing something) and then make the crate optional everywhere but I don't think it works since it forces Anyway, since you're annoyed about |
No library can be written to handle "any hypothetical future network" but it's easy in most cases to write a library that supports all the existing networks. I agree with Matt that adding new networks is (a) extremely rare, and (b) likely to break things beyond the enum, even if those are things that 99% of users will never touch. So by making the enum |
Funny that you say this because I wrote such a library. :) It is perfectly possible if there's no silly protocol that would invent its own representations for networks. In my case I encode I think we can decouple Still it sucks that any software that did this right will get punished because the majority of stuff is badly designed. |
This sounds like you throw away the type information from |
That's only done during parsing which is already fallible so it doesn't matter (and the compiler will easily remove the dead branch).
Is there actual real code that needs that, though? Even LDK, a super-big crate, didn't have a single line that needed to be changed apart |
Apologies, I stepped away from this issue because I was feeling incredibly frustrated, and then got busy with other things and eventually holidays.
This is an oversimplification. Sure, if you're only doing very trivial things maybe, but what happens if we add a
Yes, the LDK multi-language bindings expose a concept of |
They don't. Applications that want to support the new network need to switch from Libraries meanwhile can continue to use whatever combination of |
Please forward those complaints to us. Just knowing the number of them and not their kind or "quality" isn't very helpful.
That can only happen if the LN developers give up their authority to define HRP for invoices to the rust-bitcoin developers. Let's say they are lazy/forgetful just like whoever didn't assign a special signet HRP for bech32, what will you do then? There logically cannot be one enum. At best there can be two: one for LN -
OK, let's try to progress on this then. I'll try to split it up after
Well, the funny part is if they all have the same enum they just trade immediate benefit for significant breakages later. And it's likely they don't realize it. It's the same old story about Rust: some API looks too complex and later you realize it's because of some reason you didn't even know about and it actually makes sense. And then people say things like Rust is difficult or complain at various forums where more experienced people have to explain to them that if they don't want the safety/security benefits provided by Rust maybe they should use something else... (Not trying to shame anyone here, just try to look at things from Rust perspective. I understand it takes time.) |
I believe they're all in this thread, both the other issue and the several instances of LDK preferring to use
Oh come on. Back in the real world, there are a very small number of
No, they don't, this assumes some future where rust-bitcoin starts supporting a new network and all the other crates upgrade, but also don't want to support the new network and somehow aren't able to invest the time to create a new enum at that juncture. Please let's stop making up fanciful scenarios of how new networks may come to be that just aren't realistic and not design our API around it. |
I'm really struggling to understand this point of view and I've read this entire thread. If the enum is It seems like if we want to imagine that new future networks will match existing networks in significant ways (e.g. using bech32m but with different HRPs, using the same network protocol but with different constants) then we can make our imagining explicit by using |
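One shape such an explicit encoding could take, sketched with local stand-ins (the `Main`/`Test` split mirrors the `NetworkKind` type rust-bitcoin later shipped; the WIF version bytes are Core's real ones):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Network { Bitcoin, Testnet, Signet, Regtest }

// The explicit assumption: there is mainnet, and there is everything else.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum NetworkKind { Main, Test }

impl From<Network> for NetworkKind {
    fn from(n: Network) -> Self {
        match n {
            Network::Bitcoin => NetworkKind::Main,
            // Every current non-mainnet network shares test-network
            // encodings, so a future network would most plausibly land here.
            _ => NetworkKind::Test,
        }
    }
}

impl NetworkKind {
    /// WIF private-key version byte: 0x80 on mainnet, 0xef on test networks.
    pub fn wif_prefix(self) -> u8 {
        match self {
            NetworkKind::Main => 0x80,
            NetworkKind::Test => 0xef,
        }
    }
}

fn main() {
    assert_eq!(NetworkKind::from(Network::Signet), NetworkKind::Test);
    assert_eq!(NetworkKind::from(Network::Bitcoin).wif_prefix(), 0x80);
    println!("ok");
}
```

Code that only cares about the main/test distinction (address prefixes, key serialization) can take a `NetworkKind` and never needs a wildcard arm over the full enum.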
I mean, I would like to see specific code and more examples of it. What I want to know is if all of them are "I can't be bothered to write my own" or if there are more serious issues. It's the exact thing you pointed out: I see mainly rust-bitcoin and not that much of wider ecosystem.
In the real world we have a ton of incoherent garbage because someone didn't specify a legacy address prefix when coming up with regtest, then someone didn't specify an HRP when coming up with signet, then I remember hearing the term simnet which isn't even in Core, and now I'm supposed to believe that lightning developers are the bright exception that will swiftly standardize a unique HRP for any new network that is added to Core? Even though there are multiple teams that already have a hard time agreeing on things? I'd love to see some track record that would prove otherwise and I'd absolutely love not having 3+ different enums in the codebase, but if this is the reality, rather than hiding from it and hitting problems down the line I want it represented in the type system today so I can be confident it won't break horribly in the future.
It doesn't need to assume the latter because if the enum is exhaustive then a single crate upgrading will break everything unless every single upstream crate wants to backport everything just to keep the old And even if we did this, the history says that most likely there will be some ridiculous edge case like unspecified HRP and people will start using an existing one and then things stop roundtripping and you'll be pulling your hair out trying to clean up the mess.
I know the theoretical problems with
Seems pretty clear to me: only changes to how block headers are validated and only soft-forks when it comes to other changes. (Note that every app already must assume soft forks.) But surely, we should document that. Also even if it's untestable, anything that's not mainnet is not a big deal and mainnet is already defined.
Yes, I want that.
Yes, unfortunately, nothing encourages them to be - we wouldn't have this discussion otherwise. But whatever, that's on them. If |
From some test code within Liquid, setting up a bitcoin.conf file section for a specific network:

```rust
match self.network {
    Some(bitcoin::Network::Bitcoin) | None => {},
    Some(bitcoin::Network::Testnet) => {
        writeln!(w, "testnet=1")?;
        if version > 17_00_00 {
            writeln!(w, "[testnet]")?;
        }
    }
    Some(bitcoin::Network::Signet) => {
        writeln!(w, "signet=1")?;
        if version > 17_00_00 {
            writeln!(w, "[signet]")?;
        }
    }
    Some(bitcoin::Network::Regtest) => {
        writeln!(w, "regtest=1")?;
        if version > 17_00_00 {
            writeln!(w, "[regtest]")?;
        }
    }
}
```

What should we do here in case of "unknown network"? (This is test-harness code so on some level it doesn't matter, but it's easy to imagine something like this being used in production.)
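One hedged alternative for the snippet above, collapsing the per-network arms into a name lookup. It assumes a Core version new enough to accept the `chain=<name>` option and `[<section>]` syntax (so it drops the pre-0.17 handling), and `to_core_arg` here is a local stand-in for rust-bitcoin's `Network::to_core_arg`:

```rust
use std::fmt::Write;

// Local stand-in for `bitcoin::Network`.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Network { Bitcoin, Testnet, Signet, Regtest }

// The strings are Core's `-chain=` names, which `Network::to_core_arg`
// returns in rust-bitcoin.
fn to_core_arg(network: Network) -> &'static str {
    match network {
        Network::Bitcoin => "main",
        Network::Testnet => "test",
        Network::Signet => "signet",
        Network::Regtest => "regtest",
    }
}

// One arm for "anything that isn't mainnet": a new network only requires
// extending the name table, not this function.
fn write_network_section(w: &mut String, network: Option<Network>) -> std::fmt::Result {
    match network {
        None | Some(Network::Bitcoin) => {}
        Some(n) => {
            let name = to_core_arg(n);
            writeln!(w, "chain={}", name)?;
            writeln!(w, "[{}]", name)?;
        }
    }
    Ok(())
}

fn main() {
    let mut conf = String::new();
    write_network_section(&mut conf, Some(Network::Signet)).unwrap();
    assert!(conf.contains("chain=signet"));
    assert!(conf.contains("[signet]"));
    println!("ok");
}
```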
So, we'd never have a test network with confidential transactions, say, or which replaced all the crypto with post-quantum equivalents, or more generally which tried some big change without bothering to define and test a soft-fork transition mechanism? You may be correct. But it's not a promise we can make from the rust-bitcoin API. |
Why not use |
In this case that would happen to work, because And any future network would still need to be slightly different, because at the very least there would be no pre-0.17 syntax to generate and we'd presumably need to panic or something there. So basically, yes, we could, but we'd still need to make assumptions about the future network, and the comments describing these assumptions would be longer and less reliable than the existing code. |
If a new network is added you'll have to "panic or something" on older versions anyway so you might as well write the code now. But I don't see a version check for the network in the code you posted so either it's a bug in your code or you intentionally don't care about it. But also, this just gave me an idea that we should have a method returning the version Core added the network in. (Regardless of your use case.) If you think making these assumptions is unacceptable for you then you really should make your own enum for networks which will give you the option to upgrade whenever you need it rather than being forced to either change the code or miss new features in |
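The "make your own enum" recourse, sketched: a crate-local exhaustive enum that converts infallibly into a (stand-in) `bitcoin::Network` and fallibly back, so only this crate decides when it starts supporting a new network:

```rust
use std::convert::TryFrom;

// Stand-in for `bitcoin::Network`, treated as non_exhaustive upstream.
#[non_exhaustive]
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Network { Bitcoin, Testnet, Signet, Regtest }

// Crate-local and exhaustive: this crate upgrades it on its own schedule,
// and every match on it stays checked by the compiler.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum MyNetwork { Bitcoin, Regtest }

impl From<MyNetwork> for Network {
    fn from(n: MyNetwork) -> Self {
        match n {
            MyNetwork::Bitcoin => Network::Bitcoin,
            MyNetwork::Regtest => Network::Regtest,
        }
    }
}

#[derive(Debug, PartialEq)]
pub struct UnsupportedNetwork;

impl TryFrom<Network> for MyNetwork {
    type Error = UnsupportedNetwork;
    fn try_from(n: Network) -> Result<Self, Self::Error> {
        match n {
            Network::Bitcoin => Ok(MyNetwork::Bitcoin),
            Network::Regtest => Ok(MyNetwork::Regtest),
            // Also absorbs any variant added upstream in the future.
            _ => Err(UnsupportedNetwork),
        }
    }
}

fn main() {
    let n: Network = MyNetwork::Regtest.into();
    assert_eq!(n, Network::Regtest);
    assert_eq!(MyNetwork::try_from(Network::Testnet), Err(UnsupportedNetwork));
    println!("ok");
}
```

A minor upstream release adding a variant then lands in the fallible arm at the boundary instead of breaking code throughout the crate.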
A version check for what network?
Yes. If we were using This is why I continue not to see any "libcopalypse" scenario happening, nor do I see a problem with just adding a |
Something like
It's less of a problem if we do it right and also less of a problem for consumers of downstream crates that do it right. I'm not hyped enough to say it's non-issue in that case but it might be. |
Core is never going to update so that existing configuration files stop working. So I'm not sure what a Core version check would accomplish.
If "do it right" means putting |
Very similar thing your
No, it's doing all these traits and making sure nothing returns it etc... |
Ok, perfect, we are in agreement there :). So if we do things right in rust-bitcoin, the
In fact, we could do any combination of these since none of them actually interfere with each other (though it would probably be confusing!). My personal view at this point is
I continue to think we should have an exhaustive enum, with a bunch of documentation about how to use it properly. I am warming up to the "don't have an enum, just have rules for defining your own" but I think this would be needless work for people who just want to grab an off-the-shelf set of networks or people who want to make their own enum but want to see an example of how to do it first. |
Yeah, pretty much it. I want combination of having
Yep. |
Cool, if I'm following, we have rough consensus here now, and a solution on how to progress. We can get this done before next release, right? @TheBlueMatt are you in agreement here or have we missed something? If we haven't I'll get to work. This still leaves v31 as an annoying release for users though. Can we resolve that, remove |
I doubt it; I suspect we need to progress with crate splitting more. But if you find a way to do it I'm fine with it.
And we'll be putting it back in the next one and then people will have to fix their code again. It's not really worth it. |
Fair and fair. I'll just do nothing then :) |
What would the crate dep tree look like here? Would we have two separate crates and rust-bitcoin depends on both of them? Where does |
Currently I'm not sure if the I think we'll see it more clearly when we're closer. There's other stuff to do now. |
Ok, so crate(s) that depend on rust-bitcoin, rather than the other way around. Ok, makes sense. I suspect both the non-exhaustive and the exhaustive one could go into the same crate -- aside from naming conflicts. Because the non_exhaustive type can be expanded in minor versions, and when the exhaustive type forces major versions, we could semver-trick the non-exhaustive one to prevent breakage. Maaaybe we could even use the same type but have exhaustiveness be feature-gated (this would be a super weird feature-gate that'd turn off the semver trick, so maybe it just doesn't work). Anyway agreed that there are higher priorities right now and happy to revisit when we're closer. |
They can't because they have to be versioned differently. Semver trick would have to maintain old versions indefinitely and cause crate duplication which would trigger CI checks and people would then have to add exceptions. Semver trick is great but not having to use it in the first place is better.
Strictly speaking, removing |
Removing the milestone from this issue, we have #2541 to hopefully alleviate the current pain. |
Sure, I'm entirely burnt out on this issue and don't really have the bandwidth to argue more. The suggested workaround makes sense to get to to me. |
f6467ac Minimize usage of Network in public API (Tobin C. Harding) 3ec5eff Add Magic::from_params (Tobin C. Harding) Pull request description: Minimize usage of the `Network` enum in the public API. See #2225 for context, and #1291 (comment) for an interpretation of that long discussion. Close: #2169 ACKs for top commit: sanket1729: reACK f6467ac. apoelstra: ACK f6467ac Tree-SHA512: f12ecd9578371b3162382a9181f7f982e4d0661915af3cfdc21516192cc4abb745e1ff452649a0862445e91232f74287f98eb7e9fc68ed1581ff1a97b7216b6a
In 0.30, `#[non_exhaustive]` was added to the `Network` enum in what I assume was a step towards stabilization with forwards compatibility. `#[non_exhaustive]` doesn't, itself, provide forwards compatibility with minor version changes, but rather forces downstream devs to think about it, and puts the burden for forwards compatibility on them. This only works if downstream projects are able to have some kind of sensible default behavior for "unknown future bitcoin-style network", which isn't super practical. For example, this came up in the `lightning-invoice` crate: we have to convert from the rust-bitcoin `Network` to a BOLT11-specific `Currency` - we cannot reasonably have forward-compatible code that will "just work" if a new enum variant is added, so an `unreachable` was added. That will obviously be removed, but there's really nothing we can do to be forwards compatible here, and instead we would strongly prefer to see a version bump of rust-bitcoin so that we can update our code to handle entire new bitcoin-compatible networks.