diff --git a/CHANGELOG.md b/CHANGELOG.md index a35f38d05a6..04801919332 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,387 @@ +# 0.2 - TODO: Add release date, 2025 - "Natively Asynchronous Splicing" + +## API Updates + + * Splicing is now supported. The current implementation is expected to be + compatible with Eclair and future versions of CLN, but may change feature + signaling in a future version as testing completes, breaking compatibility. + Support for accepting splices is gated on + `UserConfig::reject_inbound_splices`. Outbound splices can be initiated with + `ChannelManager::splice_channel`. + * Various APIs have been updated to offer a native Rust async API. All + newly-async traits and structs have a `*Sync` variant which offers the + same API but with sync methods: + * `KVStore` has been made async. Note that `KVStore` methods are not + `async fn`, but rather write ordering is fixed when the methods return, + though write completion is async. + * `BumpTransactionEventHandler` is now backed by an async `WalletSource` (and + `Wallet`) or an async `CoinSelectionSource` and is now async. Sync versions + are in the new `events::bump_transaction::sync` submodule (#3752). + * `OutputSweeper` is now backed by an async `KVStore` and + `ChangeDestinationSource` and is now async (#3819, #3734, #4131). + * `MonitorUpdatingPersisterAsync` and `ChainMonitor::new_async_beta` were + added for async `ChannelMonitor` persistence. Note that this feature is + still considered beta (#4063). + * An initial version of async payments is now supported. The current + implementation is specific to LDK and only LDK supports paying static + invoices. However, because BOLT 12 invoice requests will gracefully "upgrade" + to a non-static invoice when the recipient comes online, offers backed by + static invoices are expected to be payable by any BOLT 12 payer. 
+ With an LDK-based LSP, often-offline clients should set + `UserConfig::hold_outbound_htlcs_at_next_hop` and call + `ChannelManager::set_paths_to_static_invoice_server`. + LDK-based LSPs wishing to support often-offline senders and recipients should + set `UserConfig::enable_htlc_hold`, support the existing "onion message + mailbox" feature (setting `intercept_messages_for_offline_peers` on + `OnionMessenger` and handling `Event::OnionMessageIntercepted`s), and handle + `Event::PersistStaticInvoice`s and `Event::StaticInvoiceRequested`s. + * Zero-Fee-Commitment channels are now supported in LDK. These channels remove + force-closure risk for feerate disagreements by using a fixed, zero fee on + pre-signed transactions, relying on anchor bumps instead. They also utilize + the new TRUC + ephemeral dust policy in Bitcoin Core 29 to substantially + improve the lightning security model. This requires having a path of Bitcoin + Core 29+ nodes between you and a miner for transactions to be mined. This + only works with LDK peers, and feature signaling may change in a future + version of LDK, breaking compatibility. This is negotiated automatically for + manually-accepted inbound channels and negotiated for outbound channels based + on `ChannelHandshakeConfig::negotiate_anchor_zero_fee_commitments`. + * `Event::BumpTransaction` is now always generated even if the transaction has + sufficient fee. This allows you to manage transaction broadcasting more + granularly for anchor channels (#4001). + * The local key which receives non-HTLC-encumbered funds when the counterparty + force-closes a channel is now one of a static list of 1000 keys when using + `KeysManager` if `v2_remote_key_derivation` is set or after splicing (#4117). + * LSPS5 support was added, providing a push notification API for LSPS clients. + * Client-trusts-LSP is now supported on LSPS2 service (#3838). + * `LSPS2ClientEvent` now has events for failure events (#3804). 
+ * `LSPS2ServiceHandler::channel_open_abandoned` was added (#3712). + * `Event::PendingHTLCsForwardable` has been replaced with regular calls to + `process_pending_htlc_forwards` in the background processor while + `ChannelManager::needs_pending_htlc_processing` is true. The delay between + calls (and, thus, HTLC forwarding delay) is random between zero and 200ms, + averaging 50ms, faster than the previous recommendation (#3891, #3955). + * `Event::HTLCHandlingFailed`s now include a `LocalHTLCFailureReason`, providing + much more granular reasons for HTLCs having been failed (#3744, etc). + * `Event::HTLCHandlingFailed` is now generated any time forwarding an HTLC + fails, i.e. including cases where the HTLC onion is invalid (#2933). + * `Event::HTLCHandlingFailed::failure_type` of `UnknownNextHop` has been + deprecated and is no longer generated (#3700). + * `OffersMessageFlow` was introduced to make it easier to implement most of the + BOLT 12 flows without using a `ChannelManager` (#3639). + * `ChannelManager::pay_for_bolt11_invoice` was added (#3617). + * `ChannelManager::pay_for_offer_from_human_readable_name` has been deprecated + in favor of the `bitcoin-payment-instructions` crate and + `ChannelManager::pay_for_offer_from_hrn`. Language bindings users may still + wish to use the original (#3903, #4083). + * `lightning::util::anchor_channel_reserves` was added to assist in estimating + on-chain fund requirements for anchor channel closures (#3487). + * Using both asynchronous and synchronous `ChannelMonitor[Update]` persistence + on the same `ChannelManager` will now panic. This never functioned correctly + and is now detected to prevent issues (#3737). + * LDK can now validate if HTLC errors have been tampered with (once nodes + upgrade). It also reports and logs the amount of time an HTLC was held so + that (as nodes upgrade) slow nodes can be found (#2256, #3801, other fixes).
+ * Repeated `Listen::block_disconnected` calls for each disconnected block in a + reorg have been replaced with a single `blocks_disconnected` call with the + fork point block (i.e. the highest block on both sides of the reorg, #3876). + * `lightning::routing::scoring::CombinedScorer` was added to combine scoring + data between remote scoring info and local payment results (#3562). + * LDK will now store up to 1KiB of "peer storage" data in `ChannelManager` per + peer with which we have a funded channel (#3575). + * The `Persister` trait was removed. You can match on namespace constants in + `KVStore` to restore custom logic for specific storage objects (#3905). + * `BlindedMessagePath::new_with_dummy_hops` was added (but is not used by + default, #3726). You can use `NodeIdMessageRouter` to enable dummy hops. + * `ProbabilisticScoringFeeParameters::probing_diversity_penalty` was added to + allow for better information gathering while probing (#3422, #3713). + * `Persist` now takes a `MonitorName` rather than a `funding_txo` `OutPoint` to + ensure the storage key is consistent across splices (#3569). + * `lightning-liquidity` now supports persisting relevant state (#4059, #4118). + * `ChannelManager::funding_transaction_generated_manual_broadcast` was added to + open a channel without automatically broadcasting the funding transaction + (#3838). In it and in `unsafe_manual_funding_transaction_generated`, + force-closure logic has been updated to no longer automatically broadcast the + commitment tx unless the funding transaction has been seen on-chain (#4109). + * Various instances of channel closure which provided a + `ClosureReason::HolderForceClosed` now provide more accurate + `ClosureReason`s, especially `ClosureReason::ProcessingError` (#3881). + * A new `ClosureReason::LocallyCoopClosedUnfundedChannel` was added (#3881). + * Some arguments to `ChannelManager::pay_for_offer[_from_human_readable_name]` + have moved behind `optional_params` (#3808, #3903).
+ * `Event::PaymentSent::bolt12_invoice` was added for proof-of-payment (#3593). + * Channel values are now synchronized via RGS, improving scoring (#3924). + * `SendOnlyMessageHandler` was added, implemented for `ChainMonitor`, and + an instance added to `MessageHandler`. Note that `ChainMonitor` does not yet + send any messages, though will in the future (#3922). + * `lightning_background_processor::NO_{ONION_MESSENGER,LIQUIDITY_MANAGER}` were + added to simplify background processor init without some args (#4100, #4132). + * `ChannelManager::set_current_config` was added (#4038). + * Onion messages received to a blinded path we generated are now authenticated + implicitly rather than explicitly in blinded path `Context`s (#3917, #4144). + * `OMNameResolver::expire_pending_resolution` has been added for those who + cannot or do not wish to call `new_best_block` regularly (#3900). + * `lightning-liquidity`'s LSPS1 client now supports BOLT 12 payments (#3649). + * `LengthReadable::read` has been renamed `read_from_fixed_length_buffer` and + is implemented for all `Readable` (#3579). + * `LengthReadable` is now required to read various objects which consume the + full available buffer (#3640). + * Structs in `lightning-liquidity` were renamed to be globally unique (#3583). + * Renamed `SpendableOutputDescriptor::outpoint` to `spendable_outpoint` (#3634) + +## Performance Improvements + * `ChainMonitor::load_existing_monitor` was added and should be used on startup + to load existing `ChannelMonitor`s rather than via `Persist`, avoiding + re-persisting each `ChannelMonitor` during startup (#3996). + * RGS data application was further sped up (#3581). + +## Bug Fixes + * `FilesystemStore::list` is now more robust against race conditions with + simultaneous `write`/`archive` operations (#3799). + * Pending async persistence of `ChannelMonitorUpdate`s required to forward an + HTLC can no longer result in the HTLC being forgotten if the channel is + force-closed (#3989). 
+ * `lightning-liquidity`'s service support now properly responds to the + `ListProtocols` message (#3785). + * A rare race which might lead `PeerManager` (and `lightning-net-tokio`) to + stop reading from a peer until a new message is sent to that peer has been + fixed (#4168). + * The fields in `SocketAddress::OnionV3` are now correctly parsed, and the + `Display` for such addresses is now lowercase (#4090). + * `PeerManager` is now more conservative about disconnecting peers which aren't + responding to pings in a timely manner. This may reduce disconnections + marginally when forwarding gossip to a slow peer (#4093, #4096). + * Blinded path serialization is now padded to better hide its contents (#3177). + * In cases of incredibly long async monitor update or async signing operations, + LDK may have previously spuriously disconnected peers (#3721). + * Total dust exposure on a commitment now rounds correctly (#3572). + +## Backwards Compatibility + * `ChannelMonitor`s which were created prior to LDK 0.0.110 and which saw no + updates since LDK 0.0.116 may now fail to deserialize (#3638, #4146). + * Setting `v2_remote_key_derivation` on `KeysManager` to true, or splicing a + channel results in using keys which prior versions of LDK do not know how to + derive. This may result in missing funds or panics trying to sweep closed + channels after downgrading (#4117). + * After upgrading to 0.2, downgrading to versions of LDK prior to 0.0.123 is no + longer supported (#2933). + * Upgrading from versions prior to 0.0.116 is not supported (#3604, #3678). + * Upgrading to v0.2.0 will time out any pending async payment waiting for the + often offline peer to come online (#3918). + * Blinded message paths generated by previous versions of LDK, except those + generated for inclusion in BOLT 12 `Offer`s will no longer be accepted. As + most blinded message paths are ephemeral, this should only invalidate issued + BOLT 12 `Refund`s in practice (#3917). 
+ * Once a channel has been spliced, LDK can no longer be downgraded. + `UserConfig::reject_inbound_splices` can be set to block inbound ones (#4150) + * Downgrading after setting `UserConfig::enable_htlc_hold` is not supported + (#4045, #4046). + * LDK now requires the `channel_type` feature in line with spec updates (#3896) + +TODO release stats + + +# 0.1.7 - Oct 21, 2025 - "Unstable Release CI" + +## Bug Fixes + * Builds with the `docsrs` cfg flag (set automatically for builds on docs.rs + but otherwise not used) were fixed. + + +# 0.1.6 - Oct 10, 2025 - "Async Preimage Claims" + +## Performance Improvements + * `NetworkGraph::remove_stale_channels_and_tracking` has been sped up by more + than 20x in cases where many entries need to be removed (such as after + initial gossip sync, #4080). + +## Bug Fixes + * Delivery of on-chain resolutions of HTLCs to `ChannelManager` has been made + more robust to prevent loss in some exceedingly rare crash cases. This may + marginally increase payment resolution event replays on startup (#3984). + * Corrected forwarding of new gossip to peers which we are sending an initial + gossip sync to (#4107). + * A rare race condition may have resulted in outbound BOLT12 payments + spuriously failing while processing the `Bolt12Invoice` message (#4078). + * If a channel is updated multiple times after a payment is claimed while using + async persistence of the `ChannelMonitorUpdate`s, and the node then restarts + with a stale copy of its `ChannelManager`, the `PaymentClaimed` may have been + lost (#3988). + * If an async-persisted `ChannelMonitorUpdate` for one part of an MPP claim + does not complete before multiple `ChannelMonitorUpdate`s for another channel + in the same MPP claim complete, and the node restarts twice, the preimage may + be lost and the MPP payment part may not be claimed (#3928). + +## Security +0.1.6 fixes a denial of service vulnerability and a funds-theft vulnerability. 
+ * When a channel has been force-closed, we have already claimed some of its + HTLCs on-chain, and we later learn a new preimage allowing us to claim + further HTLCs on-chain, we could in some cases generate invalid claim + transactions leading to loss of funds (#4154). + * When a `ChannelMonitor` is created for a channel which is never funded with + a real transaction, `ChannelMonitor::get_claimable_balances` would never be + empty. As a result, `ChannelMonitor::check_and_update_full_resolution_status` + would never indicate the monitor is prunable, and thus + `ChainMonitor::archive_fully_resolved_channel_monitors` would never remove + it. This allows a peer which opens channels without funding them to bloat our + memory and disk space, eventually leading to denial-of-service (#4081). + + +# 0.1.5 - Jul 16, 2025 - "Async Path Reduction" + +## Performance Improvements + * `NetworkGraph`'s expensive internal consistency checks have now been + disabled in debug builds in addition to release builds (#3687). + +## Bug Fixes + * Pathfinding which results in a multi-path payment is now substantially + smarter, using fewer paths and better optimizing fees and successes (#3890). + * A counterparty delaying claiming multiple HTLCs with different expiries can + no longer cause our `ChannelMonitor` to continuously rebroadcast invalid + transactions or RBF bump attempts (#3923). + * Reorgs can no longer cause us to fail to claim HTLCs after a counterparty + delayed claiming multiple HTLCs with different expiries (#3923). + * Force-closing a channel while it is blocked on another channel's async + `ChannelMonitorUpdate` can no longer lead to a panic (#3858). + * `ChannelMonitorUpdate`s can no longer be released to storage too early when + doing async updates or on restart. This only impacts async + `ChannelMonitorUpdate` persistence and can lead to loss of funds only in rare + cases with `ChannelMonitorUpdate` persistence order inversions (#3907). 
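Persistence-ordering bugs like the `ChannelMonitorUpdate` release-order issue above are what the 0.2 async `KVStore`'s call-time ordering rule guards against. A hypothetical, standalone sketch of that contract (this is illustrative only and not LDK's actual `KVStore` trait): the logical contents of the store are updated synchronously when `write` is called, while durable completion is signaled later.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread::{self, JoinHandle};

// Hypothetical store (not LDK's API) demonstrating "call-time ordering":
// later reads and writes observe this write immediately, so write ordering
// is fixed when the method returns even though durability lags behind.
#[derive(Clone, Default)]
struct OrderedStore {
    current: Arc<Mutex<HashMap<String, Vec<u8>>>>,
}

impl OrderedStore {
    // Returns a handle that completes once the write is "durable".
    fn write(&self, key: &str, value: Vec<u8>) -> JoinHandle<()> {
        // The in-memory view is updated synchronously, at call time.
        self.current.lock().unwrap().insert(key.to_owned(), value);
        thread::spawn(|| {
            // Stand-in for the slow durable write (fsync, network round-trip, ...).
        })
    }

    fn read(&self, key: &str) -> Option<Vec<u8>> {
        self.current.lock().unwrap().get(key).cloned()
    }
}
```

Because ordering is decided at call time, two back-to-back writes to the same key can never race each other into storage out of order, which is the property the fuzz harness's "call-time ordering semantics" comment exercises.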
+ +## Security +0.1.5 fixes a vulnerability which could allow a peer to overdraw their reserve +value, potentially cutting into commitment transaction fees on channels with a +low reserve. + * Due to a bug in checking whether an HTLC is dust during acceptance, near-dust + HTLCs were not counted towards the commitment transaction fee, but did + eventually contribute to it when we built a commitment transaction. This can + be used by a counterparty to overdraw their reserve value, or, for channels + with a low reserve value, cut into the commitment transaction fee (#3933). + + +# 0.1.4 - May 23, 2025 - "Careful Validation of Bogus States" + +## Bug Fixes + * In cases where using synchronous persistence with higher latency than the + latency to communicate with peers caused issues fixed in 0.1.2, + `ChannelManager`s may have been left in a state which LDK 0.1.2 and later + would refuse to deserialize. This has been fixed and nodes which experienced + this issue prior to 0.1.2 should now deserialize fine (#3790). + * In some cases, when using synchronous persistence with higher latency than + the latency to communicate with peers, when receiving an MPP payment with + multiple parts received over the same channel, a channel could hang and not + make progress, eventually leading to a force-closure due to timed-out HTLCs. + This has now been fixed (#3680). + +## Security +0.1.4 fixes a funds-theft vulnerability in exceedingly rare cases. + * If an LDK-based node funds an anchor channel to a malicious peer, and that + peer sets the channel reserve on the LDK-based node to zero, the LDK-node + could overdraw its total balance upon increasing the feerate of the + commitment transaction. If the malicious peer forwards HTLCs through the + LDK-based node, this could leave the LDK-based node with no valid commitment + transaction to broadcast to claim its part of the forwarded HTLC. The + counterparty would have to forfeit their reserve value (#3796). 
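The 0.1.4 note above turns on commitment-fee arithmetic: the funder pays the commitment-transaction fee and must also retain the reserve its counterparty set, so a zero reserve left nothing to absorb a feerate increase. A rough standalone sketch of that affordability check, using BOLT 3's non-anchor weights (illustrative only, not LDK's actual implementation):

```rust
/// BOLT 3 commitment transaction base weight for non-anchor channels.
const COMMITMENT_TX_BASE_WEIGHT: u64 = 724;
/// BOLT 3 weight added per non-dust HTLC output.
const COMMITMENT_TX_WEIGHT_PER_HTLC: u64 = 172;

/// Fee (in sats) of a commitment transaction at `feerate_per_kw`
/// (sats per 1000 weight units) with `num_nondust_htlcs` HTLC outputs.
fn commitment_fee_sat(feerate_per_kw: u64, num_nondust_htlcs: u64) -> u64 {
    let weight = COMMITMENT_TX_BASE_WEIGHT + COMMITMENT_TX_WEIGHT_PER_HTLC * num_nondust_htlcs;
    feerate_per_kw * weight / 1000
}

/// Whether the funder can pay the commitment fee while keeping its reserve.
/// With `reserve_sat == 0` there is no buffer, so a feerate bump can consume
/// the funder's entire balance -- the shape of the 0.1.4 issue.
fn funder_can_afford(
    funder_balance_sat: u64, reserve_sat: u64, feerate_per_kw: u64, num_nondust_htlcs: u64,
) -> bool {
    funder_balance_sat >= commitment_fee_sat(feerate_per_kw, num_nondust_htlcs) + reserve_sat
}
```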
+ + +# 0.1.3 - Apr 30, 2025 - "Routing Unicode in 2025" + +## Bug Fixes + * `Event::InvoiceReceived` is now only generated once for each `Bolt12Invoice` + received matching a pending outbound payment. Previously it would be provided + each time we received an invoice, which may happen many times if the sender + sends redundant messages to improve success rates (#3658). + * LDK's router now more fully saturates paths which are subject to HTLC + maximum restrictions after the first hop. In some rare cases this can result + in finding paths when it would previously spuriously decide it cannot find + enough diverse paths (#3707, #3755). + +## Security +0.1.3 fixes a denial-of-service vulnerability which could cause a crash of an +LDK-based node if an attacker has access to a valid `Bolt12Offer` which the +LDK-based node created. + * A malicious payer which requests a BOLT 12 Invoice from an LDK-based node + (via the `Bolt12InvoiceRequest` message) can cause a panic of the + LDK-based node due to the way `String::truncate` handles UTF-8 codepoints. + The codepath can only be reached once the received `Bolt12InvoiceRequest` + has been authenticated to be based on a valid `Bolt12Offer` which the same + LDK-based node issued (#3747, #3750). + + +# 0.1.2 - Apr 02, 2025 - "Foolishly Edgy Cases" + +## API Updates + * `lightning-invoice` is now re-exported as `lightning::bolt11_invoice` + (#3671). + +## Performance Improvements + * `rapid-gossip-sync` graph parsing is substantially faster, resolving a + regression in 0.1 (#3581). + * `NetworkGraph` loading is now substantially faster and does fewer + allocations, resulting in a 20% further improvement in `rapid-gossip-sync` + loading when initializing from scratch (#3581). + * `ChannelMonitor`s for closed channels are no longer always re-persisted + immediately after startup, reducing on-startup I/O burden (#3619).
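The `String::truncate` panic class noted in the 0.1.3 security section above stems from truncation landing in the middle of a multi-byte UTF-8 codepoint: Rust's `String::truncate` panics unless the new length falls on a character boundary. A minimal safe variant (illustrative; `truncate_to_char_boundary` is a hypothetical helper, not an LDK API) walks back to the nearest boundary first:

```rust
// `String::truncate` panics if `max_len` is not a UTF-8 char boundary.
// This variant backs up to the nearest boundary before truncating.
fn truncate_to_char_boundary(s: &mut String, mut max_len: usize) {
    if max_len >= s.len() {
        return;
    }
    // `is_char_boundary(0)` is always true, so this loop terminates.
    while !s.is_char_boundary(max_len) {
        max_len -= 1;
    }
    s.truncate(max_len);
}
```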
+ +## Bug Fixes + * BOLT 11 invoices longer than 1023 bytes (and up to 7089 bytes) now + properly parse (#3665). + * In some cases, when using synchronous persistence with higher latency than + the latency to communicate with peers, when receiving an MPP payment with + multiple parts received over the same channel, a channel could hang and not + make progress, eventually leading to a force-closure due to timed-out HTLCs. + This has now been fixed (#3680). + * Some rare cases where multi-hop BOLT 11 route hints or multiple redundant + blinded paths could have led to the router creating invalid `Route`s were + fixed (#3586). + * Corrected the decay logic in `ProbabilisticScorer`'s historical buckets + model. Note that by default historical buckets are only decayed if no new + datapoints have been added for a channel for two weeks (#3562). + * `{Channel,Onion}MessageHandler::peer_disconnected` will now be called if a + different message handler refused connection by returning an `Err` from its + `peer_connected` method (#3580). + * If the counterparty broadcasts a revoked state with pending HTLCs, those + will now be claimed with other outputs which we consider to not be + vulnerable to pinning attacks if they are not yet claimable by our + counterparty, potentially reducing our exposure to pinning attacks (#3564). + + +# 0.1.1 - Jan 28, 2025 - "Onchain Matters" + +## API Updates + * A `ChannelManager::send_payment_with_route` was (re-)added, with semantics + similar to `ChannelManager::send_payment` (rather than like the pre-0.1 + `send_payment_with_route`, #3534). + * `RawBolt11Invoice::{to,from}_raw` were added (#3549). + +## Bug Fixes + * HTLCs which were forwarded where the inbound edge times out within the next + three blocks will have the inbound HTLC failed backwards irrespective of the + status of the outbound HTLC.
This avoids the peer force-closing the channel + (and claiming the inbound edge HTLC on-chain) even if we have not yet managed + to claim the outbound edge on chain (#3556). + * On restart, replay of `Event::SpendableOutput`s could have caused + `OutputSweeper` to generate double-spending transactions, making it unable to + claim any delayed claims. This was resolved by retaining old claims for more + than four weeks after they are claimed on-chain to detect replays (#3559). + * Fixed the additional feerate we will pay each time we RBF on-chain claims to + match the Bitcoin Core policy (1 sat/vB) instead of 16 sats/vB (#3457). + * Fixed a case where a custom `Router` which returns an invalid `Route`, + provided to `ChannelManager`, can result in an outbound payment remaining + pending forever despite no HTLCs being pending (#3531). + +## Security +0.1.1 fixes a denial-of-service vulnerability allowing channel counterparties to +cause force-closure of unrelated channels. + * If a malicious channel counterparty force-closes a channel, broadcasting a + revoked commitment transaction while the channel at closure time included + multiple non-dust forwarded outbound HTLCs with identical payment hashes and + amounts, failure to fail the HTLCs backwards could cause the channels on + which we received the corresponding inbound HTLCs to be force-closed. Note + that we'll receive, at a minimum, the malicious counterparty's reserve value + when they broadcast the stale commitment (#3556). Thanks to Matt Morehouse for + reporting this issue. + + # 0.1 - Jan 15, 2025 - "Human Readable Version Numbers" The LDK 0.1 release represents an important milestone for the LDK project. While @@ -178,6 +562,31 @@ funds-lockup denial-of-service issue for anchor channels. * Various denial-of-service issues in the formerly-alpha `lightning-liquidity` crate have been addressed (#3436, #3493).
+In total, this release features 198 files changed, 29662 insertions, 11371 +deletions in 444 commits since 0.0.125 from 21 authors, in alphabetical order: + + * Alec Chen + * Andrei + * Arik Sosman + * Carla Kirk-Cohen + * Duncan Dean + * Elias Rohrer + * G8XSU + * Ian Slane + * Jeffrey Czyz + * Leo Nash + * Matt Corallo + * Matt Morehouse + * Matthew Rheaume + * Mirebella + * Valentine Wallace + * Vincenzo Palazzo + * Willem Van Lint + * elsirion + * olegkubrakov + * optout + * shaavan + # 0.0.125 - Oct 14, 2024 - "Delayed Beta Testing" diff --git a/fuzz/src/fs_store.rs b/fuzz/src/fs_store.rs index 0b6e2050bcf..821439f390e 100644 --- a/fuzz/src/fs_store.rs +++ b/fuzz/src/fs_store.rs @@ -78,7 +78,7 @@ async fn do_test_internal(data: &[u8], _out: Out) { Some(b) => b[0], None => break, }; - match v % 12 { + match v % 13 { // Sync write 0 => { let data_value = get_next_data_value(); @@ -96,7 +96,8 @@ async fn do_test_internal(data: &[u8], _out: Out) { }, // Sync remove 1 => { - KVStoreSync::remove(fs_store, primary_namespace, secondary_namespace, key).unwrap(); + KVStoreSync::remove(fs_store, primary_namespace, secondary_namespace, key, false) + .unwrap(); current_data = None; }, @@ -130,8 +131,10 @@ async fn do_test_internal(data: &[u8], _out: Out) { handles.push(handle); }, // Async remove - 10 => { - let fut = KVStore::remove(fs_store, primary_namespace, secondary_namespace, key); + 10 | 11 => { + let lazy = v == 10; + let fut = + KVStore::remove(fs_store, primary_namespace, secondary_namespace, key, lazy); // Already set the current_data, even though writing hasn't finished yet. This supports the call-time // ordering semantics. @@ -141,7 +144,7 @@ async fn do_test_internal(data: &[u8], _out: Out) { handles.push(handle); }, // Join tasks. - 11 => { + 12 => { for handle in handles.drain(..) 
{ let _ = handle.await.unwrap(); } diff --git a/lightning-background-processor/Cargo.toml b/lightning-background-processor/Cargo.toml index 7cece743a32..5744c87d7e9 100644 --- a/lightning-background-processor/Cargo.toml +++ b/lightning-background-processor/Cargo.toml @@ -4,6 +4,7 @@ version = "0.2.0-beta1" authors = ["Valentine Wallace "] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning" +readme = "../README.md" description = """ Utilities to perform required background tasks for Rust Lightning. """ diff --git a/lightning-background-processor/src/lib.rs b/lightning-background-processor/src/lib.rs index 47a731fa30e..19333c5823a 100644 --- a/lightning-background-processor/src/lib.rs +++ b/lightning-background-processor/src/lib.rs @@ -748,14 +748,14 @@ use futures_util::{dummy_waker, Joiner, OptionalSelector, Selector, SelectorOutp /// # impl lightning::util::persist::KVStoreSync for StoreSync { /// # fn read(&self, primary_namespace: &str, secondary_namespace: &str, key: &str) -> io::Result> { Ok(Vec::new()) } /// # fn write(&self, primary_namespace: &str, secondary_namespace: &str, key: &str, buf: Vec) -> io::Result<()> { Ok(()) } -/// # fn remove(&self, primary_namespace: &str, secondary_namespace: &str, key: &str) -> io::Result<()> { Ok(()) } +/// # fn remove(&self, primary_namespace: &str, secondary_namespace: &str, key: &str, lazy: bool) -> io::Result<()> { Ok(()) } /// # fn list(&self, primary_namespace: &str, secondary_namespace: &str) -> io::Result> { Ok(Vec::new()) } /// # } /// # struct Store {} /// # impl lightning::util::persist::KVStore for Store { /// # fn read(&self, primary_namespace: &str, secondary_namespace: &str, key: &str) -> Pin, io::Error>> + 'static + Send>> { todo!() } /// # fn write(&self, primary_namespace: &str, secondary_namespace: &str, key: &str, buf: Vec) -> Pin> + 'static + Send>> { todo!() } -/// # fn remove(&self, primary_namespace: &str, secondary_namespace: &str, key: &str) -> Pin> + 
'static + Send>> { todo!() } +/// # fn remove(&self, primary_namespace: &str, secondary_namespace: &str, key: &str, lazy: bool) -> Pin> + 'static + Send>> { todo!() } /// # fn list(&self, primary_namespace: &str, secondary_namespace: &str) -> Pin, io::Error>> + 'static + Send>> { todo!() } /// # } /// # use core::time::Duration; @@ -2144,9 +2144,9 @@ mod tests { } fn remove( - &self, primary_namespace: &str, secondary_namespace: &str, key: &str, + &self, primary_namespace: &str, secondary_namespace: &str, key: &str, lazy: bool, ) -> lightning::io::Result<()> { - self.kv_store.remove(primary_namespace, secondary_namespace, key) + self.kv_store.remove(primary_namespace, secondary_namespace, key, lazy) } fn list( diff --git a/lightning-block-sync/Cargo.toml b/lightning-block-sync/Cargo.toml index 51ff2502489..d9acacf00a8 100644 --- a/lightning-block-sync/Cargo.toml +++ b/lightning-block-sync/Cargo.toml @@ -4,6 +4,7 @@ version = "0.2.0-beta1" authors = ["Jeffrey Czyz", "Matt Corallo"] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning" +readme = "../README.md" description = """ Utilities to fetch the chain data from a block source and feed them into Rust Lightning. """ diff --git a/lightning-custom-message/Cargo.toml b/lightning-custom-message/Cargo.toml index 1f02d7bc732..73186b30ed2 100644 --- a/lightning-custom-message/Cargo.toml +++ b/lightning-custom-message/Cargo.toml @@ -4,6 +4,7 @@ version = "0.2.0-beta1" authors = ["Jeffrey Czyz"] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning" +readme = "../README.md" description = """ Utilities for supporting custom peer-to-peer messages in LDK. 
""" diff --git a/lightning-dns-resolver/Cargo.toml b/lightning-dns-resolver/Cargo.toml index 9eda19a22fc..6779b16ce7c 100644 --- a/lightning-dns-resolver/Cargo.toml +++ b/lightning-dns-resolver/Cargo.toml @@ -4,6 +4,7 @@ version = "0.3.0-beta1" authors = ["Matt Corallo"] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning/" +readme = "../README.md" description = "A crate which implements DNSSEC resolution for lightning clients over bLIP 32 using `tokio` and the `dnssec-prover` crate." edition = "2021" diff --git a/lightning-liquidity/src/lsps2/service.rs b/lightning-liquidity/src/lsps2/service.rs index 9bb3ded58c2..494a4f35cab 100644 --- a/lightning-liquidity/src/lsps2/service.rs +++ b/lightning-liquidity/src/lsps2/service.rs @@ -1834,6 +1834,7 @@ where LIQUIDITY_MANAGER_PERSISTENCE_PRIMARY_NAMESPACE, LSPS2_SERVICE_PERSISTENCE_SECONDARY_NAMESPACE, &key, + true, )); } else { // If the peer got new state, force a re-persist of the current state. diff --git a/lightning-liquidity/src/lsps5/event.rs b/lightning-liquidity/src/lsps5/event.rs index a9c1052250a..c12273808ef 100644 --- a/lightning-liquidity/src/lsps5/event.rs +++ b/lightning-liquidity/src/lsps5/event.rs @@ -15,7 +15,6 @@ use alloc::vec::Vec; use bitcoin::secp256k1::PublicKey; use lightning::impl_writeable_tlv_based_enum; -use lightning::util::hash_tables::HashMap; use super::msgs::LSPS5AppName; use super::msgs::LSPS5Error; @@ -70,7 +69,7 @@ pub enum LSPS5ServiceEvent { /// - `"x-lsps5-timestamp"`: with the timestamp in RFC3339 format (`"YYYY-MM-DDThh:mm:ss.uuuZ"`). /// - `"x-lsps5-signature"`: with the signature of the notification payload, signed using the LSP's node ID. /// Other custom headers may also be included as needed. 
- headers: HashMap, + headers: Vec<(String, String)>, }, } diff --git a/lightning-liquidity/src/lsps5/msgs.rs b/lightning-liquidity/src/lsps5/msgs.rs index 341dfcddf00..e457c299bfe 100644 --- a/lightning-liquidity/src/lsps5/msgs.rs +++ b/lightning-liquidity/src/lsps5/msgs.rs @@ -541,34 +541,29 @@ pub struct WebhookNotification { } impl WebhookNotification { - /// Create a new webhook notification. - pub fn new(method: WebhookNotificationMethod) -> Self { - Self { method } - } - /// Create a webhook_registered notification. pub fn webhook_registered() -> Self { - Self::new(WebhookNotificationMethod::LSPS5WebhookRegistered) + Self { method: WebhookNotificationMethod::LSPS5WebhookRegistered } } /// Create a payment_incoming notification. pub fn payment_incoming() -> Self { - Self::new(WebhookNotificationMethod::LSPS5PaymentIncoming) + Self { method: WebhookNotificationMethod::LSPS5PaymentIncoming } } /// Create an expiry_soon notification. pub fn expiry_soon(timeout: u32) -> Self { - Self::new(WebhookNotificationMethod::LSPS5ExpirySoon { timeout }) + Self { method: WebhookNotificationMethod::LSPS5ExpirySoon { timeout } } } /// Create a liquidity_management_request notification. pub fn liquidity_management_request() -> Self { - Self::new(WebhookNotificationMethod::LSPS5LiquidityManagementRequest) + Self { method: WebhookNotificationMethod::LSPS5LiquidityManagementRequest } } /// Create an onion_message_incoming notification. 
 	pub fn onion_message_incoming() -> Self {
-		Self::new(WebhookNotificationMethod::LSPS5OnionMessageIncoming)
+		Self { method: WebhookNotificationMethod::LSPS5OnionMessageIncoming }
 	}
 }
diff --git a/lightning-liquidity/src/lsps5/service.rs b/lightning-liquidity/src/lsps5/service.rs
index 1111c682fbc..8b1f0ec70cb 100644
--- a/lightning-liquidity/src/lsps5/service.rs
+++ b/lightning-liquidity/src/lsps5/service.rs
@@ -297,6 +297,7 @@ where
 				LIQUIDITY_MANAGER_PERSISTENCE_PRIMARY_NAMESPACE,
 				LSPS5_SERVICE_PERSISTENCE_SECONDARY_NAMESPACE,
 				&key,
+				true,
 			));
 		} else {
 			// If the peer was re-added, force a re-persist of the current state.
@@ -629,12 +630,12 @@ where
 		let signature_hex = self.sign_notification(&notification, &timestamp)?;
 
-		let mut headers: HashMap<String, String> = [("Content-Type", "application/json")]
+		let mut headers: Vec<(String, String)> = [("Content-Type", "application/json")]
 			.into_iter()
 			.map(|(k, v)| (k.to_string(), v.to_string()))
 			.collect();
-		headers.insert("x-lsps5-timestamp".into(), timestamp.to_rfc3339());
-		headers.insert("x-lsps5-signature".into(), signature_hex);
+		headers.push(("x-lsps5-timestamp".into(), timestamp.to_rfc3339()));
+		headers.push(("x-lsps5-signature".into(), signature_hex));
 
 		event_queue_notifier.enqueue(LSPS5ServiceEvent::SendWebhookNotification {
 			counterparty_node_id,
diff --git a/lightning-liquidity/tests/lsps5_integration_tests.rs b/lightning-liquidity/tests/lsps5_integration_tests.rs
index 41af2e85eed..80707a60774 100644
--- a/lightning-liquidity/tests/lsps5_integration_tests.rs
+++ b/lightning-liquidity/tests/lsps5_integration_tests.rs
@@ -17,7 +17,7 @@
 use lightning::ln::functional_test_utils::{
 };
 use lightning::ln::msgs::Init;
 use lightning::ln::peer_handler::CustomMessageHandler;
-use lightning::util::hash_tables::{HashMap, HashSet};
+use lightning::util::hash_tables::HashSet;
 use lightning::util::test_utils::TestStore;
 use lightning_liquidity::events::LiquidityEvent;
 use lightning_liquidity::lsps0::ser::LSPSDateTime;
@@ -288,15 +288,20 @@ impl TimeProvider for MockTimeProvider {
 	}
 }
 
-fn extract_ts_sig(headers: &HashMap<String, String>) -> (LSPSDateTime, String) {
+fn extract_ts_sig(headers: &Vec<(String, String)>) -> (LSPSDateTime, String) {
 	let timestamp = headers
-		.get("x-lsps5-timestamp")
+		.iter()
+		.find_map(|(key, value)| (key == "x-lsps5-timestamp").then(|| value))
 		.expect("missing x-lsps5-timestamp header")
 		.parse::<LSPSDateTime>()
 		.expect("failed to parse x-lsps5-timestamp header");
-	let signature =
-		headers.get("x-lsps5-signature").expect("missing x-lsps5-signature header").to_owned();
+	let signature = headers
+		.iter()
+		.find(|(key, _)| key == "x-lsps5-signature")
+		.expect("missing x-lsps5-signature header")
+		.1
+		.clone();
 	(timestamp, signature)
 }
diff --git a/lightning-net-tokio/Cargo.toml b/lightning-net-tokio/Cargo.toml
index 1e6eb7c9552..f41cf889274 100644
--- a/lightning-net-tokio/Cargo.toml
+++ b/lightning-net-tokio/Cargo.toml
@@ -4,6 +4,7 @@ version = "0.2.0-beta1"
 authors = ["Matt Corallo"]
 license = "MIT OR Apache-2.0"
 repository = "https://github.com/lightningdevkit/rust-lightning/"
+readme = "../README.md"
 description = """
Implementation of the rust-lightning network stack using Tokio.
For Rust-Lightning clients which wish to make direct connections to Lightning P2P nodes, this is a simple alternative to implementing the required network stack, especially for those already using Tokio.
diff --git a/lightning-persister/Cargo.toml b/lightning-persister/Cargo.toml
index 50249f29504..897a70a22fe 100644
--- a/lightning-persister/Cargo.toml
+++ b/lightning-persister/Cargo.toml
@@ -4,6 +4,7 @@ version = "0.2.0-beta1"
 authors = ["Valentine Wallace", "Matt Corallo"]
 license = "MIT OR Apache-2.0"
 repository = "https://github.com/lightningdevkit/rust-lightning"
+readme = "../README.md"
 description = """
Utilities for LDK data persistence and retrieval.
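The LSPS5 headers above moved from a `HashMap` to a `Vec<(String, String)>`, which preserves insertion order but makes lookups linear, as in the `find`/`find_map` pattern in `extract_ts_sig`. A minimal standalone sketch of that pattern (plain std types, no LDK imports; the helper names are hypothetical):

```rust
// Hypothetical helpers mirroring the ordered-header handling above.
// Lookup is a linear scan over the (name, value) pairs.
fn get_header<'a>(headers: &'a [(String, String)], name: &str) -> Option<&'a str> {
    headers.iter().find_map(|(k, v)| (k == name).then(|| v.as_str()))
}

fn build_headers(timestamp: &str, signature: &str) -> Vec<(String, String)> {
    // Unlike a HashMap, a Vec keeps the order the headers were added in.
    let mut headers: Vec<(String, String)> =
        vec![("Content-Type".to_string(), "application/json".to_string())];
    headers.push(("x-lsps5-timestamp".to_string(), timestamp.to_string()));
    headers.push(("x-lsps5-signature".to_string(), signature.to_string()));
    headers
}
```

The trade-off is deliberate for a handful of headers: stable ordering (useful for signing and for HTTP serialization) matters more than O(1) lookup.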
"""
diff --git a/lightning-persister/src/fs_store.rs b/lightning-persister/src/fs_store.rs
index 7055f2aa9f9..9b15398d4d1 100644
--- a/lightning-persister/src/fs_store.rs
+++ b/lightning-persister/src/fs_store.rs
@@ -125,7 +125,7 @@ impl KVStoreSync for FilesystemStore {
 	}
 
 	fn remove(
-		&self, primary_namespace: &str, secondary_namespace: &str, key: &str,
+		&self, primary_namespace: &str, secondary_namespace: &str, key: &str, lazy: bool,
 	) -> Result<(), lightning::io::Error> {
 		let path = self.inner.get_checked_dest_file_path(
 			primary_namespace,
@@ -134,7 +134,7 @@ impl KVStoreSync for FilesystemStore {
 			"remove",
 		)?;
 		let (inner_lock_ref, version) = self.get_new_version_and_lock_ref(path.clone());
-		self.inner.remove_version(inner_lock_ref, path, version)
+		self.inner.remove_version(inner_lock_ref, path, lazy, version)
 	}
 
 	fn list(
@@ -334,76 +334,81 @@ impl FilesystemStoreInner {
 	}
 
 	fn remove_version(
-		&self, inner_lock_ref: Arc>, dest_file_path: PathBuf, version: u64,
+		&self, inner_lock_ref: Arc>, dest_file_path: PathBuf, lazy: bool, version: u64,
 	) -> lightning::io::Result<()> {
 		self.execute_locked_write(inner_lock_ref, dest_file_path.clone(), version, || {
 			if !dest_file_path.is_file() {
 				return Ok(());
 			}
 
-			// We try our best to persist the updated metadata to ensure
-			// atomicity of this call.
-			#[cfg(not(target_os = "windows"))]
-			{
+			if lazy {
+				// If we're lazy we just call remove and be done with it.
 				fs::remove_file(&dest_file_path)?;
+			} else {
+				// If we're not lazy we try our best to persist the updated metadata to ensure
+				// atomicity of this call.
+				#[cfg(not(target_os = "windows"))]
+				{
+					fs::remove_file(&dest_file_path)?;
 
-			let parent_directory = dest_file_path.parent().ok_or_else(|| {
-				let msg = format!(
-					"Could not retrieve parent directory of {}.",
-					dest_file_path.display()
-				);
-				std::io::Error::new(std::io::ErrorKind::InvalidInput, msg)
-			})?;
-			let dir_file = fs::OpenOptions::new().read(true).open(parent_directory)?;
-			// The above call to `fs::remove_file` corresponds to POSIX `unlink`, whose changes
-			// to the inode might get cached (and hence possibly lost on crash), depending on
-			// the target platform and file system.
-			//
-			// In order to assert we permanently removed the file in question we therefore
-			// call `fsync` on the parent directory on platforms that support it.
-			dir_file.sync_all()?;
-		}
+					let parent_directory = dest_file_path.parent().ok_or_else(|| {
+						let msg = format!(
+							"Could not retrieve parent directory of {}.",
+							dest_file_path.display()
+						);
+						std::io::Error::new(std::io::ErrorKind::InvalidInput, msg)
+					})?;
+					let dir_file = fs::OpenOptions::new().read(true).open(parent_directory)?;
+					// The above call to `fs::remove_file` corresponds to POSIX `unlink`, whose changes
+					// to the inode might get cached (and hence possibly lost on crash), depending on
+					// the target platform and file system.
+					//
+					// In order to assert we permanently removed the file in question we therefore
+					// call `fsync` on the parent directory on platforms that support it.
+					dir_file.sync_all()?;
+				}
 
-		#[cfg(target_os = "windows")]
-		{
-			// Since Windows `DeleteFile` API is not persisted until the last open file handle
-			// is dropped, and there seemingly is no reliable way to flush the directory
-			// metadata, we here fall back to use a 'recycling bin' model, i.e., first move the
-			// file to be deleted to a temporary trash file and remove the latter file
-			// afterwards.
-			//
-			// This should be marginally better, as, according to the documentation,
-			// `MoveFileExW` APIs should offer stronger persistence guarantees,
-			// at least if `MOVEFILE_WRITE_THROUGH`/`MOVEFILE_REPLACE_EXISTING` is set.
-			// However, all this is partially based on assumptions and local experiments, as
-			// Windows API is horribly underdocumented.
-			let mut trash_file_path = dest_file_path.clone();
-			let trash_file_ext =
-				format!("{}.trash", self.tmp_file_counter.fetch_add(1, Ordering::AcqRel));
-			trash_file_path.set_extension(trash_file_ext);
-
-			call!(unsafe {
-				windows_sys::Win32::Storage::FileSystem::MoveFileExW(
-					path_to_windows_str(&dest_file_path).as_ptr(),
-					path_to_windows_str(&trash_file_path).as_ptr(),
-					windows_sys::Win32::Storage::FileSystem::MOVEFILE_WRITE_THROUGH
+				#[cfg(target_os = "windows")]
+				{
+					// Since Windows `DeleteFile` API is not persisted until the last open file handle
+					// is dropped, and there seemingly is no reliable way to flush the directory
+					// metadata, we here fall back to use a 'recycling bin' model, i.e., first move the
+					// file to be deleted to a temporary trash file and remove the latter file
+					// afterwards.
+					//
+					// This should be marginally better, as, according to the documentation,
+					// `MoveFileExW` APIs should offer stronger persistence guarantees,
+					// at least if `MOVEFILE_WRITE_THROUGH`/`MOVEFILE_REPLACE_EXISTING` is set.
+					// However, all this is partially based on assumptions and local experiments, as
+					// Windows API is horribly underdocumented.
+					let mut trash_file_path = dest_file_path.clone();
+					let trash_file_ext =
+						format!("{}.trash", self.tmp_file_counter.fetch_add(1, Ordering::AcqRel));
+					trash_file_path.set_extension(trash_file_ext);
+
+					call!(unsafe {
+						windows_sys::Win32::Storage::FileSystem::MoveFileExW(
+							path_to_windows_str(&dest_file_path).as_ptr(),
+							path_to_windows_str(&trash_file_path).as_ptr(),
+							windows_sys::Win32::Storage::FileSystem::MOVEFILE_WRITE_THROUGH
-						| windows_sys::Win32::Storage::FileSystem::MOVEFILE_REPLACE_EXISTING,
-				)
-			})?;
+								| windows_sys::Win32::Storage::FileSystem::MOVEFILE_REPLACE_EXISTING,
+						)
+					})?;
+
+					{
+						// We fsync the trash file in hopes this will also flush the original's file
+						// metadata to disk.
+						let trash_file = fs::OpenOptions::new()
+							.read(true)
+							.write(true)
+							.open(&trash_file_path.clone())?;
+						trash_file.sync_all()?;
+					}
-			{
-				// We fsync the trash file in hopes this will also flush the original's file
-				// metadata to disk.
-				let trash_file = fs::OpenOptions::new()
-					.read(true)
-					.write(true)
-					.open(&trash_file_path.clone())?;
-				trash_file.sync_all()?;
-			}
+
+					// We're fine if this remove would fail as the trash file will be cleaned up in
+					// list eventually.
+					fs::remove_file(trash_file_path).ok();
+				}
-
-			// We're fine if this remove would fail as the trash file will be cleaned up in
-			// list eventually.
-			fs::remove_file(trash_file_path).ok();
 		}
 
 		Ok(())
@@ -503,7 +508,7 @@ impl KVStore for FilesystemStore {
 	}
 
 	fn remove(
-		&self, primary_namespace: &str, secondary_namespace: &str, key: &str,
+		&self, primary_namespace: &str, secondary_namespace: &str, key: &str, lazy: bool,
 	) -> Pin<Box<dyn Future<Output = Result<(), lightning::io::Error>> + 'static + Send>> {
 		let this = Arc::clone(&self.inner);
 		let path = match this.get_checked_dest_file_path(
@@ -518,11 +523,11 @@ impl KVStore for FilesystemStore {
 		let (inner_lock_ref, version) = self.get_new_version_and_lock_ref(path.clone());
 
 		Box::pin(async move {
-			tokio::task::spawn_blocking(move || this.remove_version(inner_lock_ref, path, version))
-				.await
-				.unwrap_or_else(|e| {
-					Err(lightning::io::Error::new(lightning::io::ErrorKind::Other, e))
-				})
+			tokio::task::spawn_blocking(move || {
+				this.remove_version(inner_lock_ref, path, lazy, version)
+			})
+			.await
+			.unwrap_or_else(|e| Err(lightning::io::Error::new(lightning::io::ErrorKind::Other, e)))
 		})
 	}
@@ -767,7 +772,7 @@ mod tests {
 		let fut1 = async_fs_store.write(primary_namespace, secondary_namespace, key, data1);
 		assert_eq!(fs_store.state_size(), 1);
 
-		let fut2 = async_fs_store.remove(primary_namespace, secondary_namespace, key);
+		let fut2 = async_fs_store.remove(primary_namespace, secondary_namespace, key, false);
 		assert_eq!(fs_store.state_size(), 1);
 
 		let fut3 = async_fs_store.write(primary_namespace, secondary_namespace, key, data2.clone());
@@ -794,7 +799,7 @@ mod tests {
 		assert_eq!(data2, &*read_data);
 
 		// Test remove.
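The non-lazy removal path above relies on a POSIX detail: `fs::remove_file` is an `unlink`, and the directory-entry change it makes can be lost on crash unless the parent directory is fsynced. A standalone, Unix-oriented sketch of that durable-delete pattern (the `durable_remove` name is hypothetical, not LDK's actual helper):

```rust
use std::fs;
use std::io;
use std::path::Path;

// Sketch of the POSIX durable-delete pattern: unlink the file, then fsync
// its parent directory so the removal of the directory entry itself
// survives a crash. `File::sync_all` maps to fsync(2).
fn durable_remove(path: &Path) -> io::Result<()> {
    if !path.is_file() {
        // Nothing to do; removal is idempotent.
        return Ok(());
    }
    fs::remove_file(path)?;
    let parent = path.parent().ok_or_else(|| {
        io::Error::new(io::ErrorKind::InvalidInput, "path has no parent directory")
    })?;
    // Opening the directory read-only is sufficient to fsync it on most
    // Unix platforms; this does not work on Windows, hence the separate
    // trash-file strategy in the diff above.
    let dir = fs::OpenOptions::new().read(true).open(parent)?;
    dir.sync_all()
}
```

This is why the Windows branch cannot share the code: directories cannot be fsynced the same way there, so the implementation falls back to a rename-then-delete "recycling bin" model.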
-		async_fs_store.remove(primary_namespace, secondary_namespace, key).await.unwrap();
+		async_fs_store.remove(primary_namespace, secondary_namespace, key, false).await.unwrap();
 
 		let listed_keys =
 			async_fs_store.list(primary_namespace, secondary_namespace).await.unwrap();
diff --git a/lightning-persister/src/test_utils.rs b/lightning-persister/src/test_utils.rs
index 0ef0242c419..636967a6937 100644
--- a/lightning-persister/src/test_utils.rs
+++ b/lightning-persister/src/test_utils.rs
@@ -40,7 +40,7 @@ pub(crate) fn do_read_write_remove_list_persist(
 	let read_data = kv_store.read(primary_namespace, secondary_namespace, key).unwrap();
 	assert_eq!(data, &*read_data);
 
-	kv_store.remove(primary_namespace, secondary_namespace, key).unwrap();
+	kv_store.remove(primary_namespace, secondary_namespace, key, false).unwrap();
 
 	let listed_keys = kv_store.list(primary_namespace, secondary_namespace).unwrap();
 	assert_eq!(listed_keys.len(), 0);
@@ -57,7 +57,7 @@ pub(crate) fn do_read_write_remove_list_persist(
 	let read_data = kv_store.read(&max_chars, &max_chars, &max_chars).unwrap();
 	assert_eq!(data, &*read_data);
 
-	kv_store.remove(&max_chars, &max_chars, &max_chars).unwrap();
+	kv_store.remove(&max_chars, &max_chars, &max_chars, false).unwrap();
 
 	let listed_keys = kv_store.list(&max_chars, &max_chars).unwrap();
 	assert_eq!(listed_keys.len(), 0);
diff --git a/lightning-transaction-sync/Cargo.toml b/lightning-transaction-sync/Cargo.toml
index 6dd34e1f22f..e1d17452b49 100644
--- a/lightning-transaction-sync/Cargo.toml
+++ b/lightning-transaction-sync/Cargo.toml
@@ -4,6 +4,7 @@ version = "0.2.0-beta1"
 authors = ["Elias Rohrer"]
 license = "MIT OR Apache-2.0"
 repository = "https://github.com/lightningdevkit/rust-lightning"
+readme = "../README.md"
 description = """
Utilities for syncing LDK via the transaction-based `Confirm` interface.
"""
diff --git a/lightning-types/Cargo.toml b/lightning-types/Cargo.toml
index 73fc7ff62f4..c81e40071bb 100644
--- a/lightning-types/Cargo.toml
+++ b/lightning-types/Cargo.toml
@@ -4,6 +4,7 @@ version = "0.3.0-beta1"
 authors = ["Matt Corallo"]
 license = "MIT OR Apache-2.0"
 repository = "https://github.com/lightningdevkit/rust-lightning/"
+readme = "../README.md"
 description = """
Basic types which are used in the lightning network
"""
diff --git a/lightning/Cargo.toml b/lightning/Cargo.toml
index adb76a0a453..71d284adde4 100644
--- a/lightning/Cargo.toml
+++ b/lightning/Cargo.toml
@@ -4,6 +4,7 @@ version = "0.2.0-beta1"
 authors = ["Matt Corallo"]
 license = "MIT OR Apache-2.0"
 repository = "https://github.com/lightningdevkit/rust-lightning/"
+readme = "../README.md"
 description = """
A Complete Bitcoin Lightning Library in Rust.
Handles the core functionality of the Lightning Network, allowing clients to implement custom wallet, chain interactions, storage and network logic without enforcing a specific runtime.
diff --git a/lightning/src/blinded_path/payment.rs b/lightning/src/blinded_path/payment.rs
index 4ae10f75961..37d7a1dba7d 100644
--- a/lightning/src/blinded_path/payment.rs
+++ b/lightning/src/blinded_path/payment.rs
@@ -869,7 +869,7 @@ mod tests {
 		// Taken from the spec example for aggregating blinded payment info. See
 		// https://github.com/lightning/bolts/blob/master/proposals/route-blinding.md#blinded-payments
 		let dummy_pk = PublicKey::from_slice(&[2; 33]).unwrap();
-		let intermediate_nodes = vec![
+		let intermediate_nodes = [
 			PaymentForwardNode {
 				node_id: dummy_pk,
 				tlvs: ForwardTlvs {
@@ -944,7 +944,7 @@ mod tests {
 		// If no hops charge fees, the htlc_minimum_msat should just be the maximum htlc_minimum_msat
 		// along the path.
 		let dummy_pk = PublicKey::from_slice(&[2; 33]).unwrap();
-		let intermediate_nodes = vec![
+		let intermediate_nodes = [
 			PaymentForwardNode {
 				node_id: dummy_pk,
 				tlvs: ForwardTlvs {
@@ -1003,7 +1003,7 @@ mod tests {
 		// Create a path with varying fees and htlc_mins, and make sure htlc_minimum_msat ends up as the
 		// max (htlc_min - following_fees) along the path.
 		let dummy_pk = PublicKey::from_slice(&[2; 33]).unwrap();
-		let intermediate_nodes = vec![
+		let intermediate_nodes = [
 			PaymentForwardNode {
 				node_id: dummy_pk,
 				tlvs: ForwardTlvs {
@@ -1072,7 +1072,7 @@ mod tests {
 		// Create a path with varying fees and `htlc_maximum_msat`s, and make sure the aggregated max
 		// htlc ends up as the min (htlc_max - following_fees) along the path.
 		let dummy_pk = PublicKey::from_slice(&[2; 33]).unwrap();
-		let intermediate_nodes = vec![
+		let intermediate_nodes = [
 			PaymentForwardNode {
 				node_id: dummy_pk,
 				tlvs: ForwardTlvs {
diff --git a/lightning/src/crypto/chacha20.rs b/lightning/src/crypto/chacha20.rs
index cbe3e4e1062..5b0c16c933f 100644
--- a/lightning/src/crypto/chacha20.rs
+++ b/lightning/src/crypto/chacha20.rs
@@ -354,7 +354,7 @@ mod test {
 			keystream: Vec<u8>,
 		}
 		// taken from http://tools.ietf.org/html/draft-agl-tls-chacha20poly1305-04
-		let test_vectors = vec![
+		let test_vectors = [
 			TestVector {
 				key: [
 					0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
@@ -464,7 +464,7 @@ mod test {
 			keystream: Vec<u8>,
 		}
 		// taken from http://tools.ietf.org/html/draft-agl-tls-chacha20poly1305-04
-		let test_vectors = vec![
+		let test_vectors = [
 			TestVector {
 				key: [
 					0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
diff --git a/lightning/src/events/mod.rs b/lightning/src/events/mod.rs
index 1b9850c442e..9f7e4c5620d 100644
--- a/lightning/src/events/mod.rs
+++ b/lightning/src/events/mod.rs
@@ -1690,6 +1690,9 @@ pub enum Event {
 	/// The offline peer should be awoken if possible on receipt of this event, such as via the LSPS5
 	/// protocol.
 	///
+	/// Once they connect, you should handle the generated [`Event::OnionMessagePeerConnected`] and
+	/// provide the stored message.
+	///
 	/// # Failure Behavior and Persistence
 	/// This event will eventually be replayed after failures-to-handle (i.e., the event handler
 	/// returning `Err(ReplayEvent ())`), but won't be persisted across restarts.
@@ -1701,16 +1704,18 @@ pub enum Event {
 		/// The onion message intended to be forwarded to `peer_node_id`.
 		message: msgs::OnionMessage,
 	},
-	/// Indicates that an onion message supporting peer has come online and it may
-	/// be time to forward any onion messages that were previously intercepted for
-	/// them. This event will only be generated if the `OnionMessenger` was
-	/// initialized with
+	/// Indicates that an onion message supporting peer has come online and any messages previously
+	/// stored for them (from [`Event::OnionMessageIntercepted`]s) should be forwarded to them by
+	/// calling [`OnionMessenger::forward_onion_message`].
+	///
+	/// This event will only be generated if the `OnionMessenger` was initialized with
 	/// [`OnionMessenger::new_with_offline_peer_interception`], see its docs.
 	///
 	/// # Failure Behavior and Persistence
 	/// This event will eventually be replayed after failures-to-handle (i.e., the event handler
 	/// returning `Err(ReplayEvent ())`), but won't be persisted across restarts.
 	///
+	/// [`OnionMessenger::forward_onion_message`]: crate::onion_message::messenger::OnionMessenger::forward_onion_message
 	/// [`OnionMessenger::new_with_offline_peer_interception`]: crate::onion_message::messenger::OnionMessenger::new_with_offline_peer_interception
 	OnionMessagePeerConnected {
 		/// The node id of the peer we just connected to, who advertises support for
diff --git a/lightning/src/ln/channelmanager.rs b/lightning/src/ln/channelmanager.rs
index 4d96f8ad7da..632d897043e 100644
--- a/lightning/src/ln/channelmanager.rs
+++ b/lightning/src/ln/channelmanager.rs
@@ -4683,6 +4683,9 @@ where
 	/// emitted. At this point, any inputs contributed to the splice can only be re-spent if an
 	/// [`Event::DiscardFunding`] is seen.
 	///
+	/// After initial signatures have been exchanged, [`Event::FundingTransactionReadyForSigning`]
+	/// will be generated and [`ChannelManager::funding_transaction_signed`] should be called.
+	///
 	/// If any failures occur while negotiating the funding transaction, an [`Event::SpliceFailed`]
 	/// will be emitted. Any contributed inputs no longer used will be included here and thus can
 	/// be re-spent.
diff --git a/lightning/src/ln/functional_tests.rs b/lightning/src/ln/functional_tests.rs
index 876324737a6..c161a9664c0 100644
--- a/lightning/src/ln/functional_tests.rs
+++ b/lightning/src/ln/functional_tests.rs
@@ -999,7 +999,7 @@ fn do_test_forming_justice_tx_from_monitor_updates(broadcast_initial_commitment:
 	let chanmon_cfgs = create_chanmon_cfgs(2);
 	let destination_script0 = chanmon_cfgs[0].keys_manager.get_destination_script([0; 32]).unwrap();
 	let destination_script1 = chanmon_cfgs[1].keys_manager.get_destination_script([0; 32]).unwrap();
-	let persisters = vec![
+	let persisters = [
 		WatchtowerPersister::new(destination_script0),
 		WatchtowerPersister::new(destination_script1),
 	];
diff --git a/lightning/src/ln/interactivetxs.rs b/lightning/src/ln/interactivetxs.rs
index 43feccd53d3..d1a985f30fe 100644
--- a/lightning/src/ln/interactivetxs.rs
+++ b/lightning/src/ln/interactivetxs.rs
@@ -1492,12 +1492,6 @@ enum StateMachine {
 	NegotiationAborted(NegotiationAborted),
 }
 
-impl Default for StateMachine {
-	fn default() -> Self {
-		Self::Indeterminate
-	}
-}
-
 // The `StateMachine` internally executes the actual transition between two states and keeps
 // track of the current state. This macro defines _how_ those state transitions happen to
 // update the internal state.
@@ -1930,7 +1924,8 @@ impl InteractiveTxMessageSend {
 // This macro executes a state machine transition based on a provided action.
 macro_rules! do_state_transition {
 	($self: ident, $transition: ident, $msg: expr) => {{
-		let state_machine = core::mem::take(&mut $self.state_machine);
+		let mut state_machine = StateMachine::Indeterminate;
+		core::mem::swap(&mut state_machine, &mut $self.state_machine);
 		$self.state_machine = state_machine.$transition($msg);
 		match &$self.state_machine {
 			StateMachine::NegotiationAborted(state) => Err(state.0.clone()),
diff --git a/lightning/src/offers/invoice.rs b/lightning/src/offers/invoice.rs
index 9751f52b046..b4ae407c4c0 100644
--- a/lightning/src/offers/invoice.rs
+++ b/lightning/src/offers/invoice.rs
@@ -3012,7 +3012,7 @@ mod tests {
 		let secp_ctx = Secp256k1::new();
 		let payment_id = PaymentId([1; 32]);
 
-		let paths = vec![
+		let paths = [
 			BlindedMessagePath::from_blinded_path(
 				pubkey(40),
 				pubkey(41),
diff --git a/lightning/src/offers/refund.rs b/lightning/src/offers/refund.rs
index 87d7a845b53..dd2c3e2a92e 100644
--- a/lightning/src/offers/refund.rs
+++ b/lightning/src/offers/refund.rs
@@ -1587,7 +1587,7 @@ mod tests {
 	#[test]
 	fn parses_refund_with_optional_fields() {
 		let past_expiry = Duration::from_secs(0);
-		let paths = vec![
+		let paths = [
 			BlindedMessagePath::from_blinded_path(
 				pubkey(40),
 				pubkey(41),
diff --git a/lightning/src/routing/router.rs b/lightning/src/routing/router.rs
index e3443b5e45a..77396c783e3 100644
--- a/lightning/src/routing/router.rs
+++ b/lightning/src/routing/router.rs
@@ -4054,7 +4054,7 @@ mod tests {
 		// Simple route to 2 via 1
 
-		let our_chans = vec![get_channel_details(Some(2), our_id, InitFeatures::from_le_bytes(vec![0b11]), 100000)];
+		let our_chans = [get_channel_details(Some(2), our_id, InitFeatures::from_le_bytes(vec![0b11]), 100000)];
 		let route_params = RouteParameters::from_payment_params_and_value(payment_params, 100);
 		if let Err(err) = get_route(&our_id,
@@ -4473,7 +4473,7 @@ mod tests {
 		} else { panic!(); }
 
 		// If we specify a channel to node7, that overrides our local channel view and that gets used
-		let our_chans = vec![get_channel_details(Some(42), nodes[7].clone(),
+		let our_chans = [get_channel_details(Some(42), nodes[7].clone(),
 			InitFeatures::from_le_bytes(vec![0b11]), 250_000_000)];
 		route_params.payment_params.max_path_length = 2;
 		let route = get_route(&our_id, &route_params, &network_graph.read_only(),
@@ -4521,7 +4521,7 @@ mod tests {
 		} else { panic!(); }
 
 		// If we specify a channel to node7, that overrides our local channel view and that gets used
-		let our_chans = vec![get_channel_details(Some(42), nodes[7].clone(),
+		let our_chans = [get_channel_details(Some(42), nodes[7].clone(),
 			InitFeatures::from_le_bytes(vec![0b11]), 250_000_000)];
 		let route = get_route(&our_id, &route_params, &network_graph.read_only(),
 			Some(&our_chans.iter().collect::<Vec<_>>()), Arc::clone(&logger), &scorer,
@@ -4586,7 +4586,7 @@ mod tests {
 		// If we specify a channel to node7, that overrides our local channel view and that gets used
 		let payment_params = PaymentParameters::from_node_id(nodes[2], 42);
 		let route_params = RouteParameters::from_payment_params_and_value(payment_params, 100);
-		let our_chans = vec![get_channel_details(Some(42), nodes[7].clone(),
+		let our_chans = [get_channel_details(Some(42), nodes[7].clone(),
 			InitFeatures::from_le_bytes(vec![0b11]), 250_000_000)];
 		let route = get_route(&our_id, &route_params, &network_graph.read_only(),
 			Some(&our_chans.iter().collect::<Vec<_>>()), Arc::clone(&logger), &scorer,
@@ -5137,7 +5137,7 @@ mod tests {
 		let random_seed_bytes = [42; 32];
 
 		// Simple test with outbound channel to 4 to test that last_hops and first_hops connect
-		let our_chans = vec![get_channel_details(Some(42), nodes[3].clone(), InitFeatures::from_le_bytes(vec![0b11]), 250_000_000)];
+		let our_chans = [get_channel_details(Some(42), nodes[3].clone(), InitFeatures::from_le_bytes(vec![0b11]), 250_000_000)];
 		let mut last_hops = last_hops(&nodes);
 		let payment_params = PaymentParameters::from_node_id(nodes[6], 42)
 			.with_route_hints(last_hops.clone()).unwrap();
@@ -5265,7 +5265,7 @@ mod tests {
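The `interactivetxs.rs` change above drops the `Default` impl for `StateMachine` and replaces `core::mem::take` with an explicit `mem::swap` against the `Indeterminate` placeholder, so `Default` can no longer be accidentally derived or misused elsewhere. A minimal sketch of the pattern with hypothetical types (not LDK's actual state machine):

```rust
use core::mem;

// Taking ownership of an enum through `&mut self` without a Default impl:
// swap in a placeholder variant, run the by-value transition, then store
// the result back.
enum State {
    // Placeholder only ever observed mid-transition.
    Indeterminate,
    Counting(u32),
}

impl State {
    // Transitions consume the state by value, which lets each variant move
    // its fields into the next state without cloning.
    fn step(self) -> State {
        match self {
            State::Indeterminate => State::Indeterminate,
            State::Counting(n) => State::Counting(n + 1),
        }
    }
}

fn do_transition(slot: &mut State) {
    let mut state = State::Indeterminate;
    mem::swap(&mut state, slot); // take ownership, leave the placeholder behind
    *slot = state.step(); // transition by value, store the result
}
```

`mem::take` would do the same in one call, but it requires `Default`; the explicit swap keeps the placeholder variant an internal detail of the transition macro rather than the type's public default.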
 			htlc_maximum_msat: last_hop_htlc_max,
 		}]);
 		let payment_params = PaymentParameters::from_node_id(target_node_id, 42).with_route_hints(vec![last_hops]).unwrap();
-		let our_chans = vec![get_channel_details(Some(42), middle_node_id, InitFeatures::from_le_bytes(vec![0b11]), outbound_capacity_msat)];
+		let our_chans = [get_channel_details(Some(42), middle_node_id, InitFeatures::from_le_bytes(vec![0b11]), outbound_capacity_msat)];
 		let scorer = ln_test_utils::TestScorer::new();
 		let random_seed_bytes = [42; 32];
 		let logger = ln_test_utils::TestLogger::new();
@@ -5442,7 +5442,7 @@ mod tests {
 		});
 
 		// Now, limit the first_hop by the next_outbound_htlc_limit_msat of 200_000 sats.
-		let our_chans = vec![get_channel_details(Some(42), nodes[0].clone(), InitFeatures::from_le_bytes(vec![0b11]), 200_000_000)];
+		let our_chans = [get_channel_details(Some(42), nodes[0].clone(), InitFeatures::from_le_bytes(vec![0b11]), 200_000_000)];
 
 		{
 			// Attempt to route more than available results in a failure.
@@ -7827,7 +7827,7 @@ mod tests {
 		let our_node_id = ln_test_utils::pubkey(42);
 		let intermed_node_id = ln_test_utils::pubkey(43);
-		let first_hop = vec![get_channel_details(Some(42), intermed_node_id, InitFeatures::from_le_bytes(vec![0b11]), 10_000_000)];
+		let first_hop = [get_channel_details(Some(42), intermed_node_id, InitFeatures::from_le_bytes(vec![0b11]), 10_000_000)];
 
 		let amt_msat = 900_000;
 		let max_htlc_msat = 500_000;
@@ -7874,7 +7874,7 @@ mod tests {
 		// Re-run but with two first hop channels connected to the same route hint peers that must be
 		// split between.
-		let first_hops = vec![
+		let first_hops = [
 			get_channel_details(Some(42), intermed_node_id, InitFeatures::from_le_bytes(vec![0b11]), amt_msat - 10),
 			get_channel_details(Some(43), intermed_node_id, InitFeatures::from_le_bytes(vec![0b11]), amt_msat - 10),
 		];
@@ -8286,8 +8286,9 @@ mod tests {
 			fee_proportional_millionths: 0,
 			excess_data: Vec::new()
 		});
 
-		let first_hops = vec![
-			get_channel_details(Some(1), nodes[1], InitFeatures::from_le_bytes(vec![0b11]), 10_000_000)];
+		let first_hops = [
+			get_channel_details(Some(1), nodes[1], InitFeatures::from_le_bytes(vec![0b11]), 10_000_000)
+		];
 
 		let blinded_payinfo = BlindedPayInfo {
 			fee_base_msat: 1000,
@@ -8347,9 +8348,10 @@ mod tests {
 		// Values are taken from the fuzz input that uncovered this panic.
 		let amt_msat = 21_7020_5185_1403_2640;
 		let (_, _, _, nodes) = get_nodes(&secp_ctx);
-		let first_hops = vec![
+		let first_hops = [
 			get_channel_details(Some(1), nodes[1], channelmanager::provided_init_features(&config),
-				18446744073709551615)];
+				18446744073709551615),
+		];
 
 		let blinded_payinfo = BlindedPayInfo {
 			fee_base_msat: 5046_2720,
@@ -8493,7 +8495,7 @@ mod tests {
 		let amt_msat = 7_4009_8048;
 		let (_, our_id, _, nodes) = get_nodes(&secp_ctx);
 		let first_hop_outbound_capacity = 2_7345_2000;
-		let first_hops = vec![get_channel_details(
+		let first_hops = [get_channel_details(
 			Some(200), nodes[0], channelmanager::provided_init_features(&config), first_hop_outbound_capacity
 		)];
@@ -8566,7 +8568,7 @@ mod tests {
 		// Values are taken from the fuzz input that uncovered this panic.
 		let amt_msat = 52_4288;
 		let (_, our_id, _, nodes) = get_nodes(&secp_ctx);
-		let first_hops = vec![get_channel_details(
+		let first_hops = [get_channel_details(
 			Some(161), nodes[0], channelmanager::provided_init_features(&config), 486_4000
 		), get_channel_details(
 			Some(122), nodes[0], channelmanager::provided_init_features(&config), 179_5000
@@ -8641,7 +8643,7 @@ mod tests {
 		// Values are taken from the fuzz input that uncovered this panic.
 		let amt_msat = 7_4009_8048;
 		let (_, our_id, privkeys, nodes) = get_nodes(&secp_ctx);
-		let first_hops = vec![get_channel_details(
+		let first_hops = [get_channel_details(
 			Some(200), nodes[0], channelmanager::provided_init_features(&config), 2_7345_2000
 		)];
@@ -8705,7 +8707,7 @@ mod tests {
 		// Values are taken from the fuzz input that uncovered this panic.
 		let amt_msat = 562_0000;
 		let (_, our_id, _, nodes) = get_nodes(&secp_ctx);
-		let first_hops = vec![
+		let first_hops = [
 			get_channel_details(
 				Some(83), nodes[0], channelmanager::provided_init_features(&config), 2199_0000,
 			),
@@ -8849,9 +8851,8 @@ mod tests {
 		// First create an insufficient first hop for channel with SCID 1 and check we'd use the
 		// route hint.
-		let first_hop = get_channel_details(Some(1), nodes[0],
-			channelmanager::provided_init_features(&config), 999_999);
-		let first_hops = vec![first_hop];
+		let first_hops = [get_channel_details(Some(1), nodes[0],
+			channelmanager::provided_init_features(&config), 999_999)];
 		let route = get_route(&our_node_id, &route_params.clone(), &network_graph.read_only(),
 			Some(&first_hops.iter().collect::<Vec<_>>()), Arc::clone(&logger), &scorer,
@@ -8867,7 +8868,7 @@ mod tests {
 		// for a first hop channel.
 		let mut first_hop = get_channel_details(Some(1), nodes[0],
 			channelmanager::provided_init_features(&config), 999_999);
 		first_hop.outbound_scid_alias = Some(44);
-		let first_hops = vec![first_hop];
+		let first_hops = [first_hop];
 
 		let route_res = get_route(&our_node_id, &route_params.clone(), &network_graph.read_only(),
 			Some(&first_hops.iter().collect::<Vec<_>>()), Arc::clone(&logger), &scorer,
@@ -8879,7 +8880,7 @@ mod tests {
 		let mut first_hop = get_channel_details(Some(1), nodes[0],
 			channelmanager::provided_init_features(&config), 10_000_000);
 		first_hop.outbound_scid_alias = Some(44);
-		let first_hops = vec![first_hop];
+		let first_hops = [first_hop];
 
 		let route = get_route(&our_node_id, &route_params.clone(), &network_graph.read_only(),
 			Some(&first_hops.iter().collect::<Vec<_>>()), Arc::clone(&logger), &scorer,
@@ -9002,8 +9003,9 @@ mod tests {
 		let amt_msat = 1_000_000;
 		let dest_node_id = nodes[1];
 
-		let first_hop = get_channel_details(Some(1), nodes[0], channelmanager::provided_init_features(&config), 10_000_000);
-		let first_hops = vec![first_hop];
+		let first_hops = [
+			get_channel_details(Some(1), nodes[0], channelmanager::provided_init_features(&config), 10_000_000),
+		];
 
 		let route_hint = RouteHint(vec![RouteHintHop {
 			src_node_id: our_node_id,
diff --git a/lightning/src/util/anchor_channel_reserves.rs b/lightning/src/util/anchor_channel_reserves.rs
index ebae770fb8a..e50e103211f 100644
--- a/lightning/src/util/anchor_channel_reserves.rs
+++ b/lightning/src/util/anchor_channel_reserves.rs
@@ -290,8 +290,8 @@ pub fn can_support_additional_anchor_channel<
 		>,
 	>,
 >(
-	context: &AnchorChannelReserveContext, utxos: &[Utxo], a_channel_manager: &AChannelManagerRef,
-	chain_monitor: &ChainMonitorRef,
+	context: &AnchorChannelReserveContext, utxos: &[Utxo], a_channel_manager: AChannelManagerRef,
+	chain_monitor: ChainMonitorRef,
 ) -> bool
 where
 	AChannelManagerRef::Target: AChannelManager,
diff --git a/lightning/src/util/config.rs b/lightning/src/util/config.rs
index 8451ac09f23..0856dc96394 100644
--- a/lightning/src/util/config.rs
+++ b/lightning/src/util/config.rs
@@ -213,6 +213,11 @@ pub struct ChannelHandshakeConfig {
 	/// back to a `anchors_zero_fee_htlc` (if [`Self::negotiate_anchors_zero_fee_htlc_tx`]
 	/// is set) or `static_remote_key` channel.
 	///
+	/// For a force-close transaction to reach miners and get confirmed,
+	/// zero-fee commitment channels require a path from your Bitcoin node to miners that
+	/// relays TRUC transactions (BIP 431), P2A outputs, and Ephemeral Dust. Currently, only
+	/// nodes running Bitcoin Core v29 and above relay transactions with these features.
+	///
 	/// Default value: `false` (This value is likely to change to `true` in the future.)
 	///
 	/// [TRUC]: (https://bitcoinops.org/en/topics/version-3-transaction-relay/)
diff --git a/lightning/src/util/persist.rs b/lightning/src/util/persist.rs
index 434e16d629e..e75f35e65cd 100644
--- a/lightning/src/util/persist.rs
+++ b/lightning/src/util/persist.rs
@@ -129,7 +129,7 @@ pub trait KVStoreSync {
 	) -> Result<(), io::Error>;
 
 	/// A synchronous version of the [`KVStore::remove`] method.
 	fn remove(
-		&self, primary_namespace: &str, secondary_namespace: &str, key: &str,
+		&self, primary_namespace: &str, secondary_namespace: &str, key: &str, lazy: bool,
 	) -> Result<(), io::Error>;
 
 	/// A synchronous version of the [`KVStore::list`] method.
 	fn list(
@@ -175,9 +175,9 @@ where
 	}
 
 	fn remove(
-		&self, primary_namespace: &str, secondary_namespace: &str, key: &str,
+		&self, primary_namespace: &str, secondary_namespace: &str, key: &str, lazy: bool,
 	) -> AsyncResult<'static, (), io::Error> {
-		let res = self.0.remove(primary_namespace, secondary_namespace, key);
+		let res = self.0.remove(primary_namespace, secondary_namespace, key, lazy);
 		Box::pin(async move { res })
 	}
@@ -245,11 +245,26 @@ pub trait KVStore {
 	) -> AsyncResult<'static, (), io::Error>;
 
 	/// Removes any data that had previously been persisted under the given `key`.
 	///
+	/// If the `lazy` flag is set to `true`, the backend implementation might choose to lazily
+	/// remove the given `key` at some point in time after the method returns, e.g., as part of an
+	/// eventual batch deletion of multiple keys. As a consequence, subsequent calls to
+	/// [`KVStoreSync::list`] might include the removed key until the changes are actually persisted.
+	///
+	/// Note that while setting the `lazy` flag reduces the I/O burden of multiple subsequent
+	/// `remove` calls, it also influences the atomicity guarantees as lazy `remove`s could
+	/// potentially get lost on crash after the method returns. Therefore, this flag should only be
+	/// set for `remove` operations that can be safely replayed at a later time.
+	///
+	/// All removal operations must complete in a consistent total order with [`Self::write`]s
+	/// to the same key. Whether a removal operation is `lazy` or not, [`Self::write`] operations
+	/// to the same key which occur before a removal completes must cancel/overwrite the pending
+	/// removal.
+	///
 	/// Returns successfully if no data will be stored for the given `primary_namespace`,
 	/// `secondary_namespace`, and `key`, independently of whether it was present before its
 	/// invocation or not.
 	fn remove(
-		&self, primary_namespace: &str, secondary_namespace: &str, key: &str,
+		&self, primary_namespace: &str, secondary_namespace: &str, key: &str, lazy: bool,
 	) -> AsyncResult<'static, (), io::Error>;
 	/// Returns a list of keys that are stored under the given `secondary_namespace` in
 	/// `primary_namespace`.
@@ -353,6 +368,7 @@ impl Persist(future: F) -> F::Output {
 ///
 /// Stale updates are pruned when the consolidation threshold is reached according to `maximum_pending_updates`.
 /// Monitor updates in the range between the latest `update_id` and `update_id - maximum_pending_updates`
-/// are deleted. If you have many stale updates stored and would like to get rid of them, consider using the
+/// are deleted.
+/// The `lazy` flag is used on the [`KVStoreSync::remove`] method, so there are no guarantees that the deletions
+/// will complete. However, stale updates are not a problem for data integrity, since only updates
+/// with an `update_id` higher than the stored [`ChannelMonitor`]'s are read.
+///
+/// If you have many stale updates stored (such as after a crash with pending lazy deletes), and
+/// would like to get rid of them, consider using the
 /// [`MonitorUpdatingPersister::cleanup_stale_updates`] function.
 pub struct MonitorUpdatingPersister(
 	MonitorUpdatingPersisterAsync, PanicingSpawner, L, ES, SP, BI, FE>,
@@ -605,9 +627,10 @@ where
 	///
 	/// This function works by first listing all monitors, and then for each of them, listing all
 	/// updates. The updates that have an `update_id` less than or equal to the stored monitor
-	/// are deleted.
-	pub fn cleanup_stale_updates(&self) -> Result<(), io::Error> {
-		poll_sync_future(self.0.cleanup_stale_updates())
+	/// are deleted. The deletion can either be lazy or non-lazy based on the `lazy` flag; this will
+	/// be passed to [`KVStoreSync::remove`].
+	pub fn cleanup_stale_updates(&self, lazy: bool) -> Result<(), io::Error> {
+		poll_sync_future(self.0.cleanup_stale_updates(lazy))
 	}
 }
@@ -824,9 +847,10 @@ where
 	///
 	/// This function works by first listing all monitors, and then for each of them, listing all
 	/// updates. The updates that have an `update_id` less than or equal to the stored monitor
-	/// are deleted.
-	pub async fn cleanup_stale_updates(&self) -> Result<(), io::Error> {
-		self.0.cleanup_stale_updates().await
+	/// are deleted. The deletion can either be lazy or non-lazy based on the `lazy` flag; this will
+	/// be passed to [`KVStoreSync::remove`].
+	pub async fn cleanup_stale_updates(&self, lazy: bool) -> Result<(), io::Error> {
+		self.0.cleanup_stale_updates(lazy).await
 	}
 }
@@ -931,9 +955,12 @@ where
 			Some(res) => Ok(res),
 			None => Err(io::Error::new(
 				io::ErrorKind::InvalidData,
-				"ChannelMonitor was stale, with no updates since LDK 0.0.118. \
+				format!(
+					"ChannelMonitor {} was stale, with no updates since LDK 0.0.118. \
 				It cannot be read by modern versions of LDK, though also does not contain any funds left to sweep. \
 				You should manually delete it instead",
+					monitor_key,
+				),
 			)),
 		}
 	}
@@ -1049,7 +1076,7 @@ where
 		})
 	}

-	async fn cleanup_stale_updates(&self) -> Result<(), io::Error> {
+	async fn cleanup_stale_updates(&self, lazy: bool) -> Result<(), io::Error> {
 		let primary = CHANNEL_MONITOR_PERSISTENCE_PRIMARY_NAMESPACE;
 		let secondary = CHANNEL_MONITOR_PERSISTENCE_SECONDARY_NAMESPACE;
 		let monitor_keys = self.kv_store.list(primary, secondary).await?;
@@ -1058,7 +1085,8 @@ where
 			let maybe_monitor = self.maybe_read_monitor(&monitor_name, &monitor_key).await?;
 			if let Some((_, current_monitor)) = maybe_monitor {
 				let latest_update_id = current_monitor.get_latest_update_id();
-				self.cleanup_stale_updates_for_monitor_to(&monitor_key, latest_update_id).await?;
+				self.cleanup_stale_updates_for_monitor_to(&monitor_key, latest_update_id, lazy)
+					.await?;
 			} else {
 				// TODO: Also clean up super stale monitors (created pre-0.0.110 and last updated
 				// pre-0.0.119).
@@ -1068,7 +1096,7 @@ where
 	}

 	async fn cleanup_stale_updates_for_monitor_to(
-		&self, monitor_key: &str, latest_update_id: u64,
+		&self, monitor_key: &str, latest_update_id: u64, lazy: bool,
 	) -> Result<(), io::Error> {
 		let primary = CHANNEL_MONITOR_UPDATE_PERSISTENCE_PRIMARY_NAMESPACE;
 		let updates = self.kv_store.list(primary, monitor_key).await?;
@@ -1076,7 +1104,7 @@ where
 			let update_name = UpdateName::new(update)?;
 			// if the update_id is lower than the stored monitor, delete
 			if update_name.0 <= latest_update_id {
-				self.kv_store.remove(primary, monitor_key, update_name.as_str()).await?;
+				self.kv_store.remove(primary, monitor_key, update_name.as_str(), lazy).await?;
 			}
 		}
 		Ok(())
@@ -1152,6 +1180,7 @@ where
 					self.cleanup_stale_updates_for_monitor_to(
 						&monitor_key,
 						latest_update_id,
+						true,
 					)
 					.await?;
 				} else {
@@ -1202,7 +1231,7 @@ where
 		};
 		let primary = CHANNEL_MONITOR_PERSISTENCE_PRIMARY_NAMESPACE;
 		let secondary = CHANNEL_MONITOR_PERSISTENCE_SECONDARY_NAMESPACE;
-		let _ = self.kv_store.remove(primary, secondary, &monitor_key).await;
+		let _ = self.kv_store.remove(primary, secondary, &monitor_key, true).await;
 	}

 	// Cleans up monitor updates for given monitor in range `start..=end`.
@@ -1211,7 +1240,7 @@ where
 		for update_id in start..=end {
 			let update_name = UpdateName::from(update_id);
 			let primary = CHANNEL_MONITOR_UPDATE_PERSISTENCE_PRIMARY_NAMESPACE;
-			let res = self.kv_store.remove(primary, &monitor_key, update_name.as_str()).await;
+			let res = self.kv_store.remove(primary, &monitor_key, update_name.as_str(), true).await;
 			if let Err(e) = res {
 				log_error!(
 					self.logger,
@@ -1800,7 +1829,7 @@ mod tests {
 			.unwrap();

 		// Do the stale update cleanup
-		persister_0.cleanup_stale_updates().unwrap();
+		persister_0.cleanup_stale_updates(false).unwrap();

 		// Confirm the stale update is unreadable/gone
 		assert!(KVStoreSync::read(
diff --git a/lightning/src/util/test_utils.rs b/lightning/src/util/test_utils.rs
index 1931287ab6a..54ed67d4714 100644
--- a/lightning/src/util/test_utils.rs
+++ b/lightning/src/util/test_utils.rs
@@ -966,7 +966,7 @@ impl TestStore {
 	}

 	fn remove_internal(
-		&self, primary_namespace: &str, secondary_namespace: &str, key: &str,
+		&self, primary_namespace: &str, secondary_namespace: &str, key: &str, _lazy: bool,
 	) -> io::Result<()> {
 		if self.read_only {
 			return Err(io::Error::new(
@@ -1030,9 +1030,9 @@ impl KVStore for TestStore {
 		Box::pin(OneShotChannel(future))
 	}
 	fn remove(
-		&self, primary_namespace: &str, secondary_namespace: &str, key: &str,
+		&self, primary_namespace: &str, secondary_namespace: &str, key: &str, lazy: bool,
 	) -> AsyncResult<'static, (), io::Error> {
-		let res = self.remove_internal(&primary_namespace, &secondary_namespace, &key);
+		let res = self.remove_internal(&primary_namespace, &secondary_namespace, &key, lazy);
 		Box::pin(async move { res })
 	}
 	fn list(
@@ -1080,9 +1080,9 @@ impl KVStoreSync for TestStore {
 	}

 	fn remove(
-		&self, primary_namespace: &str, secondary_namespace: &str, key: &str,
+		&self, primary_namespace: &str, secondary_namespace: &str, key: &str, lazy: bool,
 	) -> io::Result<()> {
-		self.remove_internal(primary_namespace, secondary_namespace, key)
+		self.remove_internal(primary_namespace, secondary_namespace, key, lazy)
 	}

 	fn list(&self, primary_namespace: &str, secondary_namespace: &str) -> io::Result<Vec<String>> {
diff --git a/pending_changelog/3531-buggy-router-leak.txt b/pending_changelog/3531-buggy-router-leak.txt
deleted file mode 100644
index 72714aa8a8b..00000000000
--- a/pending_changelog/3531-buggy-router-leak.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-## Bug Fixes
-
-* Fixed a rare case where a custom router returning a buggy route could result in holding onto a
-  pending payment forever and in some cases failing to generate a PaymentFailed event (#3531).
diff --git a/pending_changelog/3604-upgrades-prior-to-113-not-supported.txt b/pending_changelog/3604-upgrades-prior-to-113-not-supported.txt
deleted file mode 100644
index 94d622cda23..00000000000
--- a/pending_changelog/3604-upgrades-prior-to-113-not-supported.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-## API Updates (0.2)
- * Upgrading from versions prior to 0.0.113 is no longer supported (#3604).
diff --git a/pending_changelog/3638-0.2-upgrade-without-counterparty-node-id-in-monitor-not-supported.txt b/pending_changelog/3638-0.2-upgrade-without-counterparty-node-id-in-monitor-not-supported.txt
deleted file mode 100644
index ac5d1f93216..00000000000
--- a/pending_changelog/3638-0.2-upgrade-without-counterparty-node-id-in-monitor-not-supported.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-## API Updates (0.2)
-
-* Upgrading to v0.2.0 is not allowed when a `ChannelMonitor` that does not track the channel's
-  `counterparty_node_id` is loaded. Upgrade to a v0.1.* release first and either send/route a
-  payment over the channel, or close it, before upgrading to v0.2.0.
diff --git a/pending_changelog/3664-downgrades-to-0.0.115-not-supported.txt b/pending_changelog/3664-downgrades-to-0.0.115-not-supported.txt
deleted file mode 100644
index 9bb11831048..00000000000
--- a/pending_changelog/3664-downgrades-to-0.0.115-not-supported.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-# API Updates (0.2)
-
-Downgrading to v0.0.115 is no longer supported if a node has an HTLC routed/settled while running v0.2 or later.
diff --git a/pending_changelog/3678-channel-type-check.txt b/pending_changelog/3678-channel-type-check.txt
deleted file mode 100644
index 39efe7cfe71..00000000000
--- a/pending_changelog/3678-channel-type-check.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-## API Updates (0.2)
-
-* Upgrading to v0.2.0 from a version prior to 0.0.116 is not allowed when a channel was opened with
-  either `scid_privacy` or `zero_conf` included in its channel type. Upgrade to v0.0.116 first
-  before upgrading to v0.2.0.
diff --git a/pending_changelog/3700-reason-in-handling-failed.txt b/pending_changelog/3700-reason-in-handling-failed.txt
deleted file mode 100644
index 5a8643554df..00000000000
--- a/pending_changelog/3700-reason-in-handling-failed.txt
+++ /dev/null
@@ -1,8 +0,0 @@
-## API Updates (0.2)
-
-* The `HTLCHandlingFailed` event was updated to include a `failure_reason` field that provides
-  additional information about why the HTLC was failed.
-* The `failed_next_destination` field, which previously contained a combination of failure
-  and HTLC-related information, was renamed to `failure_type` and the `UnknownNextHop`
-  variant was deprecated. This type will be represented as `InvalidForward` for nodes
-  downgrading from v0.2.0.
diff --git a/pending_changelog/3881.txt b/pending_changelog/3881.txt
deleted file mode 100644
index ae9631f8ebc..00000000000
--- a/pending_changelog/3881.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-API Updates
-===========
- * Various instances of channel closure which provided a
-   `ClosureReason::HolderForceClosed` now provide more accurate
-   `ClosureReason`s, especially `ClosureReason::ProcessingError` (#3881).
- * A new `ClosureReason::LocallyCoopClosedUnfundedChannel` was added (#3881).
diff --git a/pending_changelog/3905-async-background-persistence.txt b/pending_changelog/3905-async-background-persistence.txt
deleted file mode 100644
index caa16d34895..00000000000
--- a/pending_changelog/3905-async-background-persistence.txt
+++ /dev/null
@@ -1,9 +0,0 @@
-## API Updates (0.2)
-
-* The `Persister` trait has been removed, and `KVStore` is now used directly. If you're persisting `ChannelManager`,
-`NetworkGraph`, or `Scorer` to a custom location, you can maintain that behavior by intercepting and rewriting the
-corresponding namespaces and keys.
-
-* The `KVStore` trait has been updated to be asynchronous, while the original synchronous version is now available as
-`KVStoreSync`. For channel persistence, `KVStoreSync` is still mandatory. However, for background persistence, an
-asynchronous `KVStore` can be provided optionally.
diff --git a/pending_changelog/3917-blinded-path-auth.txt b/pending_changelog/3917-blinded-path-auth.txt
deleted file mode 100644
index 4917da4a3d2..00000000000
--- a/pending_changelog/3917-blinded-path-auth.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-## Backwards Compat
-
-* Upgrading to v0.2.0 will invalidate existing `Refund`s containing blinded paths.
diff --git a/pending_changelog/3918-expiry-time-when-waiting-often-offline-peer-asyncpayment.txt b/pending_changelog/3918-expiry-time-when-waiting-often-offline-peer-asyncpayment.txt
deleted file mode 100644
index 61ae1cbd21c..00000000000
--- a/pending_changelog/3918-expiry-time-when-waiting-often-offline-peer-asyncpayment.txt
+++ /dev/null
@@ -1,4 +0,0 @@
- ## API Updates (0.2)
-
-* Upgrading to v0.2.0 will timeout any pending async payment waiting for the often offline peer
-  come online.
diff --git a/pending_changelog/4045-sender-lsp.txt b/pending_changelog/4045-sender-lsp.txt
deleted file mode 100644
index fa4243b5b7f..00000000000
--- a/pending_changelog/4045-sender-lsp.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-## Backwards Compat
-
-* Downgrading to prior versions of LDK after setting `UserConfig::enable_htlc_hold` may cause
-  `ChannelManager` deserialization to fail or HTLCs to time out (#4045, #4046)