
Add archive_fully_resolved_monitors to ChainMonitor #2964

Merged: 2 commits merged into main from prune-stale-chanmonitor on Apr 18, 2024

Conversation

@jbesraa (Contributor) commented Mar 24, 2024

An attempt to resolve #2884

```rust
        }
    }
    for funding_outpoint in to_remove {
        monitors.remove(&funding_outpoint);
```
TheBlueMatt (Collaborator):

We want to do this at the storage layer too, but IMO we should do it by moving the monitor into a new pruned_monitors namespace, not deleting.
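For illustration, a storage-layer move along these lines might look like the following sketch. The `archive_monitor` helper and the `"archived_monitors"` namespace name are hypothetical; the `KVStore` trait and the monitor namespace constants are LDK's existing persistence API, and this is not the PR's final implementation:

```rust
use lightning::util::persist::{
    KVStore, CHANNEL_MONITOR_PERSISTENCE_PRIMARY_NAMESPACE,
    CHANNEL_MONITOR_PERSISTENCE_SECONDARY_NAMESPACE,
};

// Hypothetical helper: move a monitor's bytes into an archive namespace
// instead of deleting them outright.
fn archive_monitor<K: KVStore>(store: &K, monitor_key: &str) -> Result<(), std::io::Error> {
    // Read the live copy first.
    let monitor_bytes = store.read(
        CHANNEL_MONITOR_PERSISTENCE_PRIMARY_NAMESPACE,
        CHANNEL_MONITOR_PERSISTENCE_SECONDARY_NAMESPACE,
        monitor_key,
    )?;
    // Write it under the archive namespace before touching the live copy,
    // so a crash between the two steps never loses the monitor.
    store.write("archived_monitors", "", monitor_key, &monitor_bytes)?;
    // Only now remove the live copy.
    store.remove(
        CHANNEL_MONITOR_PERSISTENCE_PRIMARY_NAMESPACE,
        CHANNEL_MONITOR_PERSISTENCE_SECONDARY_NAMESPACE,
        monitor_key,
        false, // not lazy: ensure the removal is durable
    )
}
```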

```rust
let mut monitors = self.monitors.write().unwrap();
let mut to_remove = Vec::new();
for (funding_outpoint, monitor_holder) in monitors.iter() {
    if monitor_holder.monitor.is_stale() {
```
TheBlueMatt (Collaborator):

I'm not convinced we want to unilaterally remove everything. We've generally recommended users prune only after both the monitor has no claimable balances (i.e. `is_stale()`) and some time has passed. We could track that internally in the ChannelMonitor (and maybe should), but if we don't, we should have users specify the list of monitors to prune manually.

```rust
@@ -58,6 +58,11 @@ pub const CHANNEL_MONITOR_PERSISTENCE_SECONDARY_NAMESPACE: &str = "";
/// The primary namespace under which [`ChannelMonitorUpdate`]s will be persisted.
pub const CHANNEL_MONITOR_UPDATE_PERSISTENCE_PRIMARY_NAMESPACE: &str = "monitor_updates";

/// The primary namespace under which [`ChannelMonitor`]s will be persisted.
pub const PRUNED_CHANNEL_MONITOR_PERSISTENCE_PRIMARY_NAMESPACE: &str = "pruned_monitors";
```
jbesraa (Contributor Author):

@TheBlueMatt In which part of the code do you think I should use those namespaces? Should I add a separate function, or persist into this namespace before the remove?

```rust
@@ -501,6 +501,10 @@ impl<Signer: sign::ecdsa::WriteableEcdsaChannelSigner> chainmonitor::Persist<Sig
    }
    res
}

fn prune_persisted_channel(&self, _funding_txo: OutPoint) -> bool {
    false
```
jbesraa (Contributor Author):

If I do `self.persister.prune..` here I get some type errors; didn't have time today to look into it any further.

```rust
let monitors_b = binding.first().unwrap();
let outpoint = monitors_b.0.clone();
dbg!(&outpoint);
// nodes[1].chain_monitor().unwrap().chain_monitor.prune_stale_channel_monitors(vec![outpoint]); // lock order problem
```
jbesraa (Contributor Author):

The test is still not there yet; I was hoping to at least run the `prune_stale_channel_monitors` function from here, but ran into a lock order problem.

```rust
///
/// This is called when a channel is stale, ie we dont have balance to claim and its
/// closed.
fn prune_persisted_channel(&self, channel_funding_outpoint: OutPoint) -> bool;
```
TheBlueMatt (Collaborator):

IMO let's call this "archive" rather than prune?

TheBlueMatt (Collaborator):

Also, why bother with a return value?


```rust
/// Prune a channel's data.
///
/// This is called when a channel is stale, ie we dont have balance to claim and its
```
TheBlueMatt (Collaborator):

We should have a whole discussion of safety here, and probably say something about how you really want to archive (to hedge against LDK bugs), but that generally this is for when a monitor is completely used up.

```rust
/// Prune a channel's data.
///
/// This is called when a channel is stale, ie we dont have balance to claim and its
/// closed.
```
TheBlueMatt (Collaborator):

Something about how all balances have converted to ClaimableOutput events?

```rust
@@ -1855,6 +1855,10 @@ impl<Signer: WriteableEcdsaChannelSigner> ChannelMonitor<Signer> {
    spendable_outputs
}

pub(crate) fn is_stale(&self) -> bool {
    self.get_claimable_balances().is_empty()
```
TheBlueMatt (Collaborator):

Please also persist the block height at which we hit this here, then only return true if it's old. I.e. the first time this hits empty we should update a field that just says "oh, we first hit empty at height X", persist that new field, then every time this is called after that we debug_assert that the claimable balances remain empty, and then when height X reaches, say, two weeks old, go ahead and return true.
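A rough sketch of the flow described here, assuming a hypothetical `balances_empty_height` field on the monitor's inner state and keeping the `is_stale` name from this revision (the exact names changed over the course of the review), with "two weeks" approximated as 2016 blocks:

```rust
// Sketch only: record the first height at which the balance set went empty,
// debug_assert it stays empty, and report stale once that height is old enough.
fn is_stale(&self) -> bool {
    let balances_empty = self.get_claimable_balances().is_empty();
    let current_height = self.current_best_block().height;
    let mut inner = self.inner.lock().unwrap();
    if let Some(first_empty_height) = inner.balances_empty_height {
        // Once empty, the balance set should never regain entries.
        debug_assert!(balances_empty, "claimable balances reappeared after going empty");
        // "Two weeks" at ~144 blocks/day * 14 days = 2016 blocks.
        current_height >= first_empty_height + 2016
    } else if balances_empty {
        // First time we've seen the balance set empty: remember (and persist)
        // the height, but don't report stale yet.
        inner.balances_empty_height = Some(current_height);
        false
    } else {
        false
    }
}
```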

@TheBlueMatt (Collaborator):
This needs a small rebase now. Will you have time to work on this today or tomorrow? We'd like to cut an RC of the next release tomorrow and would like to include this. If not, no big deal, but I may take it over.

@TheBlueMatt added this to the 0.0.122 milestone on Mar 28, 2024
```rust
for funding_txo in to_archive {
    let channel_monitor = monitors.get(&funding_txo);
    if let Some(channel_monitor) = channel_monitor {
        if channel_monitor.monitor.is_stale()
```
jbesraa (Contributor Author):

This `if` is safe because Rust evaluates it left to right, so the channel will be archived only if it is stale, and it will be removed only if it was archived.
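To illustrate the short-circuit behavior being relied on (a sketch with hypothetical names; at this point in the review the archive call still returned a `bool`):

```rust
// `&&` short-circuits left to right: `archive_persisted_channel` only runs if
// `is_stale()` returned true, and `remove` only runs if the archive succeeded.
if channel_monitor.monitor.is_stale()
    && persister.archive_persisted_channel(funding_txo)
{
    monitors.remove(&funding_txo);
}
```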

```rust
@@ -536,6 +576,46 @@
        }
    }
}
/// Read an archived channel monitor.
fn read_archived_channel_monitor(&self, archived_monitor_name: &MonitorName) -> Result<(BlockHash, ChannelMonitor<<SP::Target as SignerProvider>::EcdsaSigner>), io::Error> {
```
jbesraa (Contributor Author):

This is actually not used; it's leftover from some experiments.

TheBlueMatt (Collaborator):

Yea, I don't think we need to bother having this here currently. It may be useful for some users if we screw up but they can also just move things from one key to another to unarchive them.

@jbesraa (Contributor Author) commented Mar 28, 2024

> This needs a small rebase now. Will you have time to work on this today or tomorrow? We'd like to cut an RC of the next release tomorrow and would like to include this. If not, no big deal, but I may take it over.

Pushed new commits & rebased. I think the main thing that is missing now is testing; happy to complete this tomorrow.

@codecov-commenter commented Mar 28, 2024

Codecov Report

Attention: Patch coverage is 68.98734%, with 49 lines of your changes missing coverage. Please review.

Project coverage is 90.03%. Comparing base (1d2a27d) to head (4b55043).
Report is 45 commits behind head on main.

```
Files                                    Patch %   Lines
lightning/src/util/persist.rs            31.91%    31 Missing and 1 partial ⚠️
lightning/src/chain/channelmonitor.rs    64.51%    11 Missing ⚠️
lightning/src/util/test_utils.rs         69.23%    4 Missing ⚠️
lightning/src/chain/chainmonitor.rs      95.00%    1 Missing ⚠️
lightning/src/ln/monitor_tests.rs        97.87%    0 Missing and 1 partial ⚠️
```


Additional details and impacted files
```
@@            Coverage Diff             @@
##             main    #2964      +/-   ##
==========================================
+ Coverage   89.40%   90.03%   +0.62%
==========================================
  Files         117      118       +1
  Lines       96016   105242    +9226
  Branches    96016   105242    +9226
==========================================
+ Hits        85842    94753    +8911
- Misses       7957     8297     +340
+ Partials     2217     2192      -25
```


@TheBlueMatt (Collaborator) left a comment:

Thanks for being willing to work on this on short notice!

```rust
@@ -934,6 +934,9 @@ pub(crate) struct ChannelMonitorImpl<Signer: WriteableEcdsaChannelSigner> {
    /// Ordering of tuple data: (their_per_commitment_point, feerate_per_kw, to_broadcaster_sats,
    /// to_countersignatory_sats)
    initial_counterparty_commitment_info: Option<(PublicKey, u32, u64, u64)>,

    /// The latest block height we've seen at the time of checking for stale channels.
```
TheBlueMatt (Collaborator):

Suggested change:

```diff
-/// The latest block height we've seen at the time of checking for stale channels.
+/// The first block height at which we had no remaining claimable balances.
```

```rust
@@ -934,6 +934,9 @@ pub(crate) struct ChannelMonitorImpl<Signer: WriteableEcdsaChannelSigner> {
    /// Ordering of tuple data: (their_per_commitment_point, feerate_per_kw, to_broadcaster_sats,
    /// to_countersignatory_sats)
    initial_counterparty_commitment_info: Option<(PublicKey, u32, u64, u64)>,

    /// The latest block height we've seen at the time of checking for stale channels.
    latest_stale_tip: Option<u32>,
```
TheBlueMatt (Collaborator):

Not sure the right name for this var, really.

Suggested change:

```diff
-latest_stale_tip: Option<u32>,
+balances_empty_height: Option<u32>,
```

Comment on lines 1862 to 1894
```rust
fn latest_stale_tip(&self) -> Option<u32> {
    let inner = self.inner.lock().unwrap();
    inner.latest_stale_tip
}

fn set_latest_stale_tip(&self, latest_stale_tip: Option<u32>) {
    let mut inner = self.inner.lock().unwrap();
    inner.latest_stale_tip = latest_stale_tip;
}
```
TheBlueMatt (Collaborator):

We don't need getters and setters for an internal field.

Comment on lines 1875 to 1891
```rust
let two_weeks = 2016;
let is_below_threshold = self.current_best_block().height > latest_stale_tip + two_weeks;
is_below_threshold && self.get_claimable_balances().is_empty()
} else {
    let best_block = self.current_best_block();
    self.set_latest_stale_tip(Some(best_block.height));
    false
```
TheBlueMatt (Collaborator):

Not quite sure what's going on here. The goal is to ensure that get_claimable_balances has been empty for at least two weeks. Thus, we should (a) fetch get_claimable_balances; if it's empty, (1) set the field if it's unset, or (2) if the field is set, return true if it's two weeks old; and (b) if the field is set, make sure get_claimable_balances returned an empty set via a debug_assertion.


@jbesraa force-pushed the prune-stale-chanmonitor branch 8 times, most recently from acf2c77 to 6d58544, on March 29, 2024
@jbesraa (Contributor Author) commented Mar 29, 2024

@TheBlueMatt I cleaned up the commits and completed the requested changes. I spent some time trying to test the archive_stale_channel_monitors function without success (error reference: https://github.com/lightningdevkit/rust-lightning/actions/runs/8481391972/job/23238659966?pr=2964#step:7:5206). There could be something wrong with the way I implemented the Persist trait for the TestPersister.

Unfortunately, I have to drop off in a bit and won't have more time for this today. If you need this done today, feel free to take over; otherwise I'll continue tomorrow morning (EET).

@jbesraa jbesraa changed the title Add prune_stale_channel_monitors to ChainMonitor Add archive_stale_channel_monitors to ChainMonitor Mar 29, 2024
@TheBlueMatt (Collaborator) left a comment:

A few comments. Tomorrow is probably okay given we have a lot of other PRs to land too, but if we make good progress today I'll push some updates here.

Comment on lines 1862 to 1863
```rust
/// Checks if the monitor is stale, meaning it has not been updated with a new block data in a
/// substantial amount of time and thus has no outputs to watch, ie
```
TheBlueMatt (Collaborator):

This seems to imply that a monitor is "stale" if the user has forgotten to feed it block data for a while. Maybe instead of calling it stale we go with something like fully_resolved and reword the docs here to just talk about how all outputs have been claimed and get_claimable_balances is empty?

Comment on lines 1866 to 1869
```rust
/// The first time this method is called it will save the current known height of the monitor
/// if all funds are claimed. Otherwise, if we yet to claim all funds, it will return false. If all funds
/// are claimed, it will return true if the monitor has not been updated with new block data in
/// a substantial amount of time referred as `threshold` and `balances_empty_height` is set.
```
TheBlueMatt (Collaborator):

I think we can just rephrase this whole paragraph as "will only return true once get_claimable_balances has been empty for at least two weeks".

Comment on lines 1877 to 1880
```rust
// This if can be ture at least in the second time this method is called. we check if
// the monitor has not been updated with new block data to watch new ouputs in a
// substantial amount(2016 blocks) of time meaning the channel is probably closed and
// this monitor is not expected to be able to claim any funds on chain.
```
TheBlueMatt (Collaborator):

This comment seems to imply we actually expect get_claimable_balances to gain new entries after going empty, but we strongly do not. Can you rephrase it to highlight that we expect this to never happen, but that we handle the case here to ensure that, if there's a bug, we don't spuriously prune a monitor?

```rust
// the monitor has not been updated with new block data to watch new ouputs in a
// substantial amount(2016 blocks) of time meaning the channel is probably closed and
// this monitor is not expected to be able to claim any funds on chain.
debug_assert!(is_all_funds_claimed, "Trying to check if monitor is stale before all funds are claimed. Aborting.");
```
TheBlueMatt (Collaborator):

This is great, but in release builds we should also handle the `!is_all_funds_claimed` case - i.e. we need an `if !is_all_funds_claimed { inner.balances_empty_height = None; balances_empty_height = None; }`.
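In context, the suggested release-build handling might look like this sketch (variable names taken from the snippets above; the surrounding method is elided):

```rust
// The existing debug_assert catches the bug in test builds...
debug_assert!(is_all_funds_claimed,
    "Trying to check if monitor is stale before all funds are claimed. Aborting.");
// ...but release builds must also reset the recorded height, so a spuriously
// non-empty balance set can never lead to archiving a monitor too early.
if !is_all_funds_claimed {
    inner.balances_empty_height = None;
}
```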

```rust
let is_all_funds_claimed = self.get_claimable_balances().is_empty();
let current_height = self.current_best_block().height;
let inner = self.inner.lock().unwrap();
let balances_empty_height = inner.balances_empty_height;
```
TheBlueMatt (Collaborator):

nit: I'm not sure there's any value in copying this into a local variable, it just means more stuff flying around.

```rust
// current height and start counting from there.
// This is to make sure we don't consider the monitor stale on the first call to
// `is_stale` after all funds are claimed.
let mut inner = inner;
```
TheBlueMatt (Collaborator):

Please drop this and just make the top inner mut if you need it.

```rust
    &monitor
) {
    Ok(()) => {}
    Err(_e) => return chain::ChannelMonitorUpdateStatus::UnrecoverableError // TODO: Should we return UnrecoverableError here?
```
TheBlueMatt (Collaborator):

Yea, that's correct.

@TheBlueMatt (Collaborator):
Note that CI is still failing.

@jbesraa force-pushed the prune-stale-chanmonitor branch 2 times, most recently from 4a2a20c to 36dbada, on April 1, 2024
```rust
/// Prevents the channel monitor from being loaded on startup.
///
/// Archiving the data in a backup location (rather than deleting it fully) is useful for
/// hedging against data loss in case
```
TheBlueMatt (Collaborator):

in case...what? Please review your own commits before pushing to catch stuff like this.

```rust
/// This is useful for pruning fully resolved monitors from the monitor set and primary
/// storage so they are not kept in memory and reloaded on restart.
///
/// Depending on the `[Persist::archive_persisted_channel]` the monitor data could be moved
```
TheBlueMatt (Collaborator):

Suggested change:

```diff
-/// Depending on the `[Persist::archive_persisted_channel]` the monitor data could be moved
+/// Depending on the implementation of [`Persist::archive_persisted_channel`] the monitor data could be moved
```

```rust
@@ -259,6 +259,21 @@ fn do_chanmon_claim_value_coop_close(anchors: bool) {

check_closed_event!(nodes[0], 1, ClosureReason::LocallyInitiatedCooperativeClosure, [nodes[1].node.get_our_node_id()], 1000000);
check_closed_event!(nodes[1], 1, ClosureReason::CounterpartyInitiatedCooperativeClosure, [nodes[0].node.get_our_node_id()], 1000000);
```
TheBlueMatt (Collaborator):

Are you going to address this?

@jbesraa force-pushed the prune-stale-chanmonitor branch 4 times, most recently from 513557a to f7a735d, on April 12, 2024
@jbesraa jbesraa changed the title Add archive_stale_channel_monitors to ChainMonitor Add archive_fully_resolved_monitors to ChainMonitor Apr 12, 2024
@valentinewallace (Contributor) left a comment:

No major feedback, thanks for tackling this :)

@jbesraa (Contributor Author) commented Apr 16, 2024

Thanks @valentinewallace, addressed your comments.

@valentinewallace (Contributor):

You have several instances of trailing whitespace in the top commit; `git show` should show you where it is.

```rust
(None, true) => {
    // Claimed all funds but `balances_empty_height` is None. It is set to the
    // current block height.
    inner.balances_empty_height = Some(current_height);
```
Contributor:

should also log here.

G8XSU (Contributor):

We should add a log: "Will archive fully resolved ChannelMonitor for funding txo {}, after 4032 blocks" - the idea being it comes well before the actual archive and can serve as a warning.
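A sketch of what that log might look like at the point where the empty-balance height is first recorded (`log_info!` is LDK's logging macro; `logger` and `funding_txo` are assumed to be in scope at the call site, and the outpoint's Display formatting is assumed):

```rust
// Emitted when balances first go empty, i.e. ~4032 blocks before archival,
// so operators get advance warning that this monitor will be archived.
log_info!(logger,
    "Will archive fully resolved ChannelMonitor for funding txo {}, after 4032 blocks",
    funding_txo);
```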

TheBlueMatt (Collaborator):

Sorry I missed this one somehow.

```rust
///
/// Archiving the data in a backup location (rather than deleting it fully) is useful for
/// hedging against data loss in case of unexpected failure.
fn archive_persisted_channel(&self, channel_funding_outpoint: OutPoint);
```
G8XSU (Contributor):

At multiple places I see error handling of the form "got an error -> ignore". Should we consider returning a Result<(), io::Error> from here and ignoring the error where archive_persisted_channel is used?

It could also be helpful for users implementing their own Persist trait, even if we ignore it on our own.

TheBlueMatt (Collaborator):

Not sure why LDK swallowing a user-provided error is better than it being visible to the user that LDK will do nothing with their error.

G8XSU (Contributor):

Currently, users will be questioning "what to do with the error" while implementing this interface. It would have been nice in the sense that we take the decision for them, or handle the error if we can.

IMO, a user-facing interface should only be non-fallible if there is no way we can handle the error.

Anyway, not holding this PR for this; it is ok if we leave it like this.

TheBlueMatt (Collaborator):

> imo, a user facing interface should only be non-fallible if there is no way we can handle it.

I think I take a bit of a different view - in this case we're not "handling it", but rather we're ignoring it. Adding an error type would be confusing, as it would imply to the user that we'll do something with the error and that they need to correctly propagate errors, whereas that's not actually true - the error will be ignored, and if they have an error they should consider whether it is fatal for themselves.

```rust
let (_, _, chan_id, funding_tx) =
    create_announced_chan_between_nodes_with_value(&nodes, 0, 1, 1_000_000, 1_000_000);

nodes[0].node.close_channel(&chan_id, &nodes[1].node.get_our_node_id()).unwrap();
```
G8XSU (Contributor):

This test case starts with a closed channel; is it worth attempting archiving during other states, and ensuring we don't archive? (Could be helpful for catching a regression.)

TheBlueMatt (Collaborator):

There are a handful of other tests in this file that at least test that get_claimable_balance is correct in all the states we could enumerate, so it should at least be safe there; we could add more assertions that the monitor isn't prunable in those tests later, as sketched below.
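Such an assertion might look like this sketch (the `archive_fully_resolved_channel_monitors` and `list_monitors` names follow the PR's final ChainMonitor API, but the exact call site in the test harness is illustrative):

```rust
// With claimable balances still present, archiving should be a no-op:
// the monitor must remain in the live monitor set.
nodes[0].chain_monitor.chain_monitor.archive_fully_resolved_channel_monitors();
assert_eq!(nodes[0].chain_monitor.chain_monitor.list_monitors().len(), 1);
```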

  Checks if the monitor is fully resolved. A resolved monitor is one that has claimed all of
  its outputs and balances.

  This function returns true only if `get_claimable_balances` has been empty for at least
  2016 blocks.
@TheBlueMatt (Collaborator):
Went ahead and addressed the latest round of feedback and squashed - this is now the only remaining PR for the 0.0.123 RC

  Archives fully resolved channel monitors by adding them to a backup
  location and removing them from the primary storage & the monitor set.

  This is useful for pruning fully resolved monitors from the monitor
  set and primary storage so they are not reloaded on every new block
  connection.

  We also add a new function, `archive_persisted_channel`, to the
  `Persist` trait that writes the monitor to an archive storage and
  removes it from the primary storage.
@TheBlueMatt (Collaborator):
Addressed @valentinewallace's feedback.

@valentinewallace (Contributor) left a comment:

LGTM if @G8XSU is happy.

@G8XSU (Contributor) left a comment:

LGTM! Minor nits.

```rust
/// its outputs and balances (i.e. [`Self::get_claimable_balances`] returns an empty set).
///
/// This function returns true only if [`Self::get_claimable_balances`] has been empty for at least
/// 2016 blocks as an additional protection against any bugs resulting in spuriously empty balance sets.
```
G8XSU (Contributor):

nit: doc update to 4032 blocks



@TheBlueMatt (Collaborator):
Merging since this has ACKs, so we can cut the release; will open a followup in a bit.

@TheBlueMatt merged commit 195e666 into lightningdevkit:main on Apr 18, 2024 (15 of 16 checks passed).
@TheBlueMatt mentioned this pull request on Apr 18, 2024.
G8XSU added a commit that referenced this pull request Apr 25, 2024
Successfully merging this pull request may close issue #2884: Add method to delete channel state while online.

5 participants