
Reconstruct ChannelManager::claimable_payments on startup #4462

Draft
valentinewallace wants to merge 12 commits into lightningdevkit:main from valentinewallace:2026-02-rebuild-claimables

Conversation

@valentinewallace (Contributor) commented Mar 5, 2026

We're working on reducing the requirement to persist the ChannelManager, by
reconstructing the manager from Channel{Monitor} data on read.

Here we stop relying on persistence of the claimable_payments and
pending_claiming_payments maps.

Based on #4467

Partially addresses #4286

@ldk-reviews-bot

👋 Hi! I see this is a draft PR.
I'll wait to assign reviewers until you mark it as ready for review.
Just convert it out of draft status when you're ready for review!

}

if !needs_post_close_monitor_update {
// If there's no need for a monitor update, just run the (possibly duplicative) completion
@valentinewallace (Contributor, Author) Mar 5, 2026

Actually this seems like an issue: we'll generate a PaymentClaimed event for this payment on each restart until the monitor is archived, IIUC...

Edit: addressed in #4467

Useful in the next commit because we'll need to repurpose the same handling
code for a new monitor update that is similar to ReleasePaymentComplete, but
for inbound payments to avoid regenerating redundant PaymentClaimed events.

When an Event::PaymentClaimed is processed by the user, we need to track that
so we don't keep regenerating the event redundantly on startup. Here we add a
monitor update for that.

Note that this update will only be generated for closed channels -- if the channel
is open, the inbound payment is pruned automatically when the HTLC is no longer
present in any unrevoked commitment transaction, which stops the redundant
event regeneration.

Upcoming commits will add generation and handling of this monitor update.

When an Event::PaymentClaimed is processed by the user, we need to track that
so we don't keep regenerating the event redundantly on startup. Here we add a
map to claims in this state, similar to the existing htlcs_resolved_to_user map.

Note that this map will only be updated for closed channels -- if the channel
is open, the inbound payment is pruned automatically when the HTLC is no longer
present in any unrevoked commitment transaction, which stops the redundant
event regeneration.

Upcoming commits will add the actual tracking that uses this map, as well
as generating the relevant monitor update to trigger this tracking.

When an Event::PaymentClaimed is processed by the user, we need to track that
so we don't keep regenerating the event redundantly on startup. Here we add an
event completion action for that.

Note that this action will only be generated for closed channels -- if the
channel is open, the inbound payment is pruned automatically when the HTLC is
no longer present in any unrevoked commitment transaction, which stops the
redundant event regeneration.

Similar to EventCompletionAction::ReleasePaymentCompleteChannelMonitorUpdate,
but for inbound payments.

Actual generation and handling of this completion action is added in an
upcoming commit.
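A completion action along these lines can be sketched as follows. This is purely illustrative: the variant names, field shapes, and helper function are assumptions for the sake of the example, not LDK's actual definitions.

```rust
// Sketch (invented names, not LDK's real API): an event completion action
// that, once the user has processed the corresponding event, tells us which
// payment's monitor update to release.

#[derive(Clone, Debug, PartialEq)]
enum EventCompletionAction {
    // Existing-style action: release a monitor update once an outbound
    // payment's completion event has been processed.
    ReleasePaymentCompleteChannelMonitorUpdate { payment_hash: [u8; 32] },
    // New action for inbound payments: release a monitor update once the
    // PaymentClaimed event has been processed, so a closed channel's
    // monitor stops reporting the claim as pending on restart.
    ReleasePaymentClaimedChannelMonitorUpdate { payment_hash: [u8; 32] },
}

// Called after the user finishes processing an event; returns the payment
// hash whose monitor update should now be released.
fn payment_hash_to_release(action: &EventCompletionAction) -> [u8; 32] {
    match action {
        EventCompletionAction::ReleasePaymentCompleteChannelMonitorUpdate { payment_hash }
        | EventCompletionAction::ReleasePaymentClaimedChannelMonitorUpdate { payment_hash } => {
            *payment_hash
        },
    }
}

fn main() {
    let action = EventCompletionAction::ReleasePaymentClaimedChannelMonitorUpdate {
        payment_hash: [7u8; 32],
    };
    assert_eq!(payment_hash_to_release(&action), [7u8; 32]);
}
```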

Previously, if we had an inbound payment on a closed channel that we started
claiming but did not end up removing from the commitment tx, we would generate
a PaymentClaimed event for the payment on every restart until the
channelmonitor was archived.

The past few commits have laid the groundwork to get rid of this redundancy --
here we now generate an event completion action for when the PaymentClaimed is
processed by the user, at which point a monitor update will be released that
tells the channel monitor that the user has processed the PaymentClaimed event.
The monitor tracks this internally such that the pending claim will no longer
be returned to the ChannelManager when reconstructing the set of mid-claim
inbound payments on startup, and thus no longer generate the redundant event.
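The end-to-end dedup flow in this commit message can be modeled in miniature. All types and fields here are invented for illustration; the real ChannelMonitor state is far richer.

```rust
// Toy model of the dedup flow (invented names, not LDK's real API): the
// monitor keeps a set of payment hashes whose PaymentClaimed event the user
// has already processed, and startup reconstruction skips those hashes.

use std::collections::HashSet;

#[derive(Default)]
struct ChannelMonitorSketch {
    // Mid-claim payments found in this (closed) channel's data.
    pending_claims: Vec<[u8; 32]>,
    // Claims whose PaymentClaimed event the user has processed.
    claimed_event_handled: HashSet<[u8; 32]>,
}

impl ChannelMonitorSketch {
    // Applied when the completion-action monitor update is released.
    fn mark_payment_claimed_event_handled(&mut self, payment_hash: [u8; 32]) {
        self.claimed_event_handled.insert(payment_hash);
    }

    // On startup, only claims the user hasn't already seen an event for are
    // returned to the ChannelManager, so no redundant event is generated.
    fn mid_claim_payments_for_startup(&self) -> Vec<[u8; 32]> {
        self.pending_claims
            .iter()
            .filter(|h| !self.claimed_event_handled.contains(*h))
            .cloned()
            .collect()
    }
}

fn main() {
    let mut monitor = ChannelMonitorSketch::default();
    monitor.pending_claims.push([1; 32]);
    monitor.pending_claims.push([2; 32]);
    // First restart: both claims come back and events are generated.
    assert_eq!(monitor.mid_claim_payments_for_startup().len(), 2);
    // The user processes PaymentClaimed for the first payment; the released
    // monitor update records that fact in the monitor.
    monitor.mark_payment_claimed_event_handled([1; 32]);
    // Subsequent restarts no longer regenerate that event.
    assert_eq!(monitor.mid_claim_payments_for_startup(), vec![[2; 32]]);
}
```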

Upcoming commits work towards deprecating persistence of these maps, in favor
of reconstructing them from Channel{Monitor} data.

We'll need this in the next commit, and it may be useful in the future to
avoid having a bunch of macro variants.

We're working on reducing the requirement to persist the ChannelManager, by
reconstructing the manager from Channel{Monitor} data on read.

Here we stop relying on persistence of the claimable_payments map. This is
trivial since we're building on previous work that stored inbound committed
HTLC onions in Channels as part of getting rid of forwarded HTLC map
persistence -- rather than persisting forwards explicitly, we started
persisting the corresponding inbound payment onions and putting them into
ChannelManager::decode_update_add_htlcs on read; the forwards are then
re-decoded and the forward_htlcs maps repopulated on the next call to
process_pending_htlc_forwards.

The process for claimable_htlcs is the same. The onions for these committed
inbound receives are put back into ChannelManager::decode_update_add_htlcs, and
will repopulate themselves into the claimable_payments map on the next call to
process_pending_htlc_forwards.
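The repopulation flow described above can be sketched with stand-in types. The names are invented for illustration; the real decode path parses the onion and routes the HTLC through the full state machine rather than inserting directly.

```rust
// Rough model of reconstruction (invented names): instead of persisting
// claimable_payments, the onions for committed inbound HTLCs are fed back
// into a pending-decode queue on read, and the next
// process_pending_htlc_forwards pass repopulates claimable_payments.

use std::collections::HashMap;

// Stand-in for an inbound committed HTLC onion stored in the Channel.
struct UpdateAddHtlc {
    payment_hash: [u8; 32],
    amount_msat: u64,
}

#[derive(Default)]
struct ChannelManagerSketch {
    decode_update_add_htlcs: Vec<UpdateAddHtlc>,
    claimable_payments: HashMap<[u8; 32], u64>,
}

impl ChannelManagerSketch {
    // On read: onions recovered from channel data go back into the queue.
    fn read(onions_from_channels: Vec<UpdateAddHtlc>) -> Self {
        ChannelManagerSketch {
            decode_update_add_htlcs: onions_from_channels,
            ..Default::default()
        }
    }

    // Next forwarding pass: re-decode and repopulate claimable_payments.
    fn process_pending_htlc_forwards(&mut self) {
        for htlc in self.decode_update_add_htlcs.drain(..) {
            // A real decode would parse the onion; here we just insert.
            *self.claimable_payments.entry(htlc.payment_hash).or_insert(0) += htlc.amount_msat;
        }
    }
}

fn main() {
    let onions = vec![UpdateAddHtlc { payment_hash: [9; 32], amount_msat: 1_000 }];
    let mut manager = ChannelManagerSketch::read(onions);
    // Nothing is claimable immediately after read...
    assert!(manager.claimable_payments.is_empty());
    // ...until the next forwarding pass repopulates the map.
    manager.process_pending_htlc_forwards();
    assert_eq!(manager.claimable_payments.get(&[9; 32]), Some(&1_000));
}
```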

In an upcoming commit, we'll be iterating over the payment preimage claims in
in-flight monitor updates as well as monitors, so update the loop now as a
prefactor.

Previously, when we restart with the ChannelManager behind a ChannelMonitor,
and that monitor has already claimed an HTLC but the manager thinks the HTLC
is yet-to-be-claimed, we will generate a redundant preimage monitor update.

That was fine since the manager being behind the monitor is rare, but in
upcoming commits we are moving towards not persisting the ChannelManager's
pending_claiming_payments map, and instead rebuilding its contents from the
monitors.

As a consequence of these changes, our existing safeguard that prevents
generating redundant preimage monitor updates in the case that the manager is
*not* behind the monitor will no longer work, because we can no longer check
the pending_claiming_payments map to short circuit the logic that generates the
(possibly redundant) preimage monitor updates.

So here we implement more rigorous checks to see whether a preimage update for
a closed channel is redundant or not before generating it, which prevents the
current manager-behind cases as well as upcoming cases when we start
reconstructing the pending_claiming_payments map.
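A minimal model of such a redundancy check might look like this. All names are hypothetical; the real check inspects actual monitor state and in-flight monitor updates rather than bare sets of preimages.

```rust
// Sketch (invented names): before replaying a preimage monitor update for a
// closed channel, check whether the monitor -- or an update still in flight
// to the persister -- already carries the preimage, and skip it if so.

use std::collections::HashSet;

struct ClosedChannelClaimState {
    // Preimages the persisted monitor already contains.
    monitor_preimages: HashSet<[u8; 32]>,
    // Preimages carried by monitor updates still in flight.
    in_flight_update_preimages: HashSet<[u8; 32]>,
}

// Returns true only if the preimage still needs to be passed to the monitor.
fn preimage_update_needed(state: &ClosedChannelClaimState, preimage: &[u8; 32]) -> bool {
    !state.monitor_preimages.contains(preimage)
        && !state.in_flight_update_preimages.contains(preimage)
}

fn main() {
    let mut state = ClosedChannelClaimState {
        monitor_preimages: HashSet::new(),
        in_flight_update_preimages: HashSet::new(),
    };
    // The monitor doesn't know the preimage yet: an update is needed.
    assert!(preimage_update_needed(&state, &[5; 32]));
    // The monitor already claimed with it: replaying would be redundant.
    state.monitor_preimages.insert([5; 32]);
    assert!(!preimage_update_needed(&state, &[5; 32]));
    // An update still in flight also counts: no need to generate another.
    state.in_flight_update_preimages.insert([6; 32]);
    assert!(!preimage_update_needed(&state, &[6; 32]));
}
```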

We are working on reducing reliance on regular persistence of the
ChannelManager, and instead reconstructing (at least parts of) it using
ChannelMonitor data on startup.

As part of this, here we stop persisting the manager's
pending_claiming_payments map, and reconstruct it from ChannelMonitor data.
Most of the logic to do this was already in place, for cases where the
manager has fallen behind the monitors. The only missing piece was checking
in-flight monitor updates, in case they contain preimages that tell us we're
mid-claim.

@valentinewallace force-pushed the 2026-02-rebuild-claimables branch from da6733a to f9e4f7c on March 6, 2026 at 22:12
@codecov

codecov bot commented Mar 7, 2026

Codecov Report

❌ Patch coverage is 90.93199% with 36 lines in your changes missing coverage. Please review.
✅ Project coverage is 86.10%. Comparing base (90b79e4) to head (f9e4f7c).
⚠️ Report is 9 commits behind head on main.

Files with missing lines Patch % Lines
lightning/src/ln/channelmanager.rs 90.38% 29 Missing and 6 partials ⚠️
lightning/src/chain/channelmonitor.rs 95.23% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4462      +/-   ##
==========================================
+ Coverage   85.97%   86.10%   +0.13%     
==========================================
  Files         159      159              
  Lines      104722   105310     +588     
  Branches   104722   105310     +588     
==========================================
+ Hits        90030    90677     +647     
+ Misses      12191    12121      -70     
- Partials     2501     2512      +11     
Flag Coverage Δ
tests 86.10% <90.93%> (+0.13%) ⬆️

