Add transaction sync crate #1870
Conversation
Force-pushed 7b952c8 to c0e0a9e
```rust
		c.transaction_unconfirmed(&txid);
	}

	locked_watched_transactions.insert(txid);
```
I now included this essentially as a hotfix for the latest feedback from lightningdevkit/ldk-node#9. However, I'm still not sure whether we shouldn't rather go with either the "simply always confirm everything" approach or the "re-register outputs via `load_outputs_to_watch` upon start of the sync round" approach.
Putting this here means confirmed txs are never added to `watched_transactions`, and thus never confirmed.
Huh, I'm not sure why they would never be confirmed? Generally, all transactions will be added to `locked_watched_transactions` when the `Filter` queues are processed via `process_queues()`. Any transaction returned by `get_relevant_txids()` only needs to be monitored for re- and unconfirmations, in which case we call `transaction_unconfirmed` and re-add them to the list monitored for confirmation, i.e., `watched_transactions`. Am I missing something?
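To make the intended bookkeeping concrete, here is a minimal, self-contained sketch of the state transitions described above. `SyncState`, the `String` txids, and the method names are hypothetical simplifications for illustration, not the crate's actual types.

```rust
use std::collections::HashSet;

// Hypothetical, simplified model of the sync-state bookkeeping discussed
// above; the real crate uses `Txid` values and a different structure.
struct SyncState {
    // Transactions still awaiting confirmation.
    watched_transactions: HashSet<String>,
    // Previously confirmed transactions monitored for unconfirmation (reorgs).
    confirmed_transactions: HashSet<String>,
}

impl SyncState {
    fn new() -> Self {
        Self {
            watched_transactions: HashSet::new(),
            confirmed_transactions: HashSet::new(),
        }
    }

    // A watched transaction confirmed: stop watching it for confirmation and
    // start monitoring it for unconfirmation instead.
    fn on_confirmed(&mut self, txid: &str) {
        self.watched_transactions.remove(txid);
        self.confirmed_transactions.insert(txid.to_owned());
    }

    // A confirmed transaction was reorged out: after `transaction_unconfirmed`
    // is called on the `Confirm` implementors, re-add it to the watched set so
    // the next sync round monitors it for (re-)confirmation.
    fn on_unconfirmed(&mut self, txid: &str) {
        self.confirmed_transactions.remove(txid);
        self.watched_transactions.insert(txid.to_owned());
    }
}
```

The point of contention above is exactly the `on_unconfirmed` transition: without re-inserting the txid into `watched_transactions`, a reorged-out transaction would never be re-confirmed.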
Haven't debugged it yet, but we're using a copy-paste of this in Mutiny, and transactions would never get confirmed unless we added txs that were confirmed here:
https://github.com/BitcoinDevShop/mutiny-web-poc/blob/master/node-manager/src/chain.rs
This was our bug fix.
Mh, I'm still not clear on why that would happen when only re-adding the unconfirmed transactions.

I now cleaned up the approach a bit with 5d94cb6. Could you check whether this mitigates what you saw before?

Btw., I now also migrated to `futures::lock::Mutex`. Does this allow you to re-use the crate instead of copy-pasting?
Force-pushed 5ca85f9 to d696d42
Seems due to the dependencies of …
Force-pushed b9d8e26 to 5e27d3c
Cool! A few assorted notes:
No, I think while I covered a lot of cases, some edge cases still remain. I'm currently considering whether adding a …

I don't think adding a new field to calls "breaks the API" - isn't the point of JSON that we can add new fields and old clients will ignore them? Have you chatted with upstream yet?
Force-pushed 43a3bb9 to d5f43c7
Fair enough. Will look into opening a corresponding issue/PR upstream. If there's much push-back, we might have the above-mentioned variant as a fallback option.
Now opened Blockstream/electrs#52, which adds the … field. Unfortunately, it's not as straightforward to add a similar field to the …
I'm confused how that's sufficient here - can't the server do a reorg while we're making other queries and then reorg back to the original chain before we get to check it?
All other calls (only …). We may be able to save some of these calls if upstream also added a …
Force-pushed d5f43c7 to 5d94cb6
Force-pushed 2e7acb2 to 0e60c2a
Force-pushed 0e60c2a to 17eafed
@TheBlueMatt Rebased on main and now uses the recent release of ….

Also, I think the observation @andrei-21 recently made is correct: even in a reorg-then-reorg-back scenario, the tip would change (as it would only reorg back if there is a longer chain featuring a new tip). As we check tip consistency just before handing over any changes to LDK, we would detect any reorg that happened in the meantime. I therefore think the currently taken fail-and-restart approach should be safe as is.

Note that we'd currently also restart if we're still on the same chain but a new tip has been appended. If upstream added the tip hash field to the block and tx statuses, we might even be able to recover and continue the sync in this case, but this would mostly be a performance improvement, I think.
No, this is not correct. Bitcoin Core will expose the intermediary state. So while, yes, eventually we'll get to a new tip, the tip may jump back to the previous tip, as Bitcoin Core will drop the …
Force-pushed 05d35b4 to fa70746
Force-pushed 63e59fb to 0c677d4
Looks like CI is unhappy for 1.57. Otherwise, looks good. Feel free to rebase and leave any comments needing large changes to a follow-up.
Force-pushed f600374 to 58639e1
Force-pushed 58639e1 to dfb9a58
Squashed the fixups.
Force-pushed dfb9a58 to d9b2fc2
Force-pushed d79d2e2 to 45d4146
Squashed fixups without further changes.
Force-pushed 45d4146 to d458011
Squashed again, removing superfluous whitespace: `git diff-tree -U2 45d41467 d4580117`

```diff
diff --git a/lightning-transaction-sync/src/esplora.rs b/lightning-transaction-sync/src/esplora.rs
index 9f109cb1..807ef807 100644
--- a/lightning-transaction-sync/src/esplora.rs
+++ b/lightning-transaction-sync/src/esplora.rs
@@ -213,5 +213,5 @@ where
 	sync_state.watched_transactions.remove(&ctx.tx.txid());
-
+
 	for input in &ctx.tx.input {
 		sync_state.watched_outputs.remove(&input.previous_output);
```
This crate provides utilities for syncing LDK via the transaction-based `Confirm` interface. The initial implementation facilitates synchronization with an Esplora backend server.
Force-pushed d458011 to ce8b5ba
Discussed offline last week. I wasn't able to get the integration test to run on my corp machine. Not sure why but seems like the connection to bitcoind gets disconnected when waiting for blocks. Shouldn't be a blocker, though.
This crate provides utilities for syncing LDK via the transaction-based `Confirm` interface. The initial implementation facilitates synchronization with an Esplora backend server.

Upstreamed from lightningdevkit/ldk-node#9.
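For readers unfamiliar with the interface, here is a minimal, self-contained sketch of the shape of a `Confirm`-style consumer. The real trait lives in the `lightning` crate and uses `Txid`/`BlockHash` types and a richer `get_relevant_txids` signature; `ConfirmLike` and `Recorder` below are illustrative stand-ins only.

```rust
// Illustrative stand-in for the `lightning` crate's `Confirm` trait: a sync
// client feeds confirmation events to implementors and asks which
// transactions need monitoring for unconfirmation (reorgs).
trait ConfirmLike {
    fn transactions_confirmed(&mut self, block_hash: &str, txids: &[String], height: u32);
    fn transaction_unconfirmed(&mut self, txid: &str);
    fn best_block_updated(&mut self, block_hash: &str, height: u32);
    fn get_relevant_txids(&self) -> Vec<String>;
}

// A trivial implementor that just records confirmed txids.
struct Recorder {
    confirmed: Vec<String>,
}

impl ConfirmLike for Recorder {
    fn transactions_confirmed(&mut self, _block_hash: &str, txids: &[String], _height: u32) {
        self.confirmed.extend(txids.iter().cloned());
    }
    fn transaction_unconfirmed(&mut self, txid: &str) {
        self.confirmed.retain(|t| t != txid);
    }
    fn best_block_updated(&mut self, _block_hash: &str, _height: u32) {}
    fn get_relevant_txids(&self) -> Vec<String> {
        self.confirmed.clone()
    }
}
```

A sync client built on this shape queries the backend, then calls `transactions_confirmed`/`transaction_unconfirmed`/`best_block_updated` on all registered implementors once the chain tip has been verified as consistent.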