watchtower: integrate altruist watchtower and watchtower client #3133
Conversation
The Eye of Sauron casts its gaze upon the Lightning Network....
First pass review completed; I'll need to revisit some of the existing packages as well before my next pass. Stoked to set up this latest version on my nodes!
@@ -400,6 +405,13 @@ func (l *channelLink) Start() error {
	l.overflowQueue.Start()
	l.hodlQueue.Start()

	if l.cfg.TowerClient != nil {
		err := l.cfg.TowerClient.RegisterChannel(l.ChanID())
What's the impact of re-registering each time the link is created? Should this instead be done once for all active channels once the daemon starts up?
The first execution will generate a sweep pkscript and cause a db write; after that it always hits an in-memory cache protected by a mutex.
I did consider that, but by placing it here we ensure we only generate pkscripts for usable channels, and it's the most natural place from the perspective of lazily registering the channel without making assumptions about whether this is called before the link is added to the switch.
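A minimal sketch of the behavior described above (types and names are assumptions, not lnd's actual wtclient code): the first RegisterChannel call for a channel derives and persists a sweep pkscript, while later calls hit a mutex-protected in-memory cache, so re-registering on every link start is cheap.

```go
package sketch

import "sync"

// ChannelID stands in for lnwire.ChannelID; hypothetical for illustration.
type ChannelID [32]byte

type towerClient struct {
	mu           sync.Mutex
	sweepScripts map[ChannelID][]byte
	newPkScript  func() ([]byte, error) // derives and persists a fresh pkscript
}

// RegisterChannel is idempotent: only the first call per channel does the
// expensive work (pkscript derivation + db write); later calls return early.
func (c *towerClient) RegisterChannel(id ChannelID) error {
	c.mu.Lock()
	defer c.mu.Unlock()

	// Already registered: served from the in-memory cache, no db write.
	if _, ok := c.sweepScripts[id]; ok {
		return nil
	}

	// First registration: derive a sweep pkscript (the db write happens here).
	script, err := c.newPkScript()
	if err != nil {
		return err
	}
	c.sweepScripts[id] = script

	return nil
}
```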
lnd_test.go (outdated)
// testRevokedCloseRetributionAltruistWatchtower tests that Dave is able
// carry out retribution in the event that she fails in state where the remote
// commitment output has zero-value.
func testRevokedCloseRetributionAltruistWatchtower(net *lntest.NetworkHarness,
This test has a lot of overlap with the existing revocation related itests (step, db state copy, breach). I think much of this can be consolidated, similar to the assertDLPExecuted method I added when the SCB DLP tests were introduced.
Reading it a bit more, there's a bit less overlap than I originally suspected.
Could still refactor this slightly to not cause a large change when adding tests for private towers with rewards.
@Roasbeef ptal
Just a few nits from me -- excited to finally see all of the pieces working together!
Force-pushed from 9082808 to f9eda3a
@@ -561,6 +561,7 @@ func (c *TowerClient) backupDispatcher() {
		// Wait until we receive the newly negotiated session.
		// All backups sent in the meantime are queued in the
		// revoke queue, as we cannot process them.
	awaitSession:
It's possible for the client to not gracefully shut down if it's waiting here. In the select statement for the case where we already have an active session, we seem to handle the backup queue being shut down, but not here.
Relevant profile:
1 @ 0x1030e1f 0x10407f8 0x19b991d 0x105e331
# 0x19b991c github.com/lightningnetwork/lnd/watchtower/wtclient.(*TowerClient).backupDispatcher+0xc5c /Users/user/git/lnd/watchtower/wtclient/client.go:611
1 @ 0x1030e1f 0x10407f8 0x19bc7d5 0x105e331
# 0x19bc7d4 github.com/lightningnetwork/lnd/watchtower/wtclient.(*sessionNegotiator).negotiationDispatcher+0x134 /Users/user/git/lnd/watchtower/wtclient/session_negotiator.go:177
1 @ 0x1030e1f 0x10407f8 0x19bd102 0x105e331
# 0x19bd101 github.com/lightningnetwork/lnd/watchtower/wtclient.(*sessionNegotiator).negotiate+0x6a1 /Users/user/git/lnd/watchtower/wtclient/session_negotiator.go:236
1 @ 0x1030e1f 0x10412d9 0x10412af 0x1040f59 0x10751f5 0x19c2bc1 0x10742f3 0x19b8484 0x1aa9245 0x10742f3 0x1a85f34 0x1a4b702 0x1ab6b06 0x1030a2c 0x105e331
# 0x1040f58 sync.runtime_Semacquire+0x38 /usr/local/go/src/runtime/sema.go:56
# 0x10751f4 sync.(*WaitGroup).Wait+0x64 /usr/local/go/src/sync/waitgroup.go:130
# 0x19c2bc0 github.com/lightningnetwork/lnd/watchtower/wtclient.(*TowerClient).Stop.func1+0xa0 /Users/user/git/lnd/watchtower/wtclient/client.go:415
# 0x10742f2 sync.(*Once).Do+0xb2 /usr/local/go/src/sync/once.go:44
# 0x19b8483 github.com/lightningnetwork/lnd/watchtower/wtclient.(*TowerClient).Stop+0x53 /Users/user/git/lnd/watchtower/wtclient/client.go:387
# 0x1aa9244 github.com/lightningnetwork/lnd.(*server).Stop.func1+0x354 /Users/user/git/lnd/server.go:1312
# 0x10742f2 sync.(*Once).Do+0xb2 /usr/local/go/src/sync/once.go:44
# 0x1a85f33 github.com/lightningnetwork/lnd.(*server).Stop+0x53 /Users/user/git/lnd/server.go:1283
# 0x1a4b701 github.com/lightningnetwork/lnd.Main+0x1241 /Users/user/git/lnd/lnd.go:489
# 0x1ab6b05 main.main+0x25 /Users/user/git/lnd/cmd/lnd/main.go:14
# 0x1030a2b runtime.main+0x20b /usr/local/go/src/runtime/proc.go:200
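A minimal sketch of the fix this comment suggests (not the actual backupDispatcher code): while waiting for a newly negotiated session, also select on the client's quit signal so Stop() can interrupt the wait instead of blocking forever, as the profile above shows.

```go
package sketch

type session struct{}

// awaitNewSession blocks until either a session is negotiated or shutdown is
// requested. The second return value reports whether a session was received.
func awaitNewSession(newSessions <-chan *session,
	forceQuit <-chan struct{}) (*session, bool) {

	select {
	case s := <-newSessions:
		// A session was negotiated; resume dispatching backups.
		return s, true

	case <-forceQuit:
		// Shutdown requested while still negotiating; exit cleanly
		// rather than leaving the dispatcher goroutine stuck.
		return nil, false
	}
}
```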
yep, this is fixed locally by setting a ForceQuitDelay in the client's conf
Why should it still wait for the delay if there aren't any queued tasks though?
Force-pushed from f4c8f19 to 108cafb
This commit moves the newSweepPkScript function previously in the nursery to be a helper function within the server. Additionally, the function now returns a closure that satisfies the configuration interfaces of several other subsystems. As a result, the configuration sites contain much less boilerplate, as it's now encapsulated in the newSweepPkScriptGen helper.
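A rough sketch of the closure pattern this commit describes (identifiers are assumptions, not the exact lnd code): a generator captures the wallet dependency once and returns a func that the various subsystem configs can consume directly.

```go
package sketch

import "fmt"

// addressSource stands in for the wallet component able to derive fresh
// addresses; hypothetical interface for illustration.
type addressSource interface {
	NewSweepAddress() ([]byte, error)
}

// newSweepPkScriptGen returns a closure that derives a fresh sweep pkscript
// on demand. Because each subsystem config just stores the closure, the
// boilerplate lives in one place instead of at every configuration site.
func newSweepPkScriptGen(w addressSource) func() ([]byte, error) {
	return func() ([]byte, error) {
		script, err := w.NewSweepAddress()
		if err != nil {
			return nil, fmt.Errorf("unable to derive sweep pkscript: %v", err)
		}
		return script, nil
	}
}
```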
This commit fixes a bug that would cause us to request more sessions than needed from the session negotiator. With the current stat ticker, we'd ask the negotiator for a new session every 30s if session negotiation did not return before printing the stats. Now we avoid requesting extra sessions by jumping back into the select loop.
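A minimal sketch of the bug and fix described here (channel names are hypothetical): when the stat ticker fires while session negotiation is still in flight, the loop should jump straight back to waiting rather than fall through and ask the negotiator for yet another session.

```go
package sketch

import "time"

// awaitNegotiatedSession waits for the pending session, printing stats on
// each ticker fire without re-requesting a session.
func awaitNegotiatedSession(newSessions <-chan struct{},
	statTicker *time.Ticker, logStats func()) {

awaitSession:
	for {
		select {
		case <-newSessions:
			// Negotiation completed; carry on with this session.
			return

		case <-statTicker.C:
			// Only report stats; jumping back into the select loop
			// avoids queuing another (unneeded) session request.
			logStats()
			continue awaitSession
		}
	}
}
```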
This commit splits out the parameters that shape the justice transaction into their own struct, which is then embedded within the overarching wtpolicy.Policy, which may carry additional parameters describing the operation of the session. This is done as a preliminary step to support comparing sessions based on matching TxPolicy configurations. This prevents otherwise identical Policies from being counted as different when operational parameters like MaxUpdates differ, even though those make no material difference to the justice transaction.
This commit modifies the client's filtering when selecting from existing sessions. The new logic compares the configured TxPolicy with the TxPolicy of the candidate sessions, which has the effect of ignoring operational parameters such as MaxUpdates. Previously, changing MaxUpdates would cause the client to request a new session even if it had perfectly good slots available in a session with an equal TxPolicy.
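A rough sketch of the split and the new matching rule from the two commits above (field names are assumptions, not the exact wtpolicy structs): TxPolicy carries only the parameters that shape the justice transaction, Policy embeds it alongside operational knobs, and session reuse compares just the TxPolicy.

```go
package sketch

// TxPolicy holds only the parameters that influence how the justice
// transaction is constructed.
type TxPolicy struct {
	BlobType     byte
	SweepFeeRate uint64
	RewardBase   uint32
	RewardRate   uint32
}

// Policy embeds TxPolicy and adds operational parameters that do not change
// the justice transaction itself.
type Policy struct {
	TxPolicy
	MaxUpdates uint16
}

// matchesSession reports whether an existing session can be reused under the
// configured policy: only the TxPolicy must match, so tweaking MaxUpdates no
// longer forces negotiation of a brand new session.
func matchesSession(cfg, candidate Policy) bool {
	return cfg.TxPolicy == candidate.TxPolicy
}
```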
BOLT01 reserves the following ranges:
- Setup & Control (types 0-31)
- Channel (types 32-127)
- Commitment (types 128-255)
- Routing (types 256-511)
The wtwire messages now occupy 600-607.
Modifies the bbolt and mock tower databases to only accept blobs that are the expected size for the session's blob type. This prevents resource exhaustion attacks where a client may provide a disproportionately large encrypted blob, even though all supported blob types are of fixed size.
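A minimal sketch of the check this commit describes (helper and error names are hypothetical): since every supported blob type has a fixed ciphertext size, the tower can reject any state update whose encrypted blob deviates from that size before storing it.

```go
package sketch

import "errors"

var errBlobSize = errors.New("encrypted blob does not match expected size for blob type")

// validateBlob rejects blobs that are not exactly the size implied by the
// session's blob type, preventing a client from exhausting tower storage
// with disproportionately large payloads.
func validateBlob(encryptedBlob []byte, expectedSize int) error {
	if len(encryptedBlob) != expectedSize {
		return errBlobSize
	}
	return nil
}
```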
LGTM 🧿
This PR finalizes the bulk of the watchtower integration by supporting the ability to run lnd with a companion tower and also as a client that backs up justice transactions to a remote tower. This PR does not expose setting up a tower to receive rewards from the justice transaction, as there are some lingering todos to address before that can be safely enabled.
If you're interested in testing this PR, I'd suggest creating a tower listening on localhost and pointing the client towards your own daemon. This will allow you to monitor both endpoints at once and also delete db state on both ends if any bugs are encountered. The relevant config options are:
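As a rough illustration of such a localhost setup: the option names below are my assumption about lnd's watchtower flags around this release and may not match the PR's actual list, so treat them as placeholders rather than the definitive configuration.

```
# Tower node's lnd.conf (assumed option names):
watchtower.active=1
watchtower.listen=localhost:9911

# Client node's lnd.conf (assumed option names):
wtclient.private-tower-uris=<tower_pubkey>@localhost:9911
```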
For the time being, I'd suggest NOT pointing the client towards a remote tower (or listening publicly). There are a few small changes that may happen between now and the final merge, and having both client and tower locally makes it easy to wipe the dbs and proceed.