[watchtower/lookout]: on-chain breach monitoring #2124

Merged
merged 13 commits into lightningnetwork:master from cfromknecht:wtlookout Nov 14, 2018

Conversation

@cfromknecht
Collaborator

cfromknecht commented Oct 30, 2018

This PR introduces the watchtower/lookout package, which handles the responsibility of monitoring the chain for possible breaches and responding by decrypting and broadcasting any justice transactions that its clients had previously uploaded. Together with the watchtower/server, which receives and stores encrypted blobs from clients, this represents the second primary service enabling the tower to act on behalf of its clients.

At a high level, the lookout service receives input from two sources: block events and its database. Incoming state updates are continually written to the database, and are made available to the lookout as soon as the tower successfully persists the encrypted blob. As new blocks come in, the tower searches for any breach hints matching the txid prefixes contained in the newly found block. If any matches are generated from the query, the lookout service will dispatch an attempt to decrypt and sweep the transaction on behalf of the user.
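The match flow above can be sketched as follows. This is a minimal illustration, assuming a breach hint is a fixed-size txid prefix; the 16-byte size, the helper names, and the in-memory index standing in for the database query are all assumptions, not the PR's actual types:

```go
package main

import "fmt"

// BreachHint is assumed here to be a fixed 16-byte prefix of a txid.
type BreachHint [16]byte

// hintFromTxid derives the hint the tower indexes blobs under.
func hintFromTxid(txid [32]byte) BreachHint {
	var h BreachHint
	copy(h[:], txid[:16])
	return h
}

// processBlock derives a hint per txid and returns those present in the
// tower's hint index (standing in for the FindMatches database query).
// For each match, decryption and broadcast of the justice tx would follow.
func processBlock(txids [][32]byte, index map[BreachHint][]byte) map[BreachHint][]byte {
	matches := make(map[BreachHint][]byte)
	for _, txid := range txids {
		hint := hintFromTxid(txid)
		if encBlob, ok := index[hint]; ok {
			matches[hint] = encBlob
		}
	}
	return matches
}

func main() {
	breachTxid := [32]byte{0xde, 0xad, 0xbe, 0xef}
	index := map[BreachHint][]byte{
		hintFromTxid(breachTxid): []byte("encrypted-blob"),
	}
	matches := processBlock([][32]byte{breachTxid, {0x01}}, index)
	fmt.Println(len(matches)) // 1
}
```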

Some slight modifications have been made to the watchtower/blob package, most notably:

  • introducing support for padded sweep addresses
  • properly generating DER-encoded signatures for use in breach input witness stacks

Some open questions:

  • What is the ideal reward function for the tower? Currently only a proportional cut is taken, though perhaps a base + proportional is more suitable. As such, some edge cases around how to handle dust outputs are not accounted for in this PR. The intention is to revisit these edge cases after some feedback and when finishing up full persistence in the watchtower/wtdb package.
  • Should we apply any sort of input/output sorting (a la BIP69) to the justice transactions? It is certainly possible, though I'd be interested in hearing thoughts on whether it's necessary, considering the justice transactions are somewhat identifiable anyway.
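For concreteness, the proportional cut mentioned in the first question might compute as below. The base + proportional variant, the millionths convention, and the function name are illustrative assumptions, not the PR's actual API:

```go
package main

import "fmt"

// computeReward sketches a base + proportional reward: the tower takes a
// flat baseReward plus rewardRate millionths of the swept amount. All
// values are in satoshis; dust handling is deliberately out of scope here,
// mirroring the open question in the PR.
func computeReward(totalAmt, baseReward, rewardRate int64) int64 {
	return baseReward + totalAmt*rewardRate/1000000
}

func main() {
	// 1 BTC swept at a 1% proportional rate (10,000 millionths), no base.
	fmt.Println(computeReward(100000000, 0, 10000)) // 1000000
}
```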

Builds on #2122

@cfromknecht cfromknecht force-pushed the cfromknecht:wtlookout branch from dc02c8e to b701709 Oct 31, 2018

@cfromknecht cfromknecht force-pushed the cfromknecht:wtlookout branch from b701709 to d9dc2b9 Oct 31, 2018

watchtower/blob/justice_kit: return DER signatures
This commit fixes an issue with the witness stack
construction for to-local and to-remote inputs
that caused the justice kit to return fixed-size,
64-byte signatures. The correct behavior is to
return DER-encoded signatures so that they will
properly verify on the network, since the
consensus rules won't accept the fixed-size
variant.

@cfromknecht cfromknecht force-pushed the cfromknecht:wtlookout branch from d9dc2b9 to 4e27940 Nov 1, 2018

@Roasbeef
Member

Roasbeef left a comment

One small step for lnd, one giant leap for the Lightning Network! ⚡️🚀

I've completed an initial pass, will likely do another to cover the set of unit tests added. The only major comment concerns the nonce generation. I'm a bit wary of using a sequence-based nonce, as it puts a lot of responsibility on the client to ensure it doesn't send with a duplicate nonce, lest it leak plaintext blobs.

// that the client will not reuse the same (key, nonce) pair, since we must
// accept that the keys can be reused.
func (s SessionID) NonceFromSeqNum(seqnum uint16) []byte {
nonce := make([]byte, blob.NonceSize)
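For context, the sequence-based derivation under discussion amounts to something like the sketch below: the same (key, nonce) pair recurs if a seqnum is ever replayed under the same key. The 12-byte size matches standard chacha20-poly1305 nonces; the exact byte layout is an assumption:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// nonceFromSeqNum sketches a deterministic nonce: the 12-byte nonce is
// zero except for the big-endian seqnum in its final two bytes. Reusing
// a seqnum under the same key therefore reuses the nonce.
func nonceFromSeqNum(seqnum uint16) []byte {
	nonce := make([]byte, 12)
	binary.BigEndian.PutUint16(nonce[10:], seqnum)
	return nonce
}

func main() {
	fmt.Printf("%x\n", nonceFromSeqNum(1)) // seqnum occupies the last two bytes
}
```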

Roasbeef (Member) Nov 1, 2018

API design level comment: blob isn't the most descriptive name for a package. It's possible to ensure isolation at the unit test level, yet still intermingle components within a unified package.

cfromknecht (Collaborator) Nov 1, 2018

not sure what you mean by the last comment, can you explain?

// The nonce is derived in this manner to protect confidentiality of the
// payloads when backing up state updates when multiple backups will be made.
// Since the encryption key will always be equal to the txid, we must take care
// not to reuse the same nonce when performing multiple backups of the same

Roasbeef (Member) Nov 1, 2018

What measures will we take on the client side in order to ensure this never happens? It seems certain fault cases on the client side could possibly cause it to re-encrypt with the same nonce sequence.

As an alternative, we can use a randomized 192-bit (24-byte) nonce. I'm planning to go this route with the encrypted static backups. The chacha API in x/crypto also exposes a variant with larger nonces: https://godoc.org/golang.org/x/crypto/chacha20poly1305#NewX

cfromknecht (Collaborator) Nov 1, 2018

Nice! Wasn't aware that the 192-bit variant was exposed; I agree the randomization is the safer approach.

cfromknecht (Collaborator) Nov 1, 2018

with the existing nonce generation, the protections would be to use truly ephemeral keys (instead of HD keys) and to persist the encrypted update for each session/seqnum so it can be resent on startup. arguably the first should still be done with a randomized nonce, though it does simplify the latter at the expense of tower storage


// BlockFetcher supports the ability to fetch blocks.
type BlockFetcher interface {
// FetchBlockByHash fetchs the block given the target block hash.

Roasbeef (Member) Nov 1, 2018

Method name doesn't match the godoc comment.

cfromknecht (Collaborator) Nov 1, 2018

Fixed

// provided for each transaction included at the provided block epoch.
// If any matches are found, they will be returned along with encrypted
// blobs so that justice can be exacted.
FindMatches(*chainntnfs.BlockEpoch, []wtdb.BreachHint) ([]wtdb.Match, error)

Roasbeef (Member) Nov 1, 2018

Why not give it the entire block? As is, the block epochs just contain height+hash. If we give this method the entire block, then we don't require it to do any network I/O, and also it can be extended to batch search blocks.

On the second parameter: doesn't the database know the entire set of breach hints? So we can simplify to just take a set of blocks?

To be correct, a breach hint should be provided for each transaction included at the provided block epoch.
So we call this once we already know that the set of transactions has a breach within the block?

cfromknecht (Collaborator) Nov 1, 2018

This is just a database query; there's no network I/O here. The breach hints passed in are constructed from the block, though they could be constructed from multiple blocks.

The block epoch is passed just so that the database can record the last processed block height, so after startup it knows where to begin searching. i'm thinking about just making this a separate db call though.

*chainntnfs.BlockEpoch) (*chainntnfs.BlockEpochEvent, error)
}

// EpochSource delivers an in-order stream of blocks as they are seen on the

Roasbeef (Member) Nov 1, 2018

Why not just use something like:

Suggested change
// EpochSource delivers an in-order stream of blocks as they are seen on the
type EpochSource interface {
RegisterBlockEpochNtfn(*BlockEpoch) (*BlockEpochEvent, error)
Start() error
Stop() error
}

So a simplified interface that includes only the methods of the greater ChainNotifier interface that we actually care about.
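The narrowing suggested here relies on Go's implicit interface satisfaction: any concrete notifier that has these three methods satisfies the smaller interface with no adapter code. A sketch with illustrative stub types (the real ones live in chainntnfs):

```go
package main

import "fmt"

// Stub stand-ins for the chainntnfs types referenced above.
type BlockEpoch struct{ Height int32 }
type BlockEpochEvent struct{ Epochs chan *BlockEpoch }

// EpochSource declares only the subset of the greater ChainNotifier
// interface that the lookout actually cares about.
type EpochSource interface {
	RegisterBlockEpochNtfn(*BlockEpoch) (*BlockEpochEvent, error)
	Start() error
	Stop() error
}

// fullNotifier stands in for a concrete ChainNotifier with many more
// methods; it satisfies EpochSource implicitly.
type fullNotifier struct{}

func (fullNotifier) RegisterBlockEpochNtfn(*BlockEpoch) (*BlockEpochEvent, error) {
	return &BlockEpochEvent{Epochs: make(chan *BlockEpoch, 1)}, nil
}
func (fullNotifier) Start() error { return nil }
func (fullNotifier) Stop() error  { return nil }

func main() {
	var src EpochSource = fullNotifier{} // no adapter needed
	ev, err := src.RegisterBlockEpochNtfn(nil)
	fmt.Println(ev != nil, err == nil) // true true
}
```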

cfromknecht (Collaborator) Nov 1, 2018

not sure i follow, are you suggesting we'd pass in the notifier? at this point, it will have already been started and its lifecycle would be tied to the watchtower

cfromknecht (Collaborator) Nov 1, 2018

i removed the epoch stream entirely, so that block fetching happens w/in the lookout. i think the testability is roughly equivalent, but spares a lot of code duplication!

// network.
PublishTx func(*wire.MsgTx) error

// TODO(conner) add DB methods for spend tracking

Roasbeef (Member) Nov 1, 2018

Spend tracking?

cfromknecht (Collaborator) Nov 1, 2018

just a reminder to add persistent tracking of published justice txns and then monitor for spends to see if ours confirms or not. I've updated the comment to be more precise

return err
}

// TODO(conner): register for spend and remove from db after

Roasbeef (Member) Nov 1, 2018

Seems important? Otherwise, it'll keep attempting to re-broadcast on restart?

cfromknecht (Collaborator) Nov 1, 2018

we won't rebroadcast at the moment, but these should be implemented together in a following PR


log.Infof("Starting lookout")

startEpoch, err := l.cfg.DB.GetLastMatchedEpoch()

Roasbeef (Member) Nov 1, 2018

So it'll rescan all blocks since the last time a breach happened? Seems a bit wasteful; instead we can rescan blocks based on the session height of a client to ensure that we don't miss any breaches that might've happened while they were down, just in case they missed a blob send.

cfromknecht (Collaborator) Nov 1, 2018

it'll rescan since the last epoch that caused a db query

select {
case epoch := <-epochs:

// TODO(conner): detect skipped epochs

Roasbeef (Member) Nov 1, 2018

Similar Q as earlier: we shouldn't need to?

cfromknecht (Collaborator) Nov 1, 2018

Fixed

// Iterate over the transactions contained in the block, deriving a
// breach hint for each transaction and constructing an index mapping
// the hint back to it's original transaction.
var hintToTx = make(map[wtdb.BreachHint]*wire.MsgTx, numTxnsInBlock)

Roasbeef (Member) Nov 1, 2018

Why vars? (here and below)

cfromknecht (Collaborator) Nov 1, 2018

Fixed

@cfromknecht cfromknecht force-pushed the cfromknecht:wtlookout branch from 4e27940 to bb8cff5 Nov 1, 2018

cfromknecht added some commits Nov 1, 2018

watchtower/blob/justice_kit: use randomized 192-bit nonce
This commit modifies the blob encryption scheme to
use chacha20-poly1305 with a randomized 192-bit nonce.
The previous approach used a deterministic nonce scheme,
which is being replaced to simplify the requirements of
a correct implementation. As a result, each payload
gains an additional 24 bytes prepended to the ciphertext.
watchtower/blob/justice_kit_test: remove external nonce
The nonce is now passed in as the prefix to the
ciphertext, and is generated randomly in calls
to Encrypt.

@cfromknecht cfromknecht force-pushed the cfromknecht:wtlookout branch from bb8cff5 to 8bc8964 Nov 1, 2018

@Roasbeef
Member

Roasbeef left a comment

LGTM 🧬

@Roasbeef Roasbeef merged commit 2f0bc5c into lightningnetwork:master Nov 14, 2018

2 checks passed

continuous-integration/travis-ci/pr: The Travis CI build passed
coverage/coveralls: Coverage increased (+0.1%) to 56.034%