Add an async resolution option to ChainAccess::get_utxo
#1980
Conversation
Force-pushed 29b4b1a to 18d1a3e
Nice! Did a first pass, mostly questions and nits.
Force-pushed 18d1a3e to 1f07cb6
Addressed all the feedback and pushed one additional commit.
Force-pushed 77ded8c to 52950dc
Pushed a fix for the MSRV.
LGTM, mostly just minor comments
Force-pushed 52950dc to 1f63203
Codecov Report: Base 90.91%, Head 90.83%; project coverage decreases by 0.08%.
@@ Coverage Diff @@
##            main    #1980      +/-  ##
==========================================
- Coverage   90.91%   90.83%   -0.08%
==========================================
  Files          99      100       +1
  Lines       52505    53061     +556
==========================================
+ Hits        47735    48199     +464
- Misses       4770     4862      +92
Force-pushed 61c310a to ca2f0b6
This is ready to squash. Will take another look then.
Force-pushed ca2f0b6 to ebffd47
Squashed without further changes.
LGTM once CI is fixed on the
Force-pushed ebffd47 to 43f1b37
Fixed per-commit compile, without diff from
Two post-ACK questions regarding the backpressure came to mind. Not sure if they are trivial.
Had to rebase after #2016 landed. Luckily this removed a number of
Force-pushed dbe2f63 to 4223d2e
The `chain::Access` trait (and the `chain::AccessError` enum) is a bit strange: it only really makes sense if users import it via the `chain` module, otherwise they're left with a trait just called `Access`. Worse, for bindings users it's always just called `Access`, in part because many downstream languages don't have a mechanism to import a module and then refer to it. Further, it's stuck dangling in the `chain` top-level mod.rs file, sitting in a module that doesn't use it at all (it's only used in `routing::gossip`). Instead, we give it its full name, `UtxoLookup` (and rename the error enum `UtxoLookupError`), and put it in a new `routing::utxo` module, next to `routing::gossip`.
This commit is deliberately move-only, though the code being moved is somewhat crufty.
`check_channel_announcement` had long lines, a (very-)stale TODO, and confusing variable assignment, all of which is cleaned up here.
For those operating in an async environment, requiring `ChainAccess::get_utxo` return information about the requested UTXO synchronously is incredibly painful. Requesting information about a random UTXO is likely to go over the network and likely to be a rather slow request. Thus, here, we change the return type of `get_utxo` to have both a synchronous and an asynchronous form. The asynchronous form requires the user construct an `AccessFuture` which they `clone` and pass back to us. Internally, an `AccessFuture` has an `Arc` to the `channel_announcement` message which we need to process. When the user completes their lookup, they call `resolve` on their `AccessFuture`, from which we pull the `channel_announcement` and then apply it to the network graph.
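The sync-or-async return shape described above could be sketched roughly as follows. This is a minimal illustration, not the actual rust-lightning API: the type and method names (`UtxoResult`, `AccessFuture`, `resolve`, `poll_result`) and the simplified `TxOut` are assumptions for this example.

```rust
use std::sync::{Arc, Mutex};

// Simplified stand-ins for the real types (assumptions, not the library's).
#[derive(Clone, Debug, PartialEq)]
pub struct TxOut { pub value: u64 }

#[derive(Debug)]
pub enum AccessError { UnknownTx }

// A lookup either resolves immediately (sync) or hands back a future the
// caller completes later (async).
pub enum UtxoResult {
    Sync(Result<TxOut, AccessError>),
    Async(AccessFuture),
}

// Cloneable handle: the user keeps a clone and calls `resolve` on it once
// their (slow, networked) UTXO lookup finishes.
#[derive(Clone)]
pub struct AccessFuture {
    state: Arc<Mutex<Option<Result<TxOut, AccessError>>>>,
}

impl AccessFuture {
    pub fn new() -> Self {
        AccessFuture { state: Arc::new(Mutex::new(None)) }
    }
    // Called by the user when the lookup completes.
    pub fn resolve(&self, res: Result<TxOut, AccessError>) {
        *self.state.lock().unwrap() = Some(res);
    }
    // Called internally to check whether the lookup has completed.
    pub fn poll_result(&self) -> Option<Result<TxOut, AccessError>> {
        self.state.lock().unwrap().take()
    }
}
```

In the real code the future also carries an `Arc` to the pending `channel_announcement`, so resolving it can directly drive graph application; that part is elided here.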
If we receive two `channel_announcement`s for the same channel at the same time, we shouldn't spawn a second UTXO lookup for an identical message. This likely isn't too rare - if we start syncing the graph from two peers at the same time, it isn't unlikely that we'll end up with the same messages around the same time. In order to avoid this we keep a hash map of all the pending `channel_announcement` messages by SCID and simply ignore duplicate message lookups.
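The deduplication by SCID could look something like the following sketch. The struct and method names are illustrative assumptions; the real implementation tracks the full pending message rather than a placeholder string.

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

// Tracks in-flight UTXO lookups keyed by short channel id (SCID) so a
// duplicate channel_announcement for the same channel is ignored.
#[derive(Default)]
pub struct PendingChecks {
    in_flight: HashMap<u64, String>, // SCID -> pending announcement (placeholder)
}

impl PendingChecks {
    // Returns true if a new lookup should be spawned; false if one is
    // already pending for this SCID (the duplicate is simply dropped).
    pub fn should_spawn_lookup(&mut self, scid: u64, announcement: String) -> bool {
        match self.in_flight.entry(scid) {
            Entry::Occupied(_) => false,
            Entry::Vacant(e) => {
                e.insert(announcement);
                true
            }
        }
    }
    // Remove the entry once the lookup resolves, returning the message.
    pub fn lookup_completed(&mut self, scid: u64) -> Option<String> {
        self.in_flight.remove(&scid)
    }
}
```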
If we have a `channel_announcement` which is waiting on a UTXO lookup before we can process it, and we receive a `channel_update` or `node_announcement` for the same channel or a node which is a part of the channel, we have to wait until the lookup completes until we can decide if we want to accept the new message. Here, we store the new message in the pending lookup state and process it asynchronously like the original `channel_announcement`.
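The park-and-replay behavior for messages arriving behind a pending lookup could be sketched as below; names and the `String` placeholder for messages are assumptions for illustration only.

```rust
use std::collections::HashMap;

// While a channel_announcement's UTXO lookup is in flight, later messages
// for the same channel are parked here and replayed once it resolves.
#[derive(Default)]
struct PendingChannelCheck {
    deferred_msgs: Vec<String>, // placeholder for channel_update / node_announcement
}

#[derive(Default)]
pub struct GossipBacklog {
    pending: HashMap<u64, PendingChannelCheck>,
}

impl GossipBacklog {
    // Record that an SCID has a lookup in flight.
    pub fn start_check(&mut self, scid: u64) {
        self.pending.entry(scid).or_default();
    }
    // Returns true if the message was deferred behind a pending lookup,
    // false if it can be processed immediately.
    pub fn defer_if_pending(&mut self, scid: u64, msg: String) -> bool {
        if let Some(check) = self.pending.get_mut(&scid) {
            check.deferred_msgs.push(msg);
            true
        } else {
            false
        }
    }
    // On lookup completion, drain the parked messages for processing.
    pub fn complete_check(&mut self, scid: u64) -> Vec<String> {
        self.pending.remove(&scid).map(|c| c.deferred_msgs).unwrap_or_default()
    }
}
```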
When we process gossip messages asynchronously we may find that we want to forward a gossip message to a peer after we've returned from the existing `handle_*` method. In order to do so, we need to be able to send arbitrary loose gossip messages back to the `PeerManager` via `MessageSendEvent`. This commit modifies `MessageSendEvent` in order to support this.
Force-pushed 4223d2e to 8529164
Gossip messages which were verified against the chain asynchronously should still be forwarded to peers, but must now go out via a new `P2PGossipSync` parameter in the `AccessResolver::resolve` method, allowing us to wire them up to the `P2PGossipSync`'s `MessageSendEventsProvider` implementation.
Now that we allow `handle_channel_announcement` to (indirectly) spawn async tasks which will complete later, we have to ensure it can apply backpressure all the way up to the TCP socket to ensure we don't end up with too many buffers allocated for UTXO validation. We do this by adding a new method to `RoutingMessageHandler` which allows it to signal if there are "many" checks pending and `channel_announcement` messages should be delayed. The actual `PeerManager` implementation thereof is done in the next commit.
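The signal described above amounts to a cheap "is the check queue high?" query that the peer layer can poll. A minimal sketch, assuming an atomic counter of pending checks and an illustrative threshold (the constant and struct name are not the library's):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Illustrative threshold: above this many pending UTXO checks, ask the
// PeerManager to delay reading further channel_announcements.
pub const MAX_PENDING_CHECKS: usize = 32;

pub struct RoutingHandler {
    pending_checks: AtomicUsize,
}

impl RoutingHandler {
    pub fn new() -> Self {
        RoutingHandler { pending_checks: AtomicUsize::new(0) }
    }
    // Bumped when an async UTXO lookup is spawned.
    pub fn check_started(&self) {
        self.pending_checks.fetch_add(1, Ordering::AcqRel);
    }
    // Dropped when a lookup resolves.
    pub fn check_finished(&self) {
        self.pending_checks.fetch_sub(1, Ordering::AcqRel);
    }
    // Polled by the peer layer to decide whether to pause socket reads.
    pub fn processing_queue_high(&self) -> bool {
        self.pending_checks.load(Ordering::Acquire) > MAX_PENDING_CHECKS
    }
}
```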
Now that the `RoutingMessageHandler` can signal that it needs to apply message backpressure, we implement it here in the `PeerManager`. There's not much complicated here, aside from noting that we need to add the ability to call `send_data` with no data to indicate that reading should resume (and track when we may need to make such calls when updating the routing-backpressure state).
When we apply the new gossip-async-check backpressure on peer connections, if a peer has never sent us a `channel_announcement` at all, we really shouldn't delay reading their messages. We do so by tracking, on a per-peer basis, whether they've sent us a `channel_announcement`, and resetting that state whenever we're not backlogged.
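The per-peer gating could be sketched as follows; field and method names are assumptions, and the backlog signal is passed in as a plain boolean for simplicity.

```rust
// Per-peer flag: only pause reads from peers that actually contributed to
// the UTXO-check backlog by sending us a channel_announcement.
pub struct PeerState {
    sent_channel_announcement: bool,
}

impl PeerState {
    pub fn new() -> Self {
        PeerState { sent_channel_announcement: false }
    }
    // Set when this peer sends us a channel_announcement.
    pub fn on_channel_announcement(&mut self) {
        self.sent_channel_announcement = true;
    }
    // Reset whenever the global check queue is no longer backlogged, so a
    // long-idle peer is not penalized for old announcements.
    pub fn on_backlog_cleared(&mut self) {
        self.sent_channel_announcement = false;
    }
    // Pause reading only if we're backlogged AND this peer sent announcements.
    pub fn should_pause_read(&self, queue_high: bool) -> bool {
        queue_high && self.sent_channel_announcement
    }
}
```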
...and switch the same in `lightning-net-tokio`
This ensures it's always written after we update the graph, no matter how we updated the graph.
Force-pushed 8529164 to 1f05575
Fixes #1975.