lightning-liquidity persistence: Add serialization logic for services and event queue #4059
Conversation
Codecov Report: ❌ Patch coverage is …

```
@@            Coverage Diff             @@
##             main    #4059      +/-   ##
==========================================
- Coverage   88.76%   88.59%   -0.17%
==========================================
  Files         176      178       +2
  Lines      129345   129876     +531
==========================================
+ Hits       114812   115064     +252
- Misses      11925    12192     +267
- Partials     2608     2620      +12
==========================================
```
We add `KVStore` to `LiquidityManager`, which will be used in the next commits. We also add a `LiquidityManagerSync` wrapper that wraps the `LiquidityManager` interface, which will soon become async due to usage of the async `KVStore`.
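To illustrate the wrapper pattern, here is a minimal sketch (not the PR's exact code): because the underlying store is a sync `KVStoreSync`, every future the async manager returns should be ready on the first poll, so the sync wrapper can resolve it without an executor. `LiquidityManagerSync`'s shape and `expect_ready` are illustrative names.

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, Waker};

// Sketch: a sync wrapper owning the async manager.
pub struct LiquidityManagerSync<LM> {
	inner: LM,
}

impl<LM> LiquidityManagerSync<LM> {
	pub fn new(inner: LM) -> Self {
		Self { inner }
	}

	// Poll once with a no-op waker and expect completion. `Waker::noop()`
	// requires Rust 1.85; a hand-rolled `dummy_waker` serves the same role
	// on older toolchains (see the commit note below).
	fn expect_ready<F: Future>(fut: F) -> F::Output {
		let mut fut = pin!(fut);
		let mut cx = Context::from_waker(Waker::noop());
		match fut.as_mut().poll(&mut cx) {
			Poll::Ready(out) => out,
			Poll::Pending => unreachable!("future backed by a sync store must resolve immediately"),
		}
	}
}
```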
This all LGTM. I have a small concern: maybe I'm being a little paranoid, but `read_lsps2_service_peer_states` and `read_lsps5_service_peer_states` pull every entry from the `KVStore` into memory with no limit. That could lead to unbounded state, exhausting memory and crashing the node. Maybe we can add a limit on how many entries we load into memory to protect against this DoS? Not sure how realistic this is, though. Maybe an attacker could have access to, or share, the same storage as the victim and dump effectively infinite data onto disk. In that scenario the victim would probably be vulnerable to other attacks too, but still.
Reading state from disk (currently) happens only on startup, so crashing wouldn't be the worst thing; we would simply fail to start up properly. Some even argue we should panic if we hit any IO errors at this point, to escalate to an operator. We could add some safeguard/upper bound, but I'm honestly not sure what it would protect against.
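For what it's worth, such a safeguard could be as simple as bounding the number of listed keys before deserializing anything. A sketch, with the constant and error text being hypothetical:

```rust
use std::io::{Error, ErrorKind};

// Hypothetical upper bound on peer-state entries we are willing to load.
const MAX_PEER_STATE_ENTRIES: usize = 100_000;

fn check_entry_count(keys: &[String]) -> Result<(), Error> {
	if keys.len() > MAX_PEER_STATE_ENTRIES {
		// Fail startup loudly rather than OOM-ing while reading state.
		return Err(Error::new(
			ErrorKind::InvalidData,
			"too many persisted peer-state entries",
		));
	}
	Ok(())
}
```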
Heh, well, if we assume the attacker has write access to our …
We add a simple `persist` call to `LSPS2ServiceHandler` that sequentially persists all the peer states under keys that encode their node ids.
We add a simple `persist` call to `LSPS5ServiceHandler` that sequentially persists all the peer states under keys that encode their node ids.
We add a simple `persist` call to `EventQueue` that persists it under an `event_queue` key.
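To illustrate the per-peer key scheme described in the two commits above, a sketch of deriving a `KVStore` key from the counterparty's node id; the PR's exact encoding and namespacing may differ:

```rust
use bitcoin::secp256k1::PublicKey;

// Hex-encode the 33-byte compressed public key to form the store key.
fn peer_state_key(counterparty_node_id: &PublicKey) -> String {
	counterparty_node_id
		.serialize()
		.iter()
		.map(|b| format!("{:02x}", b))
		.collect()
}
```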
.. this is likely only temporarily necessary, as we can drop our own `dummy_waker` implementation once we bump MSRV.
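For context, a hand-rolled `dummy_waker` typically looks like the sketch below; `Waker::noop()` (stabilized in Rust 1.85) makes it redundant once the MSRV allows:

```rust
use std::task::{RawWaker, RawWakerVTable, Waker};

// A waker that does nothing when woken: sufficient for polling futures that
// are known to complete on the first poll.
fn dummy_waker() -> Waker {
	fn clone(_: *const ()) -> RawWaker {
		RawWaker::new(std::ptr::null(), &VTABLE)
	}
	fn no_op(_: *const ()) {}
	static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
	// SAFETY: every vtable entry ignores the (null) data pointer.
	unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}
```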
We read any previously-persisted state upon construction of `LiquidityManager`.
```diff
 	entropy_source: ES, node_signer: NS, channel_manager: CM, chain_source: Option<C>,
-	chain_params: Option<ChainParameters>, service_config: Option<LiquidityServiceConfig>,
+	chain_params: Option<ChainParameters>, kv_store: Arc<dyn KVStore + Send + Sync>,
```
Why does the `KVStore` need to be `dyn` or an `Arc`?
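The alternative the review hints at would presumably look something like this sketch, with a placeholder trait standing in for the real `KVStore`: taking the store as a generic `Deref` target lets callers pick the pointer type instead of forcing a trait object behind an `Arc`.

```rust
use std::ops::Deref;

// Placeholder standing in for the real async `KVStore` trait.
pub trait KVStore {}

// Generic over any smart pointer that derefs to a KVStore implementation,
// e.g. `Arc<FilesystemStore>` or `&'static MyStore`; no `dyn` needed.
pub struct LiquidityManager<K: Deref>
where
	K::Target: KVStore,
{
	kv_store: K,
}
```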
```rust
/// Wraps [`LiquidityManager::new`].
pub fn new(
	entropy_source: ES, node_signer: NS, channel_manager: CM, chain_source: Option<C>,
	chain_params: Option<ChainParameters>, kv_store_sync: Arc<dyn KVStoreSync + Send + Sync>,
```
Same here, I see no reason why `KVStoreSync` needs to be `dyn` or an `Arc`? I also don't see why it needs to be `Send + Sync`?
```diff
@@ -45,6 +46,10 @@ pub struct LSPS2GetInfoRequest {
 	pub token: Option<String>,
 }
 
+impl_writeable_tlv_based!(LSPS2GetInfoRequest, {
```
Do we really want to have two ways to serialize all these types? Wouldn't it make more sense to just use the serde serialization we already have and wrap that so that it can't be misused?
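Since the LSPS message types already derive serde (they are JSON-RPC payloads), the suggested wrapping might look roughly like the following sketch; `SerdeWrapped` and the choice of JSON as the byte format are hypothetical:

```rust
use serde::{de::DeserializeOwned, Serialize};

// Newtype funneling all persistence through the existing serde
// serialization, so a second (TLV) encoding isn't needed.
struct SerdeWrapped<T>(T);

impl<T: Serialize> SerdeWrapped<T> {
	fn to_bytes(&self) -> Result<Vec<u8>, serde_json::Error> {
		serde_json::to_vec(&self.0)
	}
}

impl<T: DeserializeOwned> SerdeWrapped<T> {
	fn from_bytes(bytes: &[u8]) -> Result<Self, serde_json::Error> {
		serde_json::from_slice(bytes).map(SerdeWrapped)
	}
}
```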
```rust
) -> Pin<Box<dyn Future<Output = Result<(), lightning::io::Error>> + Send>> {
	let outer_state_lock = self.per_peer_state.read().unwrap();
	let mut futures = Vec::new();
	for (counterparty_node_id, peer_state) in outer_state_lock.iter() {
```
Huh? Why would we ever want to do a single huge persist pass and write every peer's state at once? Shouldn't we be doing this iteratively? Same applies in the LSPS2 service.
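The iterative shape suggested here might look like the following sketch (all names hypothetical): await each peer's write before starting the next, rather than building one future per peer up front.

```rust
use std::collections::HashMap;
use std::io;

// Hypothetical stand-in for an async `KVStore::write`.
async fn write_entry(_key: &str, _value: &[u8]) -> Result<(), io::Error> {
	Ok(())
}

// Persist peer states one at a time, keeping memory bounded and surfacing
// the first failure immediately.
async fn persist_peers_sequentially(
	peers: &HashMap<String, Vec<u8>>,
) -> Result<(), io::Error> {
	for (key, state) in peers {
		write_entry(key, state).await?;
	}
	Ok(())
}
```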
This is the second PR in a series of PRs adding persistence to `lightning-liquidity` (see #4058). As this is already >1000 LoC, I decided to put this up as an intermediary step instead of adding everything in one go.

In this PR we add the serialization logic for the LSPS2 and LSPS5 service handlers as well as for the event queue. We also have `LiquidityManager` take a `KVStore` towards which it persists the respective peer states, keyed by the counterparty's node id. `LiquidityManager::new` now also deserializes any previously-persisted state from that given `KVStore`. Note that so far we don't actually persist anything, as wiring up `BackgroundProcessor` to drive persistence will be part of the next PR (which will also make further optimizations, such as only persisting when needed, and persisting some important things in-line).

This also adds a bunch of boilerplate to account for both `KVStore` and `KVStoreSync` variants, following the approach we previously took with `OutputSweeper` etc.

cc @martinsaposnic