
LSPS2: JIT Channels #11

Closed

Conversation

johncantrell97 (Contributor):

This implements LSPS2: JIT Channels on top of LSPS0 transport layer.

johncantrell97 force-pushed the jit-channels branch 4 times, most recently from 419f445 to 31c650c on July 24, 2023 00:35
johncantrell97 force-pushed the jit-channels branch 2 times, most recently from 31c650c to b58f614 on July 26, 2023 19:22
tnull (Collaborator) commented Aug 16, 2023

@johncantrell97 Do you mind rebasing this? I'll give it a proper review round afterwards; let's land this initial draft soon so we have common ground to iterate on.

johncantrell97 (Contributor, Author):

> @johncantrell97 Do you mind rebasing this? I'll give it a proper review round afterwards; let's land this initial draft soon so we have common ground to iterate on.

Will do shortly. Have a few changes from implementing this into Mutiny I want to get in.

src/events.rs (outdated review thread, resolved)
src/jit_channel/channel_manager.rs (outdated review thread, resolved)
pending_messages: Arc<Mutex<Vec<(PublicKey, LSPSMessage)>>>,
pending_events: Arc<EventQueue>,
per_peer_state: RwLock<HashMap<PublicKey, Mutex<PeerState>>>,
channels_by_scid: RwLock<HashMap<u64, JITChannel>>,
Collaborator:

I'm confused: which channels are owned here and which ones are owned by the PeerState? Why do we have two objects holding JITChannel objects?

&self, counterparty_node_id: PublicKey, payment_size_msat: Option<u64>,
token: Option<String>, user_channel_id: u128,
) {
let channel_id = self.generate_channel_id();
Collaborator:

This is really confusing as these are not 'regular'/BOLT channel_ids. Can we name them differently here and elsewhere to avoid confusion?

token,
);

let mut per_peer_state = self.per_peer_state.write().unwrap();
Collaborator:

nit: Could be a bit more readable to refer to these as outer_state_lock/inner_state_lock?

}

impl RawOpeningFeeParams {
pub(crate) fn into_opening_fee_params(self, promise_secret: &[u8; 32]) -> OpeningFeeParams {
Collaborator:

Mh, it's kind of weird that we do the HMAC stuff in two different places. Couldn't we DRY this up by making the HMAC calculation a helper function, and have verification just compare its output with the promise we got?
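The DRY-up being suggested could look roughly like the sketch below. All names are hypothetical, and `DefaultHasher` is only a stand-in for the real HMAC-SHA256 the LSPS2 promise scheme uses; the point is the shape (one helper computes the promise, and verification recomputes and compares), not the crypto.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// NOTE: DefaultHasher is a stand-in for the real HMAC-SHA256; the point is
// only the shape of the DRY-up, not the crypto. `serialized_params` stands
// for the encoded OpeningFeeParams fields.
fn compute_promise(promise_secret: &[u8; 32], serialized_params: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    promise_secret.hash(&mut hasher);
    serialized_params.hash(&mut hasher);
    hasher.finish()
}

// Verification just recomputes via the same helper and compares, so the
// promise calculation lives in exactly one place.
fn verify_promise(promise_secret: &[u8; 32], serialized_params: &[u8], promise: u64) -> bool {
    compute_promise(promise_secret, serialized_params) == promise
}
```

With this, `into_opening_fee_params` and the buy-request handler would both call `compute_promise`, and neither reimplements the HMAC.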

src/jit_channel/msgs.rs (outdated review thread, resolved)
src/jit_channel/msgs.rs (outdated review thread, resolved)
src/transport/msgs.rs (outdated review thread, resolved)
match peer_state.pending_requests.remove(&request_id) {
Some(Request::Buy(buy_request)) => {
let mut channels_by_scid = self.channels_by_scid.write().unwrap();
channels_by_scid.insert(
Collaborator:

We insert here, but never remove this channel state? How does it end up in PeerState::channels_by_id again, if this is the plan?

tnull (Collaborator) left a comment:

Nice refactor! Did a high-level pass.

}

struct JITChannel {
impl JITChannelState {
fn versions_received(&self, versions: Vec<u16>) -> Self {
Collaborator:

Rather than having a dedicated failed state, should we have these state-transition methods return Result<Self, JITChannelError> and bubble up handleable/loggable errors? Also/alternatively, we could enforce that the most relevant state values are moved by taking self.
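A minimal sketch of that suggestion, with hypothetical state and error names: transitions consume `self` by value, so stale state cannot be reused, and failures bubble up as a `Result` for the caller to handle or log instead of being stored as a dedicated failed state.

```rust
// Hypothetical, simplified states/errors for illustration only.
const SUPPORTED_SPEC_VERSION: u16 = 1;

#[derive(Debug, PartialEq)]
enum JITChannelState {
    VersionsRequested,
    MenuRequested { version: u16 },
}

#[derive(Debug, PartialEq)]
enum JITChannelError {
    NoSupportedVersion,
    UnexpectedMessage,
}

impl JITChannelState {
    // Taking `self` by value moves the old state out; the caller decides
    // how to handle/log the error.
    fn versions_received(self, versions: Vec<u16>) -> Result<Self, JITChannelError> {
        match self {
            JITChannelState::VersionsRequested => {
                if versions.contains(&SUPPORTED_SPEC_VERSION) {
                    Ok(JITChannelState::MenuRequested { version: SUPPORTED_SPEC_VERSION })
                } else {
                    Err(JITChannelError::NoSupportedVersion)
                }
            }
            _ => Err(JITChannelError::UnexpectedMessage),
        }
    }
}
```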

@@ -43,17 +40,236 @@ use super::msgs::{

const SUPPORTED_SPEC_VERSION: u16 = 1;

#[derive(PartialEq)]
#[derive(PartialEq, Debug)]
enum JITChannelState {
Collaborator:

As the other one is Outbound, we should rename this to InboundJITChannelState.

@@ -62,67 +278,111 @@ struct JITChannel {
min_payment_size_msat: Option<u64>,
Collaborator:

We might want to group all the user-settable parameters into some kind of InboundJITChannelConfig object (and corresponding OutboundJITChannelConfig)?
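The grouping being proposed could look like the following sketch; the field sets and names are hypothetical, chosen only to show the user-settable knobs moving into a config object that the channel then embeds.

```rust
// Hypothetical shape of the suggested grouping: user-settable parameters
// live in a dedicated config object per direction.
#[derive(Debug, Clone, Default)]
struct InboundJITChannelConfig {
    payment_size_msat: Option<u64>,
    token: Option<String>,
}

#[derive(Debug, Clone, Default)]
struct OutboundJITChannelConfig {
    min_payment_size_msat: Option<u64>,
    max_payment_size_msat: Option<u64>,
}

#[derive(Debug)]
struct InboundJITChannel {
    id: u128,
    config: InboundJITChannelConfig,
    // Protocol state would live alongside, no longer mixed in with the
    // user-settable parameters.
}
```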

}
}

pub fn htlc_intercepted(
Collaborator:

I think none of the state-transition functions should be pub?

}

#[derive(Default)]
struct PeerState {
channels_by_id: HashMap<u128, JITChannel>,
inbound_channels_by_id: HashMap<u128, InboundJITChannel>,
outbound_channels_by_scid: HashMap<u64, OutboundJITChannel>,
Collaborator:

Why is this u64 and the other one u128?

}

#[derive(Default)]
struct PeerState {
channels_by_id: HashMap<u128, JITChannel>,
inbound_channels_by_id: HashMap<u128, InboundJITChannel>,
outbound_channels_by_scid: HashMap<u64, OutboundJITChannel>,
request_to_cid: HashMap<RequestId, u128>,
Collaborator:

If we need such a map at all, don't we need one for each of inbound/outbound? Or is it just needed for inbound channels? If so, can we include inbound in the field name?

}

#[derive(Default)]
struct PeerState {
channels_by_id: HashMap<u128, JITChannel>,
inbound_channels_by_id: HashMap<u128, InboundJITChannel>,
outbound_channels_by_scid: HashMap<u64, OutboundJITChannel>,
request_to_cid: HashMap<RequestId, u128>,
pending_requests: HashMap<RequestId, Request>,
Collaborator:

These are just inbound, right? Would be nice to include inbound/outbound in such state field names to make clear what they belong to.

request_to_cid: HashMap<RequestId, u128>,
pending_requests: HashMap<RequestId, Request>,
}

impl PeerState {
pub fn insert_channel(&mut self, channel_id: u128, channel: JITChannel) {
self.channels_by_id.insert(channel_id, channel);
pub fn insert_inbound_channel(&mut self, jit_channel_id: u128, channel: InboundJITChannel) {
Collaborator:

Maybe add_? Also, we can leave out the superfluous jit_channel_id parameter and just use channel.id, which would prevent us from ever inserting a channel under the wrong id.
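That change could be sketched as below (a trimmed, hypothetical `PeerState` with only the relevant field): the key comes from the channel itself, so a channel can never be inserted under the wrong id.

```rust
use std::collections::HashMap;

struct InboundJITChannel {
    id: u128,
}

#[derive(Default)]
struct PeerState {
    inbound_channels_by_id: HashMap<u128, InboundJITChannel>,
}

impl PeerState {
    // No separate id parameter: the map key is taken from the channel,
    // making an id/channel mismatch impossible.
    fn add_inbound_channel(&mut self, channel: InboundJITChannel) {
        self.inbound_channels_by_id.insert(channel.id, channel);
    }
}
```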

peer_manager: Mutex::new(None),
channel_manager,
}
}

fn map_scid_to_peer(&self, scid: u64, counterparty_node_id: PublicKey) {
Collaborator:

Not sure helper methods with fewer than 5 LoC improve readability. Maybe just inline it?

tnull mentioned this pull request on Sep 28, 2023
tnull (Collaborator) commented Oct 5, 2023

Closing as superseded by #20.

tnull closed this on Oct 5, 2023