
Add support for phantom node payments #1199

Merged: 11 commits, Feb 15, 2022

Conversation

valentinewallace (Contributor) commented Dec 1, 2021

Based on #1177

Some open questions left as TODOs

A phantom node payment is a payment made to a phantom invoice: an invoice that can be paid to
any one of multiple nodes. This works because the invoice's route hints are encoded such that
LDK will recognize an incoming payment as destined for a phantom node and claim the payment
itself, without ever needing to forward to this fake node.

Phantom node payments are useful for load balancing across multiple LDK nodes. They also
provide some fault tolerance, because payers will automatically retry the other provided
nodes if one node goes down.

Note that multi-path payments are not supported in phantom invoices for security reasons.
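
For readers new to the mechanism, a phantom invoice's route hints look like ordinary two-hop hints: a hop over a real channel into a real node, then a fake hop "into" the phantom node via a fake SCID. A minimal sketch, assuming roughly the 0.0.104-era module locations (RouteHint, RouteHintHop, and RoutingFees have since moved between modules) and illustrative fee values:

```rust
use bitcoin::secp256k1::PublicKey;
use lightning::ln::channelmanager::MIN_CLTV_EXPIRY_DELTA;
use lightning::routing::network_graph::RoutingFees;
use lightning::routing::router::{RouteHint, RouteHintHop};

// Sketch (not the PR's literal code): one participating real node's hint.
fn phantom_route_hint(
    counterparty_pubkey: PublicKey, // payer-facing end of the real channel
    real_scid: u64,
    real_node_pubkey: PublicKey,
    phantom_scid: u64, // from ChannelManager::get_phantom_scid
) -> RouteHint {
    let no_fees = RoutingFees { base_msat: 0, proportional_millionths: 0 };
    RouteHint(vec![
        // Hop over the real channel into the real node.
        RouteHintHop {
            src_node_id: counterparty_pubkey,
            short_channel_id: real_scid,
            fees: no_fees,
            cltv_expiry_delta: MIN_CLTV_EXPIRY_DELTA,
            htlc_minimum_msat: None,
            htlc_maximum_msat: None,
        },
        // The "channel" from the real node to the phantom node. It does not
        // exist; the real node recognizes phantom_scid and claims the HTLC.
        RouteHintHop {
            src_node_id: real_node_pubkey,
            short_channel_id: phantom_scid,
            fees: no_fees,
            cltv_expiry_delta: MIN_CLTV_EXPIRY_DELTA,
            htlc_minimum_msat: None,
            htlc_maximum_msat: None,
        },
    ])
}
```

The payer routes toward the phantom pubkey as the final hop; the real node recognizes the phantom SCID on the incoming HTLC and claims it rather than forwarding.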

jkczyz (Contributor) commented Dec 15, 2021

At a high-level, could you describe what is meant by "multi-node receive" and what properties it is meant to achieve vs a single node?

valentinewallace (Contributor, Author)

Will push docs updates and some more tests next, wanted to get the code out there.

valentinewallace force-pushed the 2021-11-phantom-node branch 2 times, most recently from 8aa6743 to d178795, January 5, 2022
valentinewallace (Contributor, Author)

> At a high-level, could you describe what is meant by "multi-node receive" and what properties it is meant to achieve vs a single node?

Updated the docs, hopefully this is answered!

valentinewallace marked this pull request as ready for review January 5, 2022
TheBlueMatt (Collaborator) left a comment

One high level comment.

```rust
amt_to_forward: next_hop_data.amt_to_forward,
outgoing_cltv_value: next_hop_data.outgoing_cltv_value,
})
let phantom_secret_res = self.keys_manager.get_phantom_secret(short_channel_id);
```
Collaborator:

Hmmmmmmm, I do wonder if we should care about "any third party can identify if a hop is a phantom node or not". If it's too hard to do, we can probably call it a wash and not bother, but I'd ideally like to at least try to obfuscate it first. I think it should be pretty reasonable here - instead of decoding the second onion here immediately, we instead push the HTLC through PendingHTLCStatus::Forward first, which means we wait until we've gotten it fully committed in the channel and then do this second-step decode in process_pending_htlc_forwards, where we look up the SCID in the loop.

valentinewallace (Contributor, Author):

Added passing it through process_pending_htlc_forwards rather than decoding the second onion immediately.

"any third party can identify if a hop is a phantom node or not"

To be clear, this is a timing attack?

Collaborator:

> To be clear, this is a timing attack?

Yes. Hopefully pushing it through the normal forward-batching, in theory, implies timing is similar across forward-one-hops and no-forwards, but it's probably still wide enough just with the skipping of network RTTs and disk updates that we should tell users they can expect a counterparty to detect the difference.
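
To make the mitigation concrete, a hypothetical simplification of the flow being described; all names here are illustrative, not the actual ChannelManager internals:

```rust
// Hypothetical sketch: the phantom HTLC rides through the same pending-
// forwards batch as real forwards, and the inner onion is only peeled
// inside the batched processing, so accept-time handling looks identical.
enum PendingHtlc {
    ForwardReal { outgoing_scid: u64 },
    MaybePhantom { scid: u64, inner_onion: Vec<u8> },
}

fn process_pending_htlc_forwards(batch: Vec<PendingHtlc>) {
    for htlc in batch {
        match htlc {
            // Normal case: forward over the real channel for this SCID.
            PendingHtlc::ForwardReal { outgoing_scid } => forward(outgoing_scid),
            // Phantom case: only now do we look the SCID up and decode the
            // second onion layer.
            PendingHtlc::MaybePhantom { scid, inner_onion } => {
                decode_inner_onion(scid, &inner_onion);
            }
        }
    }
}

fn forward(_scid: u64) { /* commit the HTLC to the outgoing channel */ }
fn decode_inner_onion(_scid: u64, _onion: &[u8]) { /* peel the phantom layer */ }
```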

valentinewallace force-pushed the 2021-11-phantom-node branch 2 times, most recently from 980b1c9 to 2c4a6df, January 11, 2022
```rust
@@ -1094,6 +1131,11 @@ impl KeysInterface for KeysManager {
    fn sign_invoice(&self, invoice_preimage: Vec<u8>) -> Result<RecoverableSignature, ()> {
        Ok(self.secp_ctx.sign_recoverable(&hash_to_message!(&Sha256::hash(&invoice_preimage)), &self.get_node_secret()))
    }

    fn get_phantom_secret(&self, _scid: u64) -> Result<SecretKey, ()> {
        let hkdf = hkdf_extract_expand(b"LDK Phantom Node Secret Key Expansion", &self.inbound_payment_key.0, 1);
```

Reviewer:

Is it the idea to later include the fake scid as a term in the generation of the phantom node key?

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Idea is that it's up to the implementer of KeysInterface. Adding the scid parameter adds flexibility if they want to do it that way.
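
As a sketch of that flexibility, a custom KeysInterface implementation could mix the scid into the derivation. The label string matches the diff above, but the helper below is a stand-in (plain HMAC-SHA256 rather than LDK's internal hkdf_extract_expand), and `phantom_seed` is a hypothetical name for the shared secret material:

```rust
use bitcoin::hashes::sha256::Hash as Sha256;
use bitcoin::hashes::{Hash, HashEngine, Hmac, HmacEngine};
use bitcoin::secp256k1::SecretKey;

// Hypothetical variant of get_phantom_secret that binds the derived key
// to the fake SCID by feeding it into the derivation.
fn get_phantom_secret(phantom_seed: &[u8; 32], scid: u64) -> Result<SecretKey, ()> {
    let mut engine = HmacEngine::<Sha256>::new(b"LDK Phantom Node Secret Key Expansion");
    engine.input(phantom_seed);
    engine.input(&scid.to_be_bytes());
    // from_slice only fails for out-of-range scalars, which is negligible
    // for uniformly random 32-byte HMAC output.
    SecretKey::from_slice(&Hmac::from_engine(engine).into_inner()).map_err(|_| ())
}
```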

```rust
    base_msat: 0,
    proportional_millionths: 0,
},
cltv_expiry_delta: MIN_CLTV_EXPIRY_DELTA,
```

Reviewer:

If I understand this right, the phantom hop will increase the total time lock across the whole route. Is that necessary?

valentinewallace (Contributor, Author):

True. One priority is that we don't want to make it obvious at a glance that an invoice is for a phantom node, so I think this does help with that.

Reviewer:

Makes sense
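
For a rough sense of the cost being discussed: the phantom hop contributes one extra cltv_expiry_delta to the total timelock the payer must allow. A toy calculation, assuming LDK's MIN_CLTV_EXPIRY_DELTA of 72 blocks (check channelmanager.rs for the actual value):

```rust
// Toy sum of CLTV deltas along a route ending in a phantom hop.
fn total_route_cltv_delta(real_hop_deltas: &[u16]) -> u32 {
    const MIN_CLTV_EXPIRY_DELTA: u16 = 72; // assumed value
    real_hop_deltas.iter().map(|d| u32::from(*d)).sum::<u32>()
        + u32::from(MIN_CLTV_EXPIRY_DELTA) // the phantom hop's contribution
}
```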

TheBlueMatt (Collaborator) left a comment

Few quick comments; will continue review later.

```rust
let scid_namespace = self.as_u8();
let rand_bytes = keys_manager.get_secure_random_bytes();

let valid_block_range = highest_seen_blockheight - segwit_activation_height(genesis_hash);
```
Collaborator:

I don't think we should include block heights all the way up to the current height - I believe there's a race for users connecting blocks via Access - they can tell us about a new block before telling us about all the transactions in the block. Technically we don't have any constraints on the user at all (i.e., they could provide the transactions 1000 blocks later), but in practice it's probably fine to just subtract, like, a day (1008 blocks) and assume we won't select a fake SCID for a channel whose confirmation we haven't seen yet.
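
A sketch of that suggestion, combining the one-day margin with the clamped random height selection discussed later in this review (names are illustrative):

```rust
// Keep fake SCID heights at least 1008 blocks (~a day) below the tip, so
// a confirmation we haven't processed yet can't collide with a fake SCID.
fn pick_fake_scid_height(
    segwit_activation_height: u32,
    highest_seen_blockheight: u32,
    rand_for_height: u32,
) -> u32 {
    let max_height = highest_seen_blockheight
        .saturating_sub(1008)
        .max(segwit_activation_height);
    let valid_blocks = max_height - segwit_activation_height;
    segwit_activation_height + rand_for_height % (valid_blocks + 1)
}
```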

```rust
});
if phantom_scid_namespace.is_none() {
    phantom_scid_namespace = Some(fake_scid::Namespace::phantom(&args.keys_manager));
```
Collaborator:

Shouldn't we store a u8 offset instead of picking a value for phantoms specifically? Instead of storing a value for each namespace, we store one "offset", and when we want to know a namespace's value, we just add the offset and mod by MAX_NAMESPACES.

valentinewallace (Contributor, Author):

Do you mean we should store the 32-byte random value that we got the namespace from (in fake_scid::Namespace::phantom), and use it for all namespaces?

Collaborator:

I think we should just store the 3-bit modulo value, no? Like, instead of phantom namespace being a specific value, we define phantom to be 0 and others to increment by 1, with a random offset for the whole thing (mod 8)
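
In sketch form (MAX_NAMESPACES and the 3-bit width come from the discussion; the rest is illustrative):

```rust
// Namespaces have fixed indices (phantom = 0), and one stored random
// 3-bit offset rotates them all.
const MAX_NAMESPACES: u8 = 8; // the fake SCID format reserves 3 bits here

#[derive(Copy, Clone)]
enum Namespace {
    Phantom = 0,
    // further namespaces increment by 1
}

fn namespace_value(ns: Namespace, stored_offset: u8) -> u8 {
    (ns as u8 + stored_offset) % MAX_NAMESPACES
}
```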

TheBlueMatt (Collaborator) left a comment

All quite straightforward now, I love this.

```rust
fn get_inbound_payment_key_material(&self) -> KeyMaterial;

/// Get the phantom node secret key for use in receiving a [phantom node payment].
```
Collaborator:

I found this a bit confusing; can we explicitly say "aka node_id or network_key" like we do for get_node_secret, and describe how this is the secret key which signs invoices?

```rust
let mut inbound_pmt_key_bytes = [0; 32];
inbound_pmt_key_bytes.copy_from_slice(&inbound_payment_key[..]);
let inbound_payment_key = if let Some(key) = inbound_payment_key_opt {
    key
```
Collaborator:

bleh, quite the layer violation - if we're returning the key material to ChannelManager as-is and letting ChannelManager do derivation magic on it, we should definitely not be doing some derivation on it ourselves. Instead, let's pass in a cross_node_seed or something that is then derived into both an inbound_payment_key and a phantom_secret directly. This avoids any (ok, admittedly far-fetched, but still) use of the same derivation in ChannelManager leading to key re-use (e.g. if ChannelManager decides to derive a key called "LDK Phantom Node Secret Key Expansion").
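
A sketch of that suggestion: one seed, two labeled derivations, so ChannelManager and KeysManager can never accidentally re-derive each other's keys. Only the phantom label appears in the diff above; the inbound-payment label and helper names here are assumptions:

```rust
use bitcoin::hashes::sha256::Hash as Sha256;
use bitcoin::hashes::{Hash, HashEngine, Hmac, HmacEngine};

// Expand one shared cross-node seed under two distinct labels.
fn derive(seed: &[u8; 32], label: &[u8]) -> [u8; 32] {
    let mut engine = HmacEngine::<Sha256>::new(label);
    engine.input(seed);
    Hmac::from_engine(engine).into_inner()
}

fn expand_cross_node_seed(cross_node_seed: &[u8; 32]) -> ([u8; 32], [u8; 32]) {
    let inbound_payment_key = derive(cross_node_seed, b"LDK Inbound Payment Key Expansion");
    let phantom_secret = derive(cross_node_seed, b"LDK Phantom Node Secret Key Expansion");
    (inbound_payment_key, phantom_secret)
}
```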

valentinewallace (Contributor, Author):

Hmm, seems like it may be awkward to change the derivation of the inbound_payment_key given it's already used in the wild. Is it fine if some payments fail due to this change?

Collaborator:

Ah, right, hmmmmm, no that wouldn't be great. Maybe we should separate out how it works based on the constructor used? For the new constructor we can just refuse to generate phantom secret?

valentinewallace (Contributor, Author):

Implemented this.

```rust
///
/// `channels` contains a list of tuples of `(phantom_node_scid, real_node_pubkey,
/// real_channel_details)`, where `phantom_node_scid` is unique to each `channels` entry and should
/// be retrieved freshly for each new invoice from [`ChannelManager::get_phantom_scid`].
```
Contributor:

So let's say there are 3 nodes that would like to accept a single invoice. Each time create_phantom_invoice is called (on a single node), would the caller need to ping the other two nodes to get fresh phantom scids plus usable channels to include in this channels list? So the channels list would be a concatenated list of channels from the three nodes? If each of the 3 nodes had 2 open channels, the list could be of size 6?

Trying to make this as clear as possible:

```rust
/// `channels` contains a list of tuples of `(phantom_node_scid, real_node_pubkey,
/// real_channel_details)`, where `phantom_node_scid` is unique to each `channels` entry and should
/// be retrieved freshly for each new invoice from [`ChannelManager::get_phantom_scid`].
/// `real_channel_details` comes from [`ChannelManager::list_channels`]. For multiple nodes,
/// `channels` will be a concatenated list of entries from each node, as each node must provide a fresh
/// `phantom_scid` and `real_channel_details` for all usable channels on each new invoice generation.
```

valentinewallace (Contributor, Author):

Thanks, sorry the docs are pretty WIP. Will prioritize shoring them up.

> each time create_phantom_invoice (on a single node) is called, would the caller need to ping the other two nodes to get fresh phantom scids + usable channels to include in this channels list?

You don't have to retrieve fresh phantom scids on each invoice. Doing so provides more privacy, but it's optional! Fine to cache them.

If some of the provided real channels aren't usable, that could obviously result in failed payment attempts, but since channels don't close too frequently it should be fine to just poll and cache, rather than fetch freshly on each invoice.

> So channel list would be a concatenated list of channels from the three nodes? If each of the 3 nodes had 2 open channels, the list size could be a size of 6?

Yup! The parameters could probably be a little cleaner (suggestions welcome here), though we're somewhat limited by our bindings generator.

Contributor:

> Fine to cache them

sweet, good to know!

> Yup! The parameters could probably be a little cleaner (suggestions welcome here) though somewhat limited by our bindings generator

List of tuples makes sense to me. I think if we clarify what can be cached, plus the fact that the list of tuples can be formed from multiple nodes, that would be good.

Maybe also a warning about including too many channels, as that might make the invoice string too long for QR codes?
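
Pulling the thread together, assembling the channels argument for three nodes might look like the following sketch. ChannelDetails and its is_usable field are real LDK items; the helper and exact tuple shape are assumptions based on the doc text quoted above:

```rust
use bitcoin::secp256k1::PublicKey;
use lightning::ln::channelmanager::ChannelDetails;

// Hypothetical assembly of `channels` for an invoice payable to any of
// several nodes. Each node contributes its (possibly cached) phantom SCID
// plus ChannelDetails for its channels; the lists are simply concatenated.
fn assemble_channels(
    per_node: Vec<(u64 /* phantom scid */, PublicKey, Vec<ChannelDetails>)>,
) -> Vec<(u64, PublicKey, ChannelDetails)> {
    let mut channels = Vec::new();
    for (phantom_scid, node_pubkey, details) in per_node {
        for chan in details {
            // Filtering out unusable channels avoids doomed payment attempts.
            if chan.is_usable {
                channels.push((phantom_scid, node_pubkey, chan));
            }
        }
    }
    channels
}
```

With three nodes of two usable channels each, this yields the six-entry list discussed above; per the reply, the phantom SCIDs may be cached per node rather than fetched fresh for every invoice, at some privacy cost.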

codecov-commenter commented Jan 21, 2022

Codecov Report

Merging #1199 (710954f) into main (963f8d9) will increase coverage by 0.00%. The diff coverage is 88.46%.

```
@@           Coverage Diff            @@
##             main    #1199    +/-   ##
========================================
  Coverage   90.51%   90.51%            
========================================
  Files          71       72     +1     
  Lines       39210    39560   +350     
========================================
+ Hits        35490    35809   +319     
- Misses       3720     3751    +31     

Impacted Files                              Coverage Δ
lightning-invoice/src/lib.rs                88.25% <0.00%> (+0.83%) ⬆️
lightning/src/ln/channel.rs                 89.33% <0.00%> (ø)
lightning/src/chain/keysinterface.rs        93.58% <77.35%> (-2.29%) ⬇️
lightning/src/util/test_utils.rs            82.44% <83.33%> (+0.18%) ⬆️
lightning/src/ln/channelmanager.rs          84.45% <84.24%> (-0.14%) ⬇️
lightning-invoice/src/utils.rs              88.14% <91.48%> (+4.07%) ⬆️
lightning/src/ln/onion_utils.rs             95.33% <92.30%> (-0.31%) ⬇️
lightning/src/util/scid_utils.rs            98.37% <97.87%> (-1.63%) ⬇️
lightning-background-processor/src/lib.rs   93.04% <100.00%> (ø)
lightning/src/ln/functional_test_utils.rs   95.23% <100.00%> (ø)
... and 7 more

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
```

```rust
_ => panic!("Unexpected event"),
}
match events[1] {
    Event::PaymentReceived { payment_hash: ref hash, ref purpose, amt } => {
```

Reviewer:

The way this is set up, does that work too for multi-part payments to a phantom invoice where the parts come in via distinct real nodes?

Collaborator:

No. In theory we could adapt it to work with users doing the work to claim across nodes, but I'm not sure what the use-case there is? All the nice "node offline -> seamless fallback to other nodes" stuff suddenly doesn't work. You still get shared liquidity, but that feels like it could just as easily be done with a single node at that point, no?

Reviewer:

Senders are free to choose routes right? So they may decide to use a different final hop towards the phantom node for some of the parts. I don't think you can force them not to, and they also won't be aware that they shouldn't.

One thing that I can think of is when node A doesn't have enough (cheap) liquidity and the sender decides to split the payment to also use up some of node B's liquidity.

Reviewer:

An alternative design is to make the phantom node a stand-alone process that communicates with each of the gateway nodes and is backed by a replicated database?

Collaborator:

No, you can't force them not to, but you can disable MPP, which accomplishes the same :)

Reviewer:

The sender may be short on liquidity towards their peers and might only be able to complete the payment via MPP.

If you run a business that is big enough to require a multi-node setup, I'd think it is undesirable to force all customers to pay in single shots only?

Collaborator:

It depends entirely on the amount being sent. Indeed, it may not be desirable to use phantom nodes for large value payments, but public stats on lightning usage seem to point to very small payments being the norm, which I'd think are more than fine. Further, it depends a lot on the user - if someone is running a full-fledged node they may have less liquidity in individual channels, but presumably that will be less and less common as mobile-lightning-against-LSP ticks up, where you have much less of a concern given you usually just have one channel in that model. Finally, future lightning with trampoline will allow intermediate nodes to split payments, with them being recombined before they reach the destination, at least in theory, so it becomes even less of a concern if we get there.


TheBlueMatt (Collaborator)

Needs rebase.

TheBlueMatt (Collaborator)

I'm happy to see this squashed and do one more once-over, @jkczyz?

valentinewallace force-pushed the 2021-11-phantom-node branch 2 times, most recently from e9cc2b1 to 9ba0cc8, February 14, 2022
The squashed commit messages:

- …add_htlc: This will be used in upcoming commit(s) to facilitate decoding multiple onion layers for multi-node payment receive.
- Will be used in upcoming commit(s) where it may be desirable to cache ChannelDetails route hints.
- To support generating invoices that can be paid to any one of multiple nodes, a keys manager needs to be able to share an inbound_payment_key and a phantom secret key. This is because a phantom payment may be received by any node participating in the invoice, so all nodes must be able to decrypt the phantom payment (and therefore must share decryption key(s)) in the act of pretending to be the phantom node. Thus we add a new PhantomKeysManager that supports these features. To be more specific, the inbound payment key must be shared because it is used to decrypt the payment details for verification (LDK avoids storing inbound payment data by encrypting payment metadata in the payment hash and/or payment secret). The phantom secret must be shared because it enables any real node included in the phantom invoice to decrypt the final layer of the onion packet, since the onion is encrypted by the sender using the phantom public key provided in the invoice.
- We want LDK to be able to retrieve the phantom secret key when we see that a payment is destined for a phantom node.
- Will be used to facilitate decoding multiple onion layers for phantom payment receive.
- …cid module: See method and module docs for more details.
- See PhantomKeysManager and invoice util's create_phantom_invoice for more info.
- This commit also adds additional checks for the second-to-last (phantom) hop for phantom payments.
TheBlueMatt (Collaborator) left a comment

Two tiny nits that you don't need to bother to fix if you prefer not to, and one actual issue (ugh, sorry), which also should get a test or five (presumably in lightning/src/ln/onion_route_tests or so).

```rust
///
/// [phantom node payments]: crate::chain::keysinterface::PhantomKeysManager
pub fn get_phantom_scid(&self) -> u64 {
    let mut channel_state = self.channel_state.lock().unwrap();
```
Collaborator:

nit: I'd prefer to push down locks in this kinda situation - the loop should only run once so we're not rapidly relocking, and the top of the loop is a crypto operation (even if a cheap one) now, so it's not completely free and we're needlessly holding a lock during it.
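
The nit in pattern form: a sketch, not the actual get_phantom_scid body (the real code checks against the channel map rather than a plain set):

```rust
use std::collections::HashSet;
use std::sync::Mutex;

// Derive the candidate outside the critical section and take the lock only
// to check for collisions.
fn get_phantom_scid_sketch(claimed_scids: &Mutex<HashSet<u64>>) -> u64 {
    loop {
        let candidate = derive_fake_scid(); // crypto op, no lock held
        let in_use = claimed_scids.lock().unwrap().contains(&candidate);
        if !in_use {
            return candidate; // the loop almost always exits first iteration
        }
    }
}

fn derive_fake_scid() -> u64 {
    0 // placeholder for the keyed derivation of a fake SCID
}
```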

```rust
    fail_forward!(err_msg, err_code, Vec::new());
},
Err(onion_utils::OnionDecodeErr::Relay { err_msg, err_code }) => {
    fail_forward!(err_msg, err_code, Vec::new());
```
Collaborator:

Oops, we need to handle this and the above case differently. I think the above case may be correct, but here we may end up failing due to a "final node" error but encrypting it as if it came from the second-to-last-hop, which the sender may treat as a bogus failure. Here we should onion_utils::encrypt_failure_package(onion_utils::build_failure_packet(..)) and then set the forwarding reason to HTLCFailReason::LightningError. We also should have a test for "tried to send something bogus, eg wrong amount, to a phantom node, and got back a failure that we decrypted correctly".

valentinewallace (Contributor, Author) commented Feb 16, 2022

Sg. I assume we'll want to handle the case 10 lines down (where construct_recv_pending_htlc_info errors) similarly as well

Edit: oops the next comment literally addresses this, disregard

valentinewallace (Contributor, Author):

> I think the above case may be correct,

Hmm, I think this case needs to use the phantom shared secret as well? Since we're pretending the phantom is the one creating this error

Collaborator:

> Hmm, I think this case needs to use the phantom shared secret as well? Since we're pretending the phantom is the one creating this error

Yea, we'll want to encrypt it twice, but we already do the second one, so I think we just need to encrypt here with the phantom shared secret. Tests should show what's up, though.
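
Sketched out, the fix being converged on looks roughly like this. build_failure_packet, encrypt_failure_packet, and HTLCFailReason are crate-internal items named in the thread; the signatures below are simplified assumptions, so treat this as pseudocode:

```rust
// Rough shape: the error is first built and encrypted with the *phantom*
// shared secret, so after the normal outer (real-hop) encryption the
// sender decrypts a failure that appears to come from the phantom node.
fn fail_phantom_htlc(
    phantom_shared_secret: &[u8; 32],
    err_code: u16,
    err_data: &[u8],
) -> HTLCFailReason {
    let packet =
        onion_utils::build_failure_packet(phantom_shared_secret, err_code, err_data).encode();
    let encrypted = onion_utils::encrypt_failure_packet(phantom_shared_secret, &packet);
    // The usual failure path then applies the real hop's encryption on top.
    HTLCFailReason::LightningError { err: encrypted }
}
```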

```rust
onion_utils::Hop::Receive(hop_data) => {
    match self.construct_recv_pending_htlc_info(hop_data, shared_secret, payment_hash, amt_to_forward, outgoing_cltv_value) {
        Ok(info) => phantom_receives.push((prev_short_channel_id, prev_funding_outpoint, vec![(info, prev_htlc_id)])),
        Err(ReceiveError { err_code, err_data, msg }) => fail_forward!(msg, err_code, err_data)
```
Collaborator:

Same here.

jkczyz (Contributor) left a comment

Nothing major on my end. Think we're close other than what @TheBlueMatt mentioned.

```rust
@@ -390,6 +390,7 @@ pub fn do_test(data: &[u8], logger: &Arc<dyn Logger>) {
    best_block: BestBlock::from_genesis(network),
};
let channelmanager = Arc::new(ChannelManager::new(fee_est.clone(), monitor.clone(), broadcast.clone(), Arc::clone(&logger), keys_manager.clone(), config, params));
keys_manager.counter.fetch_sub(1, Ordering::AcqRel);
```
jkczyz (Contributor):

A comment here would be useful for posterity.

```rust
let rand_bytes = keys_manager.get_secure_random_bytes();

let segwit_activation_height = segwit_activation_height(genesis_hash);
let mut valid_block_range = if highest_seen_blockheight > segwit_activation_height {
```
jkczyz (Contributor):

Little confused by the use of "range" in this variable name. Wondering if this instead could be:

```rust
let mut blocks_since_segwit_activation = highest_seen_blockheight.saturating_sub(segwit_activation_height);

// ...

let fake_scid_height = segwit_activation_height + rand_for_height % (blocks_since_segwit_activation + 1);
```

Comment on lines +153 to +154:

```rust
if BlockHash::from_hex(MAINNET_GENESIS_STR).unwrap() == *genesis {
    MAINNET_SEGWIT_ACTIVATION_HEIGHT
```
jkczyz (Contributor):

Do we care if this isn't accurate for other Networks? Guessing no.

Collaborator:

Worst case it means (very marginally) less privacy, so dunno why we would, really.

TheBlueMatt (Collaborator) left a comment

I'm also fine landing this as-is and fixing error returning in a followup. What do you think @jkczyz?

jkczyz (Contributor) commented Feb 15, 2022

> I'm also fine landing this as-is and fixing error returning in a followup. What do you think @jkczyz?

SGTM. Generally, I'd prefer to land everything together, but merging and reviewing a fresh PR has its advantages for avoiding blocking others or holding things up, etc. So happy to have the fix in a follow-up before releasing.

TheBlueMatt (Collaborator)

Tracking followups at #1308, so that we can get the bulk of this landed.

jkczyz merged commit 8e7f241 into lightningdevkit:main Feb 15, 2022
jkczyz (Contributor) commented Feb 15, 2022

@valentinewallace Would you mind updating the PR description for posterity, especially if this gets some more visibility in the near future? Copy-pasting a summary of the feature from comments / design doc / commit messages should be fine.

valentinewallace changed the title from "Add support for multi-node receive" to "Add support for phantom node payments" Feb 19, 2022
valentinewallace added commits to valentinewallace/rust-lightning referencing this pull request, Feb 19 to Feb 22, 2022.