
Frame-based onion payload format and construction for larger per-hop payloads #586

cdecker opened this Issue Mar 4, 2019 · 0 comments


cdecker commented Mar 4, 2019

Quoting from the mailing list:

During the spec meeting in Adelaide we decided that we'd like to extend
our current onion-routing capabilities with a couple of new features,
such as rendez-vous routing, spontaneous payments, multi-part payments,
etc. These features rely on two changes to the current onion format:
bigger per-hop payloads (in the form of multi-frame payloads) and a more
modern encoding (given by the TLV encoding).

In the following I will explain my proposal on how to extend the per-hop
payload from the current 65 bytes (which include realm and HMAC) to a
variable number of 65 byte frames.

Until now we had a 1-to-1 relationship between a 65 byte segment of
payload and a hop in the route. Since this is no longer the case, I
propose we call the 65 byte segment a frame, to differentiate it from a
hop in the route, hence the name multi-frame onion. The creation and
decoding process doesn't really change at all; only some of the offsets
and shift amounts do.
When constructing the onion, the sender currently always right-shifts by
a single 65 byte frame, serializes the payload, and encrypts using the
ChaCha20 stream. In parallel it also generates the fillers (essentially
zeros that get appended and encrypted by the processing nodes, in order
to get matching HMACs); these are likewise shifted by a single 65 byte
frame at each hop. The change in the generation comes in the form of
variable shifts for both the payload serialization and the filler
generation, depending on the payload size: if the payload fits into 32
bytes, nothing changes; if the payload is bigger, we simply use
additional frames until it fits. The payload is padded with 0s, the
HMAC remains as the last 32 bytes of the payload, and the realm stays
at the first byte. This gives us

payload_size = num_frames * 65 bytes - 1 byte (realm) - 32 bytes (HMAC)
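The size relation above can be sketched as follows (a minimal sketch; the helper names are mine, not taken from any of the linked implementations):

```python
FRAME_SIZE = 65   # bytes per frame
REALM_SIZE = 1    # realm byte at the start of the payload
HMAC_SIZE = 32    # HMAC at the end of the payload

def payload_capacity(num_frames: int) -> int:
    """Usable payload bytes when a hop occupies num_frames frames."""
    return num_frames * FRAME_SIZE - REALM_SIZE - HMAC_SIZE

def frames_needed(payload_len: int) -> int:
    """Smallest number of frames whose capacity fits payload_len bytes."""
    n = 1
    while payload_capacity(n) < payload_len:
        n += 1
    return n
```

With a single frame this reproduces the current 32 usable bytes; a 33 byte payload already needs a second frame, which bumps the capacity to 97 bytes.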

The realm byte encodes both the payload format as well as how many
additional frames were used to encode the payload. The 4 MSB bits encode
the number of additional frames used, while the 4 LSB bits encode the
realm/payload format.
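The bit packing could look like this (a sketch under the assumption that the upper nibble counts additional frames beyond the first, so a legacy single-frame payload keeps its existing realm byte unchanged; function names are illustrative):

```python
def pack_realm(realm: int, num_frames: int) -> int:
    """Pack payload format (realm) and frame count into one byte.
    Upper nibble: additional frames beyond the first; lower nibble: realm."""
    assert 0 <= realm < 16 and 1 <= num_frames <= 16
    return ((num_frames - 1) << 4) | realm

def unpack_realm(b: int) -> tuple[int, int]:
    """Return (realm, num_frames) decoded from a realm byte."""
    return b & 0x0F, (b >> 4) + 1
```

Note that `pack_realm(0, 1)` yields `0x00`, i.e. the existing v0 single-frame realm byte, which keeps the scheme backward compatible.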

The decoding of an onion packet stays pretty much the same: the
receiving node generates the shared secret, then generates the ChaCha20
stream, and decrypts the packet (plus the additional padding that
matches the filler the sender generated for the HMACs). It can then read
the realm byte, which tells it how many frames to read and how many
frames to left-shift in order to derive the next onion.
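The shift logic at the receiving node can be sketched structurally, leaving out the crypto (an illustrative sketch, assuming the caller has already decrypted the routing info and extended it with the filler bytes; names are mine):

```python
FRAME_SIZE = 65

def peel_hop(routing_info: bytes) -> tuple[bytes, bytes]:
    """Split decrypted routing info into this hop's frames and the
    routing info for the next onion. Assumes the filler padding has
    already been appended, as the sender's filler generation guarantees."""
    num_frames = (routing_info[0] >> 4) + 1   # upper nibble: extra frames
    shift = num_frames * FRAME_SIZE
    hop_data = routing_info[:shift]           # realm + payload + HMAC
    next_routing_info = routing_info[shift:]  # left-shifted remainder
    return hop_data, next_routing_info
```

The key point is that the hop's entire payload sits in one contiguous slice (`hop_data`), so no reassembly step is needed before parsing it.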

This proposal competes with the one by roasbeef on the lightning-onion
repo [1], but I think it is superior in a number of ways. The major
advantage of this proposal is that the payload is in one contiguous
memory region after the decryption, avoiding re-assembly of multiple
parts and allowing zero-copy processing of the data. It also avoids
multiple decryption steps, and does not waste space on multiple useless
HMACs. I also believe that this proposal is simpler than [1], since it
doesn't require re-assembly, and creates a clear distinction between
payload units and hops.

To show that this proposal actually works, and is rather simple, I went
ahead and implemented it for c-lightning [2] and lnd [3] (sorry ACINQ,
my Scala is not sufficient to implement it for eclair). Most of the code
changes are preparation for variable size payloads alongside the legacy
v0 payloads we have used so far; the relevant commits that actually
change the generation of the onion are [4] and [5] for c-lightning and
lnd respectively.
I'm hoping that this proposal proves to be useful, and that you agree
about the advantages I outlined above. I'd also like to mention that,
while this is working, I'm open to suggestions :-)


[1] lightningnetwork/lightning-onion#31
[2] ElementsProject/lightning#2363
[3] lightningnetwork/lightning-onion#33
[4] ElementsProject/lightning@aac29da
[5] lightningnetwork/lightning-onion@216c09c

@cdecker cdecker added the 2019-03-04 label Mar 4, 2019

@cdecker cdecker self-assigned this Mar 4, 2019
