
Phase 0 Networking Specifications #763

Merged

11 commits, merged Mar 28, 2019

Conversation

@mslipper (Contributor) commented Mar 12, 2019:

Supplants #692. Notable changes:

  • Specs are split into messaging, identity, and RPC documents. I've left out broadcast for now so that we can discuss that separately.
  • The RPC protocol uses something more akin to JSON-RPC on the wire in order to identify individual methods to call rather than using a separate libp2p protocol name for each. This fits better with libp2p's stream selection logic.

@mslipper mslipper changed the title Add networking specs Phase 0 Networking Specifications Mar 12, 2019

@mslipper mslipper referenced this pull request Mar 12, 2019

Closed

Phase 0 Wire Protocol #692

@jannikluhn (Contributor) left a comment:

👍 much easier to discuss specifics in a PR than in an issue

I'm about half way through, will continue the review tomorrow.

[Resolved review thread: specs/networking/messaging.md (outdated)]
@@ -0,0 +1,32 @@
ETH 2.0 Networking Spec - Node Identification

@jannikluhn (Contributor), Mar 12, 2019:

It seems like we don't need to specify anything here as everything's already either part of the referenced EIP or multiaddr.

@mslipper (Author), Mar 13, 2019:

Cool, will remove.

@raulk (Contributor), Mar 13, 2019:

Would it be appropriate to file an EIP to allocate a key for multiaddrs in the pre-defined key/value table in the ENR standard?

@djrtwo (Contributor), Mar 15, 2019:

cc: @fjl

[Resolved review thread: specs/networking/rpc-interface.md (outdated)]

## RPC-Over-`libp2p`

To facilitate RPC-over-`libp2p`, a single protocol path is used: `/eth/serenity/rpc/1.0.0`. Remote method calls are wrapped in a "request" structure:

@jannikluhn (Contributor), Mar 12, 2019:

Should we add beacon somewhere in the protocol path? I think this might be useful to distinguish between shard and beacon RPC commands.

)
```

If an error occurs, a variant of the response structure is returned:

@jannikluhn (Contributor), Mar 12, 2019:

At least with SSZ it's not easily possible to distinguish between normal and error responses, as one needs to know the schema before being able to decode the message. What one could do is have a general response format and then an embedded result/error blob that can be decoded in a second step. E.g.:

```
Response: (
    id: uint64
    status_code: uint64
    data: bytes
)

SuccessData: (
    ...
)

ErrorData: (
    ...
)
```

Not really elegant, but I don't really see a better solution (for SSZ that is).
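jannikluhn's two-step decoding could be sketched as below. This is an illustration, not the spec: the fixed little-endian layout and the `SUCCESS = 0` status value are assumptions, real SSZ encoding differs, and the second decode step is left abstract.

```python
import struct

SUCCESS = 0  # assumed status value, for illustration only

def decode_response(payload: bytes):
    # Step one: decode the general response envelope.
    req_id, status_code = struct.unpack_from("<QQ", payload, 0)
    data = payload[16:]  # opaque blob: SuccessData or ErrorData
    return req_id, status_code, data

def handle(payload: bytes):
    req_id, status_code, data = decode_response(payload)
    # Step two: pick the embedded schema based on the status code.
    if status_code == SUCCESS:
        return ("success", data)  # decode `data` as SuccessData here
    return ("error", data)        # decode `data` as ErrorData here
```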

@mslipper (Author), Mar 13, 2019:

Ah, this is a good point. SSZ doesn't support null values either - let me think on this one for a little bit and come up with a solution.

@mslipper (Author), Mar 18, 2019:

Added an is_error boolean field. Note that with SSZ at least you can read the is_error field prior to the contents of the result via offsets. This allows clients to switch the deserialized type based on the is_error value.
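The `is_error` dispatch described here could look roughly like this. Using a one-byte flag ahead of the raw result bytes is a deliberate simplification of reading the field through SSZ offsets:

```python
def decode_rpc_result(payload: bytes):
    # Read is_error before touching the result contents.
    is_error = payload[0] == 1
    body = payload[1:]
    # Switch the deserialized type on the flag.
    return ("ErrorResponse" if is_error else "SuccessResponse", body)
```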

@arnetheduck (Contributor), Mar 18, 2019:

the alternative would be to use a list - empty if there's no error, and one item if there is.

@arnetheduck (Contributor), Mar 18, 2019:

just to be clear - when encoding or decoding ssz, there generally exists no provision for skipping fields - even if is_error is false, data must contain bytes. embedding a StatusData in the data field seems to go against the spirit of SSZ generally, as SSZ decoders in general expect to know the exact type of each field, thus would not fit "naturally" in "normal" ssz code.

That said, this issue stems from using SSZ in a wire protocol setting for which it is not.. great.

[Resolved review thread: specs/networking/rpc-interface.md]

The "method ID" fields in the below messages refer to the `method` field in the request structure above.

The first 1,000 values in `error.code` are reserved for system use. The following error codes are predefined:
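As a rough sketch of that reserved-range rule: the split at 1,000 comes from the text above, while the sample application-level code below it is made up for illustration.

```python
# Error codes 0-999 are reserved for system use per the draft spec text;
# anything >= 1000 is available for application-defined errors.
SYSTEM_RESERVED_LIMIT = 1000

def is_system_error_code(code: int) -> bool:
    return 0 <= code < SYSTEM_RESERVED_LIMIT

# A hypothetical application-defined code must live above the range:
APP_BLOCK_NOT_FOUND = 1001
```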

@jannikluhn (Contributor), Mar 12, 2019:

I like the error codes, this seems very useful (e.g. for block not found or something). Not sure about the examples below though, shouldn't 0, 10, and 20 just result in a disconnect?

@notasecret, Mar 18, 2019:

sort of like port numbers :)


Clients SHOULD immediately disconnect from one another following the handshake above under the following conditions:

1. If `network_id` belongs to a different chain, since the client definitionally cannot sync with this client.

@atoulme (Contributor), Mar 13, 2019:

You should consider spelling out network ID and chain ID as separate fields. Chain ID should be set to a fixed number "1" for ETH, and if others want to run their own chain they can change that ID.

@FrankSzendzielarz (Member), Mar 13, 2019:

NetworkId vs ChainId +1.
Also, message body compression algorithm indicator.
Also, upgrade paths for SSZ (I get the feeling this might change on the wire)... maybe a sorted list of serialization method preferences, with the highest mutual option being selected?

@jannikluhn (Contributor), Mar 13, 2019:

Still not convinced that we actually need a network id at all and not only a chain id. Especially for RPC as arguably this isn't even a network, just a set of bidirectional connections (as opposed to the gossip layer where we actually relay data).


For clients to be addressable, their ENR responses MUST contain all of the above keys. Clients MUST verify the signature of any received ENRs and disconnect from peers whose ENR signatures are invalid. Each node's public key MUST be unique.

The keys above are enough to construct a [multiaddr](https://github.com/multiformats/multiaddr) for use with the rest of the `libp2p` stack.
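For illustration, constructing a multiaddr from ENR-style key/value pairs might look like the sketch below. The `ip`/`tcp` key names follow the ENR pre-defined table; treating the node's identity as an opaque `peer_id` component is an assumption, since the real peer-ID derivation happens inside libp2p.

```python
# Hedged sketch: build a libp2p multiaddr string from ENR-style keys.
def multiaddr_from_enr(enr: dict) -> str:
    parts = [f"/ip4/{enr['ip']}", f"/tcp/{enr['tcp']}"]
    if "peer_id" in enr:
        # Placeholder for the libp2p peer-ID component; the actual
        # derivation from the node's public key is libp2p's job.
        parts.append(f"/p2p/{enr['peer_id']}")
    return "".join(parts)
```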

@FrankSzendzielarz (Member), Mar 13, 2019:

One other consideration maybe: ENR (and Discovery v5) is being designed to support multiple types of identity. It is not going to be a hard requirement that secp256k1 EC pubkeys will identify the node. ENRs will describe the identity type.

@raulk (Contributor), Mar 13, 2019:

libp2p peer IDs are derived from the public key protobuf, which is just key type + bytes. Here's the spec: libp2p/specs#100. Both SECIO and TLS 1.3 validate peer IDs against the pubkey, so following the spec is important or connections will fail.

@arnetheduck (Contributor), Mar 18, 2019:

As I mention in https://github.com/libp2p/specs/pull/100/files#r266291995 - protobuf is not deterministic, and thus not great for feeding into a hashing function or using to determine an ID, unless you used a modified protobuf version that's locked down.

@mslipper (Author), Mar 19, 2019:

Wouldn't this be handled at the libp2p layer? Here we're describing how to construct a multiaddr from an ENR; the actual handling of the multiaddr itself and the underlying hash construction would be the responsibility of libp2p.

@arnetheduck (Contributor), Mar 19, 2019:

it would, but libp2p itself looks broken in this case - we need to keep an eye on that upstream issue so that we don't spread the breakage further.

Does using ENR require decoding RLP in this context?

[Resolved review thread: specs/networking/messaging.md (outdated)]


Clients SHOULD immediately disconnect from one another following the handshake above under the following conditions:

1. If `network_id` belongs to a different chain, since the client definitionally cannot sync with this client.
2. If the `latest_finalized_root` shared by the peer is not in the client's chain at the expected epoch. For example, if Peer 1 in the diagram below has `(root, epoch)` of `(A, 5)` and Peer 2 has `(B, 3)`, Peer 1 would disconnect because it knows that `B` is not the root in their chain at epoch 3:
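A minimal sketch of that second check, written (per the review discussion) from the point of view of the peer with the higher finalized epoch; `root_at_epoch` is a hypothetical lookup into the local finalized chain:

```python
def should_disconnect(my_finalized_epoch, peer_root, peer_epoch, root_at_epoch):
    # Only the peer with the higher finalized epoch can run this check.
    if peer_epoch > my_finalized_epoch:
        return False
    # Disconnect if the peer's claimed finalized root is not ours at that epoch.
    return root_at_epoch(peer_epoch) != peer_root

# Peer 1 has (root, epoch) = (A, 5); Peer 2 claims (B, 3). Peer 1's chain
# has a different root at epoch 3, so Peer 1 disconnects.
local_chain = {3: "A3", 5: "A"}
```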

@jannikluhn (Contributor), Mar 13, 2019:

Maybe clarify that this is (because it can only be) checked by the peer with the higher latest finalized epoch. I tried to come up with a one-sentence fix, but it's probably better to rewrite the whole paragraph from the point of view of one node shaking hands with another node (right now it's talking about both at the same time).

@mslipper (Author), Mar 14, 2019:

Cool got it, will do.

[Resolved review thread: specs/networking/rpc-interface.md (outdated)]

- `1`: Client shut down.
- `2`: Irrelevant network.
- `3`: Irrelevant shard.

@jannikluhn (Contributor), Mar 13, 2019:

This deals with the beacon chain only, so there are no shards. I think we should have a completely separate protocol and separate connections for shard networks.

)
```

Clients MAY send `goodbye` messages upon disconnection. The reason field MUST be one of the following values:
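As a sketch of the rule above, a sender-side goodbye helper; the three reason values come from the spec excerpt, while the dict-message shape is illustrative rather than the wire encoding:

```python
GOODBYE_REASONS = {
    1: "Client shut down",
    2: "Irrelevant network",
    3: "Irrelevant shard",
}

def send_goodbye(reason: int) -> dict:
    # The reason field MUST be one of the predefined values.
    if reason not in GOODBYE_REASONS:
        raise ValueError(f"invalid goodbye reason: {reason}")
    return {"reason": reason}
```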

@jannikluhn (Contributor), Mar 13, 2019:

Some more from the top of my head that might be helpful:

- too many peers
- not helpful
- malicious/faulty
- wrong chain
- sync finished (?)

@nisdas (Contributor), Mar 19, 2019:

shouldn't we still be connected after sync finishes? We would still need to propagate any newly proposed blocks to our peers.

@arnetheduck (Contributor), Mar 19, 2019:

generally, the standard way to sync in these kinds of "live" protocols is to start listening to broadcasts, then initiate sync; else you'll miss packets during sync and will have to recover again.

[Resolved review threads (2): specs/networking/rpc-interface.md (outdated)]

```
(
start_root: HashTreeRoot

@jannikluhn (Contributor), Mar 13, 2019:

Do we need the root? It seems redundant to me, except for the case of chain reorgs which shouldn't happen frequently at sync (and even then, it's probably better to get blocks from the current chain that we'll be able to use later, instead of getting outdated ones).

@arnetheduck (Contributor), Mar 18, 2019:

We need a mechanism for recovering blocks, in case something is lost or the client goes offline for a short bit and loses a few (computer went to sleep / ISP went down for 10 minutes).

I argue in the original issue (#692 (comment)) that it's often natural to request blocks backwards for this reason: the data structure we're syncing is a singly linked list pointing backwards in time and we receive attestations and blocks that let us discover heads "naturally" by listening to the broadcasts. With a block_root+previous_n_blocks kind of request we can both sync and recover, and for example use attestations to discover "viable" heads to work on, from a sync or recovery perspective. Indeed, negotiating finalized epochs in the handshake is somewhat redundant in that case, albeit a nice optimization (except for the chain id) - we could equally well request blocks from the peer that gossiped us the block or attestation whose parent we're missing - they should not be gossiping attestations they have not linked to a finalized epoch of value.

@jannikluhn (Contributor), Mar 18, 2019:

Interesting! To summarize my understanding of your comment: Syncing forwards is safer as we can verify each block immediately when we receive it, but syncing backwards is more efficient/doesn't require additional database indexing (and I guess syncing forwards may require a negotiating phase to discover the best shared block). You're proposing to interpret the fact that I see lots of attestations on top of my sync peer's head flying around the network as evidence that their head is valid? And therefore, I'd be pretty safe syncing backwards?

That sounds reasonable. My original concern was that this requires me to know (at least some good fraction of) the validator set as otherwise my sync peer could create lots of fraudulent attestations for free that I have no chance of verifying. But I would notice this if I have at least one single honest peer (if I try to sync from them or compare the attestations coming from them).

Do you think having only a backwards sync is fine or do we need both (e.g. for highly adversarial environments, or resource constrained devices that don't participate in gossiping?).

@arnetheduck (Contributor), Mar 18, 2019:

> more efficient

In terms of network / bandwidth, I'd say it's about the same but there are some nuances:

  • in forward sync, I can ask for "more" slots than already exist, potentially saving round trips - a client could use this to request "all" data at the time of request arrival. consider the following race: A sends request, a new block is produced, B receives request (similar: B starts sending response which takes time, new block is produced).
  • in backward sync, one could have an (latest-head, known_slot_number) request ("give me the block you consider to be the head, and history back to slot N") to alleviate this race, but then the server selects the head.
  • both above races are generally solved by collecting data from broadcasts while syncing (classic subscribe-then-sync pattern) - they are mainly concerns if you don't want to subscribe or want to delay subscribing.
  • in forward sync, I might end up on a different branch / head than I thought I would - the request itself does not point one out

In terms of client implementations, I think of backward sync as biased to make it cheaper for the server: the server already has the data necessary - also because the head is kept hot - while the client has to keep a chain of "unknown" blocks around / can't validate eagerly. An additional rule that the response must be forward-ordered could help the client apply / validate the blocks eagerly.

The backwards sync can be seen as more passive/reactive/lazy while forward sync is more active..

> attestations on top of my sync peer's head flying around the network as evidence that their head is valid

right. the assumption rests on several premises (thanks @djrtwo!):

  • honest clients will not propagate invalid data (not signed by a validator they know)
  • there's a slashing condition on creating unviable attestations - there's currently no penalty to create & sign an unviable block so one can perhaps imagine a malicious group of validators creating lots of these and for example spam everyone during important periods "for free". It sounds a bit far fetched though, tbh, to be creating blocks this way - would love to hear thoughts.
  • I've weak-subjectively selected an initial state that contains some validators. I'd primarily look for anything signed by those validators as another heuristic for where to start syncing (even if the validator set might have changed from there).

> Do you think having only a backwards sync is fine or do we need both (e.g. for highly adversarial environments, or resource constrained devices that don't participate in gossiping?).

I'm not sure :) I'm curious to hear feedback on this point, but here are some thoughts:

  • it's important that we have a request like Hello to ask for what clients consider to be the head for non-gossiping use cases - but I think that's orthogonal to the sync direction.
  • clients should be free to send that request at any time, not just during the initial negotiation phase
  • direction can be different for request and response - if different, requires a slightly "smarter" server
  • there's a cost for each direction, in terms of implementation. I'd start with one and look for strong motivations before implementing the other, as the returns are not that great. Either direction is sufficient, really.

@jrhea (Contributor), Mar 27, 2019:

> Do you think having only a backwards sync is fine or do we need both (e.g. for highly adversarial environments, or resource constrained devices that don't participate in gossiping?).

It seems reasonable to sync backwards from the latest received gossiped block (at least as an initial implementation)

> in backward sync, one could have an (latest-head, known_slot_number) request ("give me the block you consider to be the head, and history back to slot N") to alleviate this race, but then the server selects the head.

Do we really need start_slot? If we give clients the option to request a block by either start_slot or start_root, that forces us to maintain a lookup or search mechanism for both. If we are saying that both fields (start_slot and start_root) are required to sync, then I would disagree: we should be able to simply perform a lookup by block_root and walk the chain backwards until we reach max_headers.
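jrhea's root-only lookup could be sketched like this; `parent_root` maps a child root to its parent and is a hypothetical stand-in for the client's block store:

```python
def headers_backwards(block_root, max_headers, parent_root):
    # Look up the requested root, then walk parent links backwards
    # until max_headers is reached or the chain runs out.
    roots = []
    current = block_root
    while current is not None and len(roots) < max_headers:
        roots.append(current)
        current = parent_root.get(current)
    return roots

parent_root = {"c": "b", "b": "a"}
```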

@arnetheduck (Contributor), Mar 28, 2019:

> latest received gossiped block

or even better, latest gossiped attestation

> Do we really need start_slot?

I would say that if we go with backwards sync, we should not implement forwards sync here or elsewhere unless there's a strong case for that direction. Having to implement both directions negates some of the benefits of backward sync and adds implementation surface.

It is quite possible to add forward sync in a later version of the protocol as well should it prove necessary.

@jrhea (Contributor), Mar 29, 2019:

> or even better, latest gossiped attestation

I can dig that

@arnetheduck or anyone else: why do we need start_slot?

[Resolved review threads (2): specs/networking/rpc-interface.md (outdated)]
@raulk (Contributor) left a comment:

A few initial thoughts.

[Resolved review threads (2): specs/networking/messaging.md (outdated)]
Visually, a message looks like this:

```
+--------------------------+

@raulk (Contributor), Mar 13, 2019:

If other comments are accepted, this enveloping can go away.

[Resolved review threads: specs/networking/node-identification.md (outdated), specs/networking/rpc-interface.md (outdated), specs/networking/rpc-interface.md]

jannikluhn and others added some commits Mar 14, 2019

Update specs/networking/rpc-interface.md
Co-Authored-By: mslipper <me@matthewslipper.com>
Update specs/networking/rpc-interface.md
Co-Authored-By: mslipper <me@matthewslipper.com>
Update specs/networking/rpc-interface.md
Co-Authored-By: mslipper <me@matthewslipper.com>
Update specs/networking/node-identification.md
Co-Authored-By: mslipper <me@matthewslipper.com>
Update specs/networking/rpc-interface.md
Co-Authored-By: mslipper <me@matthewslipper.com>

```
(
headers: []BlockHeader

@atoulme (Contributor), Mar 14, 2019:

I note BlockHeader is not defined in the beacon chain spec. I opened a PR to define it as a struct.

@jannikluhn (Contributor), Mar 14, 2019:

If it's not needed specifically for the spec, we could also just define it here.


## RPC-Over-`libp2p`

To facilitate RPC-over-`libp2p`, a single protocol path is used: `/eth/serenity/rpc/1.0.0`. Remote method calls are wrapped in a "request" structure:

@arnetheduck (Contributor), Mar 14, 2019:

what's the semantic meaning of these long version numbers?

@notasecret, Mar 18, 2019:

i would imagine it's there because over time it will be bugfixed (bugfix version), updated with a commitment to being backward compatible (minor version), and updated with complete disregard for any backward compatibility, for the sake of progress (major version)

@mslipper (Author), Mar 18, 2019:

Yes, the idea is that they follow semver. In practice I'd estimate that the only time we'd change these version numbers is if there was a backwards-incompatible change to the serialization/compression scheme.

See @raulk's point above re: message envelopes.

@arnetheduck (Contributor), Mar 18, 2019:

A version number can either be interpreted or not. If we rely on semver, we should specify the correct behavior for clients: consider client a that supports 1.0.0 - should it also accept 1.0.1 messages as valid automatically, or discard them? This matters for forwards compatibility.

@mslipper (Author), Mar 19, 2019:

Frankly, I'm in favor of simply having integer version numbers and have a blanket statement that sub-protocol version numbers are neither forwards nor backwards compatible.
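A sketch of that exact-match rule, assuming versions are carried as a `/vN` suffix on the protocol path (the path shape and the supported-versions table are assumptions for illustration):

```python
# Integer sub-protocol versions, neither forwards nor backwards
# compatible: any mismatch with the locally supported version is rejected.
SUPPORTED = {"/eth/serenity/rpc": 1}

def accept_protocol(path: str) -> bool:
    base, sep, version = path.rpartition("/v")
    return sep != "" and version.isdigit() and SUPPORTED.get(base) == int(version)
```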

@arnetheduck (Contributor), Mar 19, 2019:

that would be my preference as well, with how the encoding and protocol looks today.

if we had a serialization format that allowed forwards/backwards compatible additions (ie adding fields) we could maybe consider two-level numbering here, where the first number would be the blanket statement, while the second would signal the version with additional fields added, which would still be compatible with previous clients.

Such an encoding is generally a good thing in the wire use case, which would be a reason to look to extensions to SSZ when used outside consensus (a super-set of SSZ for example).

@raulk (Contributor), Mar 21, 2019:

+1 on integers to signal a generation (generation 1, generation 2...). Any reason you wouldn’t have a varint style bitmap in the HELLO message to communicate finer-grained capabilities? @arnetheduck

I would model serialisation format and compression as part of the protocol ID. Then allow Multistream to negotiate.

@jannikluhn (Contributor), Mar 21, 2019:

A possible compatible change could be added message types, so I think minor version numbers could be useful in some cases.

Any reason you wouldn’t have a varint style bitmap in the HELLO message to communicate finer-grained capabilities?

Capabilities are known from the discovery protocol/ENRs already (but we need to define what types of capabilities we need). So I don't think we need it in the HELLO message.

@arnetheduck (Contributor), Mar 26, 2019:

@raulk taking a step back, my initial understanding of the libp2p setup was that you would negotiate capabilities mainly via discovery and connect to clients you know you have common ground with, then merely verify that support by signing up to the various streams here, with each stream being a protocol on its own and libp2p dealing with the multiplexing etc. That has changed now, I see; it looks like there's another layer of protocol negotiation within the stream to discover capabilities. That feels redundant (doing the same work twice) and somewhat limiting: how does a client add a completely new message it wants to test or use in some client-specific scenario (for example, to establish / evaluate its usefulness)? But it seems I need to reread the newer spec.

I'll be honest though and say that I don't fully understand where the varint would go at this point with the various layers, but integers tend to be harder to negotiate than strings, in a decentralized manner - a string, you just pick one and start using it - if it becomes popular, people will avoid it. Numbers.. you need a registry and the associated maintenance.

@raulk (Contributor), Mar 26, 2019:

@arnetheduck my intention isn't to propose any changes here; was just curious to hear the rationale of the Eth2.0 community re: finer-grained protocols vs. coarse-grained protocol with capabilities. We also debate this occasionally in the libp2p community ;-) [libp2p supports both patterns].

Re: semver vs. protocol generations. libp2p does not impose a specific version token (if any). When registering a handler, you can attach a predicate to evaluate the match. So handler X could match N versions, and when receiving the callback, you can inspect the protocol pinned on the stream to infer which abilities you activate, etc.

We've traditionally used semver in libp2p, IPFS, etc., but a few of us are not convinced of its aptness. Semver is good to convey the severity of changes in APIs, but protocol evolution is a different beast.

You generally strive to keep things backwards compatible, yet enable feature upgrades/opt-ins over time that may not be rolling cumulative, e.g. if 1.14.0 and 1.15.0 introduce feature X and Y respectively, how do I convey that a given peer supports Y but not X?

That's where protocol granularity comes into play: potentially make each feature/message/RPC a different protocol, and track "generations/revisions" of those protocols. libp2p supports that design. A few thoughts:

  • We're evolving Multistream to avoid round trips when you're certain the other party supports a protocol (selection vs negotiation), for example, via an ENR.
  • If two messages have timing dependencies (can't send message B until after A) and they are segregated across protocols, it may make state-tracking a bit more complicated.

integers tend to be harder to negotiate than strings, in a decentralized manner - a string, you just pick one and start using it - if it becomes popular, people will avoid it.

I meant replacing semver by protocol generations, e.g. /eth/serenity/rpc/v10. Sorry for not being clear!

@mslipper (Author) commented Mar 18, 2019:

We have a call tomorrow to go over the wire protocol, after which I'll update this PR with the group's decision. Ping me on Gitter if you'd like an invite to the call.

@arnetheduck (Contributor) commented Mar 18, 2019:

Can someone familiar with libp2p draw a diagram of all payload bytes included in a typical packet, all the way from.. transport (tcp), including envelopes, wrappers, multiplexers etc?

latest_finalized_root: bytes32
latest_finalized_epoch: uint64
best_root: bytes32
best_slot: uint64

@arnetheduck (Contributor), Mar 18, 2019:

slots are based on wall time - what's the best_slot field for?

@jannikluhn (Contributor), Mar 18, 2019:

pretty sure this is supposed to refer to the slot of the head block. Maybe rename best_root and best_slot to head_root and head_slot (or to be even more clear head_block_root/slot)?

@nisdas (Contributor), Mar 19, 2019:

I think head_block_root and head_slot would be clearer

)
```

Send a list of block roots and slots to the requesting peer.

@arnetheduck (Contributor), Mar 18, 2019:

which beacon block roots? known heads? all?

)
```

Requests the `block_bodies` associated with the provided `block_roots` from the peer. Responses MUST return `block_roots` in the order provided in the request. If the receiver does not have a particular `block_root`, it must return a zero-value `block_body` (i.e., a `block_body` container with all zero fields).
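A sketch of the responder side of that rule; `store` is a hypothetical root-to-body mapping and `ZERO_BODY` stands in for a `block_body` container with all fields zeroed:

```python
ZERO_BODY = b""  # placeholder for an all-zero-fields block_body

def respond_block_bodies(block_roots, store):
    # Bodies MUST come back in the order the roots were requested,
    # with a zero-value body for any root the responder lacks.
    return [store.get(root, ZERO_BODY) for root in block_roots]

store = {"r1": b"body-1"}
```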

@arnetheduck (Contributor), Mar 18, 2019:

It seems to me that when everything is going smoothly, block bodies consist of very few attestations (they should be combined by then) and a few minor items like transfers etc. Has anyone looked at the numbers to see how much value there is in having separate requests for headers and bodies? Requesting headers then bodies creates additional round trips, which are a cost of their own.

@zscole commented Mar 18, 2019:

For the sake of expediency, we're going to be implementing a few changes in Hobbits to provide an easy and modular mechanism that allows clients to start talking to one another. For the time being, connections will be established via basic TCP on a fixed port (9000 is what we discussed and agreed upon in the call). We'll be changing the Hobbits spec to assume SSZ as the default serialization method, and we're going to convert it from a text protocol to a binary protocol in version 0.2.

**zah** commented Mar 19, 2019:

> I've also brought back the message envelope as discussed during the call. This should match what goes into Hobbits version 0.2. See messaging.md for details.

When libp2p's multistream implementation is upgraded to 2.0 (libp2p/specs#95), each request (a.k.a. a new stream in libp2p's lingo) will be identified with a short numeric identifier. Wouldn't this value be enough to determine the compression and encoding being used and the expected response length? To achieve this in practice, we would just need to define additional protocol identifiers such as `/eth/serenity/beacon/rpc/hello/snappy:ssz`

cc @raulk to confirm that this is the best way to do it.

**mslipper** (Author, Contributor) commented Mar 19, 2019:

> > I've also brought back the message envelope as discussed during the call. This should match what goes into Hobbits version 0.2. See messaging.md for details.
>
> When libp2p's multistream implementation is upgraded to 2.0 (libp2p/specs#95), each request (a.k.a. a new stream in libp2p's lingo) will be identified with a short numeric identifier. Wouldn't this value be enough to determine the compression and encoding being used and the expected response length? To achieve this in practice, we would just need to define additional protocol identifiers such as `/eth/serenity/beacon/rpc/hello/snappy:ssz`
>
> cc @raulk to confirm that this is the best way to do it.

Correct, we don't need this enveloping at all with libp2p. However, some clients don't have libp2p implementations yet and have specifically requested a way to communicate without it in order to test their networking stack. Thus, we need a way of representing compression and serialization that's agnostic of libp2p.

**arnetheduck** (Contributor) commented Mar 19, 2019:

I think this might be a good time to split up this spec:

  • one that focuses on the application-level protocol: how to get and gossip blocks etc. Basically the SSZ-encoded RPC requests and other payloads.
  • one that focuses on mapping eth2 to a transport layer. The working theory is that this transport layer is libp2p, but it could also be a simpler clear-text testnet protocol some have asked for, or even devp2p or whatever.

Then, from the application-level protocol, we have a list of properties our transport layer must have: version and feature negotiation, broadcast, peer-to-peer etc. For each of these, we look at how they map to the underlying transport.

In summary, we'd have several documents (or sections, appendices, etc.):

  • 0-eth2-application.txt - blocks, attestations, etc.
  • 0-eth2-libp2p-mapping.txt - peer id format, gossipsub vs floodsub, minimal supported encryption methods, etc. - basically a description of how eth uses libp2p specifically, starting from something like ethresearch/p2p#4
  • 0-eth2-temporary-mapping.txt - temporary pre-libp2p network for those that feel this is useful.

Some of the coordination issues we see stem from it being pretty early for protocol discussions, especially cross-client ones. Splitting it up will allow us to make progress on the application-level messages without being too distracted by other goings-on.

**dreamcodez** commented Mar 19, 2019:

@mslipper general current critiques:

  1. regarding the lack of defined response codes, it seems that without defined generic response codes each rpc requires a pair of messages to be defined even if it is just to say 'yes that thing was successfully done'

  2. regarding the lack of protocol extensibility in the form of protocol headers (are we so certain that people working our network protocol will never find our data structures wanting?)

  3. regarding lack of separation of lightweight headers from message payload; it is not enough to just know which byte offsets to read -- sometimes it's useful to know metadata about a message without asking the sender to push the whole thing -- it's also useful to make decisions (especially with regard to routing) without unwrapping the payload itself -- this should be in the form of lightweight headers.

@mslipper re: binary protocol 0.2

  1. I would prefer request compression to be one value, and response compressions to be a list (list of compression to be accepted) -- this allows for future experimentation and upgradeability to other compression protocols without breaking shit. (this is of course applicable to just rpc messages)

  2. If we are going to use a single binary byte to represent compression -- please at least specify the most popular 5-10ish codecs' bytecodes.

  3. once again -- headers???

;)

awaits feedback in bunker

**mslipper** (Author, Contributor) commented Mar 19, 2019:

@dreamcodez

> regarding the lack of defined response codes, it seems that without defined generic response codes each rpc requires a pair of messages to be defined even if it is just to say 'yes that thing was successfully done'

Not sure what you mean by this... the 'responses' as defined here contain the result of the procedure call or an error. Are you referring to defined error codes?

> regarding the lack of protocol extensibility in the form of protocol headers (are we so certain that people working our network protocol will never find our data structures wanting?)

What do you mean by headers in this case? If you want an additional 'command' (in Hobbits parlance, anyway), you'd just need to define a new `method_id`. Note that the serialization of that command is handled by the message envelope right now, and by upfront negotiation in the future.

> regarding lack of separation of lightweight headers from message payload; it is not enough to just know which byte offsets to read -- sometimes it's useful to know metadata about a message without asking the sender to push the whole thing -- it's also useful to make decisions (especially with regard to routing) without unwrapping the payload itself -- this should be in the form of lightweight headers.

Can you give an example of the type of metadata you are referring to?

> I would prefer request compression to be one value, and response compressions to be a list (list of compression to be accepted) -- this allows for future experimentation and upgradeability to other compression protocols without breaking shit. (this is of course applicable to just rpc messages)

This would mean the introduction of a handshake process, which defeats the purpose of having a simple intermediary wire format to tide us over until libp2p is ready. You can already change the compression/encoding protocols by changing the nibble values in the message envelope.

> If we are going to use a single binary byte to represent compression -- please at least specify the most popular 5-10ish codecs' bytecodes.

I'd honestly prefer not to do this, since defining them would create an implicit requirement within this spec to support multiple compression algorithms. I'd much rather we decide on one as a group to reduce implementation overhead. This is why `0x0` (i.e., no compression) is the only defined compression nibble at this time.

**mslipper** (Author, Contributor) commented Mar 19, 2019:

> I think this might be a good time to split up this spec:
>
> * one that focuses on the application-level protocol: how to get and gossip blocks etc. Basically the SSZ-encoded RPC requests and other payloads.
>
> * one that focuses on mapping eth2 to a transport layer. The working theory is that this transport layer is libp2p, but it could also be a simpler clear-text testnet protocol some have asked for, or even devp2p or whatever.
>
> Then, from the application-level protocol, we have a list of properties our transport layer must have: version and feature negotiation, broadcast, peer-to-peer etc. For each of these, we look at how they map to the underlying transport.
>
> In summary, we'd have several documents (or sections, appendices, etc.):
>
> * 0-eth2-application.txt - blocks, attestations, etc.
>
> * 0-eth2-libp2p-mapping.txt - peer id format, gossipsub vs floodsub, minimal supported encryption methods, etc. - basically a description of how eth uses libp2p specifically, starting from something like [ethresearch/p2p#4](https://github.com/ethresearch/p2p/issues/4)
>
> * 0-eth2-temporary-mapping.txt - temporary pre-libp2p network for those that feel this is useful.
>
> Some of the coordination issues we see stem from it being pretty early for protocol discussions, especially cross-client ones. Splitting it up will allow us to make progress on the application-level messages without being too distracted by other goings-on.

@arnetheduck I really like this idea... what do you think of doing this separately from this PR? I'm sure that there will be additional feedback generated from those documents.

**FrankSzendzielarz** (Member) commented Mar 19, 2019:

Yes. I think we need flexibility, and having aspects like serialization method separated from message format and semantics will be of more immediate utility than, say, wire performance.

**arnetheduck** (Contributor) commented Mar 19, 2019:

> @arnetheduck I really like this idea... what do you think of doing this separately from this PR? I'm sure that there will be additional feedback generated from those documents.

I think we're pretty close already in terms of how the spec is sectioned (nice work!) - we just need to make the distinction slightly more explicit. It's mostly an administrative matter to facilitate discussion, make a dedicated space for separate non-libp2p discussion, and establish terminology once and for all (what's "wire" and "application" and so on). It can be done at any time, whenever it feels convenient.

Eventually, I also hope it will lead to a more clearly documented interface between eth2 and the underlying overlay (haha).

**dreamcodez** commented Mar 20, 2019:

> @dreamcodez
>
> > regarding the lack of defined response codes, it seems that without defined generic response codes each rpc requires a pair of messages to be defined even if it is just to say 'yes that thing was successfully done'
>
> Not sure what you mean by this... the 'responses' as defined here contain the result of the procedure call or an error. Are you referring to defined error codes?

Omitting generic RPC response codes will require us to produce an explicitly coded 'response' message to be paired with each such request. If message A gets sent via RPC, then it's nice not to have to create a message AResponse just to indicate the response permutations of 'success' and 'error', at the very least.

> > regarding the lack of protocol extensibility in the form of protocol headers (are we so certain that people working our network protocol will never find our data structures wanting?)
>
> What do you mean by headers in this case? If you want an additional 'command' (in Hobbits parlance, anyway), you'd just need to define a new method_id. Note that the serialization of that command is handled by the message envelope right now, and by upfront negotiation in the future.

Not referring to commands -- referring to pieces of metadata which our core protocol could not have possibly foreseen use cases for. For example, proxies and/or routing topologies will need to tag extra metadata to efficiently push decisions and context to the next hop. Rather than having to modify our protocol, they can just adopt new header extensions. Cache invalidation nonces are another use case.

> > regarding lack of separation of lightweight headers from message payload; it is not enough to just know which byte offsets to read -- sometimes it's useful to know metadata about a message without asking the sender to push the whole thing -- it's also useful to make decisions (especially with regard to routing) without unwrapping the payload itself -- this should be in the form of lightweight headers.
>
> Can you give an example of the type of metadata you are referring to?

See above -- I know there are more use cases but I cannot express them at the moment. The main idea is that I should be able to grab lightweight metadata when looking up objects to see if I need the full object (maybe it hasn't changed since the last time I asked).

> > I would prefer request compression to be one value, and response compressions to be a list (list of compression to be accepted) -- this allows for future experimentation and upgradeability to other compression protocols without breaking shit. (this is of course applicable to just rpc messages)
>
> This would mean the introduction of a handshake process, which defeats the purpose of having a simple intermediary wire format to tide us over until libp2p is ready. You can already change the compression/encoding protocols by changing the nibble values in the message envelope.

I disagree -- this does not force anyone to handshake in a way which would create any additional overhead. An RPC requestor peer can simply say 'hey, I'm sending this in snappy, and I can receive responses in X, Y, Z'. If the response compression preference list gives a few choices, this creates a high probability of interoperability, especially if the scheme chosen in the initial request is snappy or something else non-experimental.

> > If we are going to use a single binary byte to represent compression -- please at least specify the most popular 5-10ish codecs' bytecodes.
>
> I'd honestly prefer not to do this, since defining them would create an implicit requirement within this spec to support multiple compression algorithms. I'd much rather we decide on one as a group to reduce implementation overhead. This is why 0x0 (i.e., no compression) is the only defined compression nibble at this time.

Let's say that our request looks like:
`EWP 0.1 snappy gzip,snappy` -- this means that I will be able to work with any endpoint which supports snappy, but I can also accept gzip coming back if it's available as a higher preference. I don't really see any drawbacks to supporting this.

Last but not least, I reserved codes 406 and 407 to indicate 'request compression not supported' and 'response compression not supported' respectively -- this leaves a way in the future for us to get out of the one-compression trap without having to change the protocol. After the first request, the server will respond with the intersection of the most preferred yet supported protocol of the client requestor for the response payload -- the requestor can then use this knowledge to use its most preferred/supported compression on subsequent requests.

For the time being, to avoid this negotiation, the first implementation should just use the request line:

`EWP 0.1 snappy snappy`

OR

`EWP 0.1 none none`

to avoid all negotiation issues for now.
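For illustration, the negotiation proposed above could be sketched as follows (the `EWP` request line and the 406/407 codes are @dreamcodez's proposal, not part of the spec; all names here are hypothetical):

```python
def negotiate(request_line, supported):
    """Parse a proposed 'EWP <version> <req_comp> <resp_comp_prefs>' line
    and pick the first response compression both sides can use.

    Returns (status_code, chosen_response_compression), where 406/407 are
    the proposed 'request/response compression not supported' codes.
    """
    proto, version, req_comp, resp_prefs = request_line.split(" ")
    if req_comp != "none" and req_comp not in supported:
        return 406, None  # cannot decompress the incoming request
    for comp in resp_prefs.split(","):  # preferences in order
        if comp == "none" or comp in supported:
            return 200, comp
    return 407, None  # no mutually supported response compression
```

With this sketch, a server that only supports snappy would answer `EWP 0.1 snappy gzip,snappy` by choosing snappy for the response, skipping the preferred-but-unsupported gzip.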

**jannikluhn** (Contributor) commented Mar 20, 2019:

> I think this might be a good time to split up this spec

I support this. Maybe we can even abstract away the serialization format from the application-level protocol in order to isolate another heavily discussed component.

**hwwhww** added the RFC label Mar 21, 2019


### Alternative for Non-`libp2p` Clients

Since some clients are still waiting for `libp2p` implementations in their respective languages, they MAY listen for raw TCP messages on port `9000`. To distinguish RPC messages from other traffic on that port, a byte prefix of `ETH` (`0x455448`) MUST be prepended to all messages. This option will be removed once `libp2p` is ready in all supported languages.

**atoulme** (Contributor), Mar 21, 2019:

I think we should allow using a separate port. It's entirely possible a client will allow both communication modes.

**atoulme** (Contributor), Mar 21, 2019:

Also, this paragraph should go to the top since it relates to the envelope and port 9000.
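The raw-TCP fallback framing described in this section can be sketched as follows (only the `ETH` magic prefix `0x455448` is fixed by the text; the function names are illustrative):

```python
MAGIC = b"ETH"  # 0x455448, prepended to every message on the raw-TCP fallback

def frame(message: bytes) -> bytes:
    """Prepend the ETH magic prefix to an outgoing message."""
    return MAGIC + message

def unframe(data: bytes) -> bytes:
    """Strip the magic prefix, rejecting anything that lacks it."""
    if not data.startswith(MAGIC):
        raise ValueError("not an ETH-prefixed message")
    return data[len(MAGIC):]
```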


```
(
  sha: bytes32
```

**atoulme** (Contributor), Mar 21, 2019:

I didn't ask when this got drafted, sorry. What is sha here?

**mslipper** (Author, Contributor), Mar 25, 2019:

The commit hash of the node.

**atoulme** (Contributor), Mar 27, 2019:

What is that? Best root?


## Encoding Nibble Values

- `0x1`: SSZ

**atoulme** (Contributor), Mar 21, 2019:

Permission to add 0x2 for BSON please?

**jannikluhn** (Contributor), Mar 23, 2019:

I believe it would be best if we could agree on a single encoding format to maximize compatibility and minimize implementation overhead.

**mslipper** (Author, Contributor), Mar 25, 2019:

Would prefer not, for the same reasons articulated on our call - we should agree together on a single encoding scheme.


## Compression Nibble Values

- `0x0`: no compression
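Assuming the envelope packs the compression nibble and the encoding nibble into a single byte (an assumption for illustration only; the authoritative layout lives in messaging.md), the packing could look like:

```python
def pack_nibbles(compression: int, encoding: int) -> int:
    """Pack two 4-bit values into one byte.

    High/low placement here is an assumption, not the spec; only the
    nibble *values* (e.g. encoding 0x1 = SSZ, compression 0x0 = none)
    come from the lists above.
    """
    assert 0 <= compression < 16 and 0 <= encoding < 16
    return (compression << 4) | encoding

def unpack_nibbles(byte: int):
    """Recover (compression, encoding) from a packed byte."""
    return byte >> 4, byte & 0x0F
```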

**atoulme** (Contributor), Mar 21, 2019:

Permission to add 0x1 for snappy compression please?

**mslipper** (Author, Contributor), Mar 25, 2019:

See above - I don't want to commit teams to an implementation without getting consensus.

)
```

Requests a list of block roots and slots from the peer. The `count` parameter MUST be less than or equal to `32768`. The slots MUST be returned in ascending slot order.
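The two explicit requirements (`count` at most `32768`, ascending slot order) can be checked mechanically; a sketch with hypothetical function names:

```python
MAX_COUNT = 32768  # upper bound on `count` fixed by the spec

def validate_request(start_slot: int, count: int) -> None:
    """Reject a block-roots request whose count exceeds the spec maximum."""
    if count > MAX_COUNT:
        raise ValueError("count MUST be less than or equal to 32768")

def validate_response(roots_and_slots) -> None:
    """Check the one explicit response requirement: ascending slot order.

    Whether slots may be skipped, start later, or fall outside the
    requested range is not pinned down by the text above.
    """
    slots = [slot for _root, slot in roots_and_slots]
    if slots != sorted(slots):
        raise ValueError("slots MUST be returned in ascending slot order")
```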

**paulhauner** (Contributor), Mar 22, 2019:

I have some questions regarding the response:

  • Can you skip a slot (e.g., 1, 3, 4)?
  • Can you return less/more than the requested roots?
  • Can you start at higher slot?
  • Can you return a slot outside of the start_slot + count bounds?

Maybe "The slots MUST be returned in ascending slot order." is succinct already? If this is the case we could add something like "The only requirements for roots are ...".

P.S. this is my first comment, thanks for making this doc!

**FrankSzendzielarz** (Member) commented Mar 23, 2019:

```
(
  id: uint64
  response_code: uint16
  result: bytes
)
```

**atoulme** (Contributor), Mar 26, 2019:

Can we just use one structure? You can tell if it's a response by comparing request IDs.
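For illustration, correlating responses to outstanding requests by `id` could look like this (the pending-request table, the assumption that `response_code == 0` means success, and all names here are hypothetical, not spec):

```python
class RpcSession:
    """Illustrative request/response correlation over one duplex stream."""

    def __init__(self):
        self._next_id = 0
        self._pending = {}  # request id -> method_id awaiting a response

    def send_request(self, method_id: int, body: bytes) -> dict:
        """Build a request envelope with a fresh id and remember it."""
        self._next_id += 1
        self._pending[self._next_id] = method_id
        return {"id": self._next_id, "method_id": method_id, "body": body}

    def on_response(self, msg: dict) -> bytes:
        """Match a response envelope to its request by id."""
        if msg["id"] not in self._pending:
            raise KeyError("response does not match any outstanding request")
        self._pending.pop(msg["id"])
        if msg["response_code"] != 0:  # assumption: 0 means success
            raise RuntimeError("RPC error code %d" % msg["response_code"])
        return msg["result"]
```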


### Beacon Block Bodies

**Method ID:** `12`

**atoulme** (Contributor), Mar 26, 2019:

Just like in LES, I would use different method ids for requests and responses. So it's possible for me to send you proactively blocks and headers using RPC, and you don't need to know about it in advance.

**raulk** (Contributor) commented Mar 26, 2019:

@zah

> When libp2p's multistream implementation is upgraded to 2.0 (libp2p/specs#95), each request (a.k.a. a new stream in libp2p's lingo) will be identified with a short numeric identifier. Wouldn't this value be enough to determine the compression and encoding being used and expected response length? To achieve this in practice, we would just need to define additional protocol identifiers such as `/eth/serenity/beacon/rpc/hello/snappy:ssz`
>
> cc @raulk to confirm that this is the best way to do it.

You don't need to wait for multistream 2.0. When you open a new stream, multistream 1.0 is in charge of associating that stream with a protocol. To be clear:

  1. When opening a stream, you tell libp2p which protocol P you want it to be associated with via the API.
  2. libp2p opens a stream with the underlying multiplexer (yamux, mplex, spdystream, quic, etc.). At this point, the stream is just a number.
  3. multistream 1.0 kicks in and selects protocol P.
  4. libp2p returns the stream, contextualised for that protocol P.
**arnetheduck** (Contributor) commented Mar 26, 2019:

Generally, I see we've gained a lot of different uintXX sizes, while eth2 has opted to go with uint64 (almost?) exclusively - network/chain/method id, etc. The motivation seems to be to save a few bytes here and there, but these look like one-offs in requests and the like, nothing that will make an actual difference.

Considering payload sizes, and taking into account the departure from the rest of the eth2 spec, how do we feel about sticking with uint64 unless there's a strong (i.e., motivated by compelling numbers) reason not to?

```
chain_id: uint64
latest_finalized_root: bytes32
latest_finalized_epoch: uint64
best_root: bytes32
```

**arnetheduck** (Contributor), Mar 28, 2019:

Taking into account the backwards sync suggested elsewhere, and that we can use attestations as a (strong) heuristic that a block is valid and useful, it seems prudent to include (some) attestations here - instead of simply supplying some data like best_root that cannot be trusted anyway, a recent attestation would help the connecting client both with head / fork selection and to know with a higher degree of certainty that the root sent "makes sense" and should be downloaded.

The details of this are TBD - but probably we're looking at something like attestations: [Attestation] where it's up to the client to choose a representative and recent set (or none, which is also fine, because then one can listen to broadcasts).

**djrtwo** (Contributor) approved these changes Mar 28, 2019:

I'm merging this. We can take the conversation to issues tagged "network" and to subsequent PRs.

**djrtwo** merged commit bae727a into ethereum:dev Mar 28, 2019
