Stream standards #31
Clients will likely have to support filtering/routing if they follow either of these two approaches. For instance, eth_newFilter provides that ability, and the new API should preserve it, I guess. Streaming more or less turns into a pub/sub pattern in our case. There are basically not many types of objects that users would be able to subscribe to:
IMO, handling them all by a single endpoint as suggested by @mcdee is more convenient. There could be an issue with muxing the data coming from different streams in parallel in the user's code. Having a single stream would get rid of this potential issue.
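To illustrate the muxing issue mentioned above: with per-resource streams, the client has to merge several concurrent streams itself. A minimal sketch using asyncio queues as stand-ins for the streams; the stream names and payloads here are hypothetical, not part of any proposed API.

```python
import asyncio

async def mux(streams):
    """Merge events from several per-resource streams into one queue.

    Sketches the client-side muxing a multi-stream design would require.
    Each source stream is modelled as an asyncio.Queue for illustration.
    """
    merged = asyncio.Queue()

    async def pump(name, queue):
        # Forward every event, tagged with its stream name, into the
        # single merged queue the application actually consumes.
        while True:
            merged.put_nowait((name, await queue.get()))

    for name, queue in streams.items():
        asyncio.create_task(pump(name, queue))
    return merged

async def demo():
    blocks, exits = asyncio.Queue(), asyncio.Queue()
    merged = await mux({"new_block": blocks, "voluntary_exit": exits})
    blocks.put_nowait("0xffffff")
    exits.put_nowait("exit-42")
    # Two events from different sources arrive on one merged stream.
    return {await merged.get() for _ in range(2)}

print(asyncio.run(demo()))
```

A single-stream API moves exactly this merging logic from every client into the server, which is the convenience being argued for.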
I would have thought it'd be a lot easier on the server side to have clients subscribing to specific services, that way you can gather metrics etc for what people are / aren't using, and only give events on specific topics to those that need them, not broadcast to every client.
Streaming of data could be considered easier for clients, but not so much for servers. The problem with specific streams for each item is that there would be a lot of them, and they each require some logic. Taking an example where 10 deposits have been made and the user wants to be notified when they are added to state, there would need to be:
All possible, of course, but a fair amount of effort, and duplicated nearly-but-not-quite-the-same code (the logic for a validator filter isn't the same as for a block filter, for example).

If instead we had a streaming endpoint for server events, the user would listen to that endpoint for "head updated" messages, check whether the head is the start of a new epoch, and pull its validator information from the REST endpoint (either individually or en masse, depending on where we end up with the validator info endpoint). This requires less code on the server (only one streaming endpoint) and potentially less sophistication from the client (a single stream to listen to, less state needed in case of a restart, no need to update filters on the fly, etc.).

Streaming full data also stops us using some features of the HTTP infrastructure that would otherwise be very useful, such as caching. If 1,000 clients were all streaming the latest block, the server would need to send 1,000 copies of each new block as it arrived. With the event-stream method, all that would be sent is the block root, reducing load. If the server had a cache in front of it, it would serve the first client request for the block and the other 999 would be served by the cache.

I started off thinking that streaming everything as a first-class citizen was important, but I'm now more inclined to believe that a model that streams only the minimum required (indicators such as "block received", transactions such as "voluntary exit" and "attestation", along with relevant data), with a single filter so that clients only receive the events they want to hear about, will provide both better functionality for the client and less work for the server (and devs).
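The client pattern described above (listen for head updates, fetch validator info only at epoch boundaries) can be sketched as a small filter. The slot-per-epoch constant is the mainnet value; the event shape and the follow-up REST call are assumptions for illustration, not a defined API.

```python
SLOTS_PER_EPOCH = 32  # mainnet constant

def epoch_boundary_updates(head_slots):
    """Given a stream of 'head updated' slot numbers, yield the epochs
    whose start was just reached. For each yielded epoch the client
    would pull validator info via the REST API (call omitted here)."""
    for slot in head_slots:
        if slot % SLOTS_PER_EPOCH == 0:
            yield slot // SLOTS_PER_EPOCH

# Heads at slots 31, 32, 33, 64: only 32 and 64 start an epoch.
print(list(epoch_boundary_updates([31, 32, 33, 64])))  # [1, 2]
```

The point of the sketch: the client keeps almost no state, and all the heavy data moves over ordinary, cacheable REST requests.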
It is desirable in many contexts to support streaming as a first-class citizen in this API. Streaming/event-type functionality has already proven useful in eth1, and is currently being used by consumers of both the Prysm API and Lighthouse websockets.
[Actual protocol aside] The two methods discussed for supporting this in the API are specific stream endpoints or a single event stream.
- `/stream` versions of a resource endpoint -- e.g. `/resources/stream`
- A single event stream endpoint, where the client receives an event such as `EVENT new_block 0xfffffff` and then calls `/beacon/chain/block/0xffffff` to retrieve the alerted-about block.
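The second option's event-then-fetch flow can be sketched as a tiny parser that turns one line of the hypothetical event stream into the follow-up REST request. The `EVENT new_block <root>` line format and the block URL follow the example above; everything else is an assumption for illustration.

```python
def handle_event_line(line):
    """Map one event-stream line, e.g. 'EVENT new_block 0xffffff',
    to the REST path the client would fetch next.

    Returns None for event kinds this sketch doesn't handle."""
    tag, kind, payload = line.split()
    if tag != "EVENT":
        raise ValueError("not an event line: " + line)
    if kind == "new_block":
        # The stream carries only the root; the full block is pulled
        # (cacheably) from the REST API.
        return "/beacon/chain/block/" + payload
    return None

print(handle_event_line("EVENT new_block 0xffffff"))
# /beacon/chain/block/0xffffff
```

Because every client asks the REST API for the same path, a cache in front of the server can absorb all but the first fetch, which is the load-reduction argument made earlier in the thread.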