
rpc: Further improvements #582

Closed
4 of 5 tasks
thanethomson opened this issue Sep 24, 2020 · 6 comments
@thanethomson
Member

thanethomson commented Sep 24, 2020

Building on #568, I'd like to recommend we implement the following improvements over time (this is a placeholder issue to track what still needs improvement, as far as I can currently see):

  • A structured query interface, as per the ADR.
  • Do an audit of the types employed by the event and query mechanisms to see if they need updating as per our recent domain types work.
  • Implement support for more event types than just NewBlock and Tx.
  • Implement the full Client trait for the WebSocketClient.
  • Resolve the question around async vs sync I/O and, if sync is the way forward with async wrappers, submit a new ADR to outline how to go about migrating from async to sync (see De-asyncify RPC client & requester? #318).
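On the structured query point, a minimal sketch of what a builder could look like, assuming Tendermint's string query syntax (e.g. `tm.event = 'Tx' AND tx.height > 5`). The `Query`/`Condition` names here are hypothetical illustrations, not the ADR's actual API:

```rust
// Hypothetical structured query builder (illustrative only): conditions
// compose with AND and render to Tendermint's string query syntax.

enum Condition {
    Eq(String, String), // key = 'value'
    Gt(String, i64),    // key > number
}

struct Query {
    conditions: Vec<Condition>,
}

impl Query {
    // Start from the event type, since every subscription query has one.
    fn event(ty: &str) -> Self {
        Query {
            conditions: vec![Condition::Eq("tm.event".into(), ty.into())],
        }
    }

    fn and_eq(mut self, key: &str, value: &str) -> Self {
        self.conditions.push(Condition::Eq(key.into(), value.into()));
        self
    }

    fn and_gt(mut self, key: &str, value: i64) -> Self {
        self.conditions.push(Condition::Gt(key.into(), value));
        self
    }

    // Render to the wire format the RPC subscribe endpoint expects.
    fn to_query_string(&self) -> String {
        self.conditions
            .iter()
            .map(|c| match c {
                Condition::Eq(k, v) => format!("{} = '{}'", k, v),
                Condition::Gt(k, v) => format!("{} > {}", k, v),
            })
            .collect::<Vec<_>>()
            .join(" AND ")
    }
}

fn main() {
    let q = Query::event("Tx")
        .and_eq("tx.hash", "XYZ")
        .and_gt("tx.height", 5);
    println!("{}", q.to_query_string());
    // tm.event = 'Tx' AND tx.hash = 'XYZ' AND tx.height > 5
}
```

The upside of a typed builder over raw strings is that malformed queries become unrepresentable at compile time rather than failing at the server.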
@tarcieri
Contributor

> Resolve the question around async vs sync I/O and, if sync is the way forward with async wrappers

I thought this was only for ABCI?

Much of the RPC client is factored in such a way that supporting both sync and async shouldn't be that difficult, so I think it'd be weird to write an async wrapper for a sync client.

As it stands, the KMS transaction signer already makes pretty extensive use of async RPC...

@thanethomson
Member Author

thanethomson commented Sep 24, 2020

> I thought this was only for ABCI?

My understanding, given conversations with @ebuchman recently and issues like #318, is that we need to eventually settle on a broad approach to I/O that facilitates both async and sync access (beyond just ABCI and the RPC). If we eventually want to facilitate both, it seems like the easiest way right now to do so is to write mostly synchronous code and wrap it in async code. Not sure how that plays out with the use of event-driven networking though.

The tickbox in this issue, however, relates purely to discussing and making architectural decisions along these lines.

@tarcieri
Contributor

tarcieri commented Sep 24, 2020

I left more or less the same comment about async RPC on #318. See this one in particular:

#318 (comment)

> If we eventually want to facilitate both, it seems like the easiest way right now to do so is to write mostly synchronous code and wrap it in async code. Not sure how that plays out with the use of event-driven networking though.

That makes sense for ABCI, where you can expose a synchronous API running across 3 threads, and then wrap each thread in channels. That's particularly advantageous given there are a number of different channel types to choose from depending on needs, and multiple ways to wrap those channel-based APIs (tokio, tower, async-std).
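A sketch of that channel-wrapping pattern (illustrative only, not the real ABCI server; `handle` stands in for one synchronous connection handler):

```rust
// Illustrative sketch: a purely synchronous handler runs on its own
// thread, and callers talk to it through request/reply channels. An
// async runtime (tokio, async-std) could wrap these same channels.
use std::sync::mpsc;
use std::thread;

// Stand-in for a synchronous ABCI connection handler.
fn handle(req: String) -> String {
    format!("echo: {}", req)
}

fn main() {
    // Each request carries its own reply channel back to the caller.
    let (req_tx, req_rx) = mpsc::channel::<(String, mpsc::Sender<String>)>();

    // Dedicated worker thread: receives requests, runs the sync
    // handler, and sends each response back on its reply channel.
    let worker = thread::spawn(move || {
        for (req, reply_tx) in req_rx {
            let _ = reply_tx.send(handle(req));
        }
    });

    // A caller issues a request and blocks on the reply.
    let (reply_tx, reply_rx) = mpsc::channel();
    req_tx.send(("deliver_tx".to_string(), reply_tx)).unwrap();
    println!("{}", reply_rx.recv().unwrap()); // echo: deliver_tx

    drop(req_tx); // close the channel so the worker loop ends
    worker.join().unwrap();
}
```

Because the handler itself never touches the channels, the same synchronous core can sit behind a blocking facade (as above) or behind async channel wrappers without modification.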

I don't think doing something like that for the RPC client makes sense. All of the high-quality HTTP clients (namely hyper) are async these days. reqwest goes the other way around: it spawns an async runtime internally for making requests and exposes a synchronous API on top of it, so you get both sync and async access.

If you want to support both, reqwest gives you an async client with a sync facade mostly for "free" (but still requires an async runtime). Otherwise I think it'd be easier/better to just maintain both a natively async and sync client independently side-by-side, especially since it's not really that much code.

@thanethomson
Member Author

> All of the high-quality HTTP clients (namely hyper) are async these days.

Totally agreed here.

> Otherwise I think it'd be easier/better to just maintain both a natively async and sync client independently side-by-side, especially since it's not really that much code.

The WebSocket client is significantly more code, is intended to grow in functionality, and is deeply async. It may be useful at some point to do a spike where we build a non-async WebSocket client (with tungstenite and crossbeam, for example) to compare it with the async version.

The core of what we're concerned about here is probably maintaining clear separation between "business logic" and the networking stack. It'd be great if we could somehow build common business logic throughout tendermint.rs that makes no prescriptions on whether to use async or not, and that can be plugged into async or non-async transport layers (which will need to be implemented separately, as you say).

@tarcieri
Contributor

tarcieri commented Sep 24, 2020

> The core of what we're concerned about here is probably maintaining clear separation between "business logic" and the networking stack.

For the RPC client, much of that is already provided by rpc::Request and rpc::Response. That's what allows the current async hyper-based transport to be <100 LOC.
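A hedged sketch of that separation (the `Request` trait and `perform` function below are illustrative stand-ins, not the actual `rpc::Request`/`rpc::Response` types): requests know how to serialize themselves and parse their matching response, so a transport only has to move strings and stays tiny.

```rust
// Illustrative stand-ins for the rpc::Request/rpc::Response split:
// the request owns its wire encoding and response decoding, so the
// transport is reduced to "bytes in, bytes out".

trait Request {
    type Response;
    fn body(&self) -> String; // wire encoding (JSON-RPC in the real client)
    fn parse(&self, raw: &str) -> Self::Response; // decode the reply
}

struct StatusRequest;

impl Request for StatusRequest {
    type Response = String;
    fn body(&self) -> String {
        r#"{"method":"status"}"#.to_string()
    }
    fn parse(&self, raw: &str) -> String {
        raw.to_string()
    }
}

// A "transport" is anything that maps request text to response text;
// sync, async, HTTP, or WebSocket variants all fit this shape.
fn perform<R: Request>(req: R, transport: impl Fn(&str) -> String) -> R::Response {
    let raw = transport(&req.body());
    req.parse(&raw)
}

fn main() {
    // In-memory mock transport standing in for hyper or a WebSocket.
    let mock = |body: &str| format!("received {}", body);
    let resp = perform(StatusRequest, mock);
    println!("{}", resp);
}
```

With this shape, swapping the mock for a hyper-based transport touches only the closure passed to `perform`, which is why the real hyper transport can stay under 100 LOC.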

@thanethomson
Member Author

I think the only thing left on this issue at this point is to implement support for additional event types beyond NewBlock and Tx. I'd say it's safe to close this issue; as people need support for additional event types, we can open separate issues for those.

Additionally, we're keeping the RPC fully async at present and the WebSocketClient is now full-featured.
