
Feature guard the networking stuff #289

Closed
yihuang opened this issue Jun 1, 2020 · 18 comments · Fixed by #338
Labels
rpc structure High level repo-wide structural concerns

Comments

@yihuang (Contributor) commented Jun 1, 2020

  • Since we already implemented the networking stuff ourselves, it would be good for us to cut some dependencies.
  • We could even try to make it no_std after separating out the networking features.
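A minimal sketch of what such feature-guarding could look like in Cargo.toml (crate layout, feature names, and versions here are hypothetical, not the repo's actual manifest): networking dependencies become optional and are only pulled in when a `client` feature is enabled, leaving the core data types available without them.

```toml
# Hypothetical Cargo.toml sketch: networking deps are optional and only
# compiled when the "client" feature is enabled.
[features]
default = ["client"]
client = ["hyper", "tokio"]   # networking stack; disabled for lean/no_std-leaning builds

[dependencies]
serde = { version = "1", default-features = false, features = ["derive"] }
hyper = { version = "0.13", optional = true }
tokio = { version = "0.2", optional = true }
```

A downstream user who only needs the data types would then depend on the crate with `default-features = false`.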
@ebuchman (Member) commented Jun 2, 2020

Are you just talking about the rpc client here (specifically the hyper dep)? I think we should consider refactoring the crate structure so that rpc would be its own crate in this repo, separate from the tendermint crate, which would contain the core data types; i.e. see #7 (comment). Would that solve this?

@ebuchman (Member) commented Jun 2, 2020

Also, does this mean you've implemented your own tendermint rpc client in Rust? Can you provide a link?

@ebuchman ebuchman added the structure High level repo-wide structural concerns label Jun 3, 2020
@yihuang (Contributor, Author) commented Jun 3, 2020

Also does this mean you've implemented your own tendermint rpc client in rust? Can you provide a link?

We multiplex all rpc requests over the websocket endpoint, so we use async-tungstenite rather than hyper.
Current version.
A simpler but not-yet-merged version.

@yihuang (Contributor, Author) commented Jun 3, 2020

Are you just talking about the rpc client here (specifically the hyper dep)? I think we should consider refactoring the crate structure so that rpc would be its own crate in this repo, separate from the tendermint crate, which would contain the core data types; i.e. see #7 (comment). Would that solve this?

Actually, we are using the rpc data types, but not the networking part.
Our issue at hand is the need to bump the hyper version to remove the dependency on the deprecated crate "net2".

@liamsi (Member) commented Jun 3, 2020

we use async-tungstenite

This might be a bit off-topic, but is there any public record on why you chose async-tungstenite over e.g. rust-websocket, or even a full-blown jsonrpc client library like parity's (which uses websocket-rs if I'm not mistaken)?

We also use async-tungstenite for the (websocket part of) events subscription (#279) support, but I'm wondering if it's worth our time caring about serialization of the encapsulating jsonrpc messages, even if all of this is easy (or other details like the ping/pong control messages). We had, for instance, longish discussions on the jsonrpc ID field (e.g. here and here and here and offline).

If there are other reasons besides async-tungstenite being really lightweight, I'm curious to learn more about them.

@liamsi (Member) commented Jun 3, 2020

Our issue at hand is need to bump hyper version to remove the dependency on the deprecated crate "net2".

Seems like hyper was updated to use socket2 instead of net2: hyperium/hyper#2206

I think we don't need to do anything here, as this should be picked up automatically. We only pin it to 0.13 here:

hyper = "0.13"

and the latest hyper release, 0.13.6, includes the switch to socket2 as far as I can see.

@liamsi (Member) commented Jun 3, 2020

Oh I see, tokio still uses mio and this still uses net2: tokio-rs/tokio#1190

@yihuang (Contributor, Author) commented Jun 3, 2020

Oh I see, tokio still uses mio and this still uses net2: tokio-rs/tokio#1190

Ah, yeah, I guess this is the real issue; hyper is fine.

@yihuang (Contributor, Author) commented Jun 3, 2020

we use async-tungstenite

This might be a bit off-topic, but is there any public record on why you chose async-tungstenite over e.g. rust-websocket, or even a full-blown jsonrpc client library like parity's (which uses websocket-rs if I'm not mistaken)?

We also use async-tungstenite for the (websocket part of) events subscription (#279) support, but I'm wondering if it's worth our time caring about serialization of the encapsulating jsonrpc messages, even if all of this is easy (or other details like the ping/pong control messages). We had, for instance, longish discussions on the jsonrpc ID field (e.g. here and here and here and offline).

If there are other reasons besides async-tungstenite being really lightweight, I'm curious to learn more about them.

I guess lightweight and executor-agnostic (it works with both tokio and smol).
Regarding the id, we always convert it to a string when sending the request. I think the server is supposed to return the id exactly as passed in, so as long as you always pass in a string id, you only ever need to parse string ids.
I guess the jsonrpc details are indeed not directly related to tendermint itself and deserve to live in another crate. And there are different runtime options here: http vs websocket, sync vs async, different async executors (tokio, async-std, smol). Not easy to make everyone happy.
But the request/response data types are tendermint related.
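The string-id convention described above can be sketched with the standard library alone (the helper name here is illustrative, not the crate's actual API): always serialize the JSON-RPC "id" as a JSON string, so responses only ever need string-id parsing.

```rust
// Std-only sketch of the string-id convention: the client always sends the
// JSON-RPC "id" as a quoted JSON string, so responses only ever need
// string-id parsing. `make_request` is a hypothetical helper, not
// tendermint-rs API.
fn make_request(method: &str, id: u64) -> String {
    // Embed the numeric id as a quoted JSON string.
    format!(
        r#"{{"jsonrpc":"2.0","id":"{}","method":"{}","params":null}}"#,
        id, method
    )
}

fn main() {
    let req = make_request("status", 1);
    // The id is serialized as the string "1", never the bare number 1.
    assert!(req.contains(r#""id":"1""#));
    assert!(!req.contains(r#""id":1,"#));
}
```

Since the server echoes the id back unchanged, the response parser then only has to handle the string case.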

@liamsi (Member) commented Jun 3, 2020

But the request/response data types are tendermint related.

Yeah, absolutely. I wish we could only focus on these and that's about it.

@romac romac changed the title feature guard the networking stuff Feature guard the networking stuff Jun 10, 2020
@romac romac added the rpc label Jun 10, 2020
@liamsi (Member) commented Jun 17, 2020

@yihuang re-open if there is anything that is not sufficiently addressed by #338

@yihuang (Contributor, Author) commented Jun 18, 2020

@yihuang re-open if there is anything that is not sufficiently addressed by #338

It would be better if we could have the endpoint module in the tendermint crate ;D. Those types are more re-usable than the jsonrpc protocol/networking stuff.

@xla (Contributor) commented Jun 18, 2020

@yihuang From the conversation it wasn't clear that you depend on any of the rpc code. Could you explain which parts you use, and in what way, so we get a better understanding of how to separate the code paths?

@xla xla mentioned this issue Jun 18, 2020
@yihuang (Contributor, Author) commented Jun 18, 2020

@yihuang From the conversation it wasn't clear that you depend on any of the rpc code. Could you explain which parts you use, and in what way, so we get a better understanding of how to separate the code paths?

We used the data types in rpc::endpoint::* and the JSON serializer/deserializer, but not the HTTP client or the jsonrpc request/response wrappers.

@liamsi (Member) commented Jun 18, 2020

@yihuang we are still figuring out what the best overall structure should be for this repository and this feedback is really valuable to us in understanding how devs (plan to) use this library.

We should consider breaking out the "core" types (those that live in "endpoint" in Rust) into a separate crate, or even moving them back to the tendermint crate.

It sounds like developers should be able to use these types (Go version) without caring about the networking layer that provides an actual rpc endpoint:
https://github.com/tendermint/tendermint/tree/master/rpc/core/types

@xla (Contributor) commented Jun 18, 2020

We should consider breaking out the "core" types (those that live in "endpoint" in rust) into a separate crate or even moving them back to the tendermint crate.

Alternatively, we introduce fine-grained feature flags in the rpc crate which enable the client, event_listener, and such on demand. I'd assume the endpoints rightfully live in the rpc crate, and spreading the types over both crates would cause more confusion and dependency hell, unless I'm missing something.
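One possible shape for such fine-grained flags, as a hedged sketch (the feature and dependency names are hypothetical, not the crate's actual manifest): the endpoint data types always compile, while each networking piece opts in via its own feature.

```toml
# Hypothetical feature layout for the rpc crate: endpoint types always
# compile; each networking component is gated behind its own feature.
[features]
default = ["client", "event_listener"]
client = ["hyper"]                       # HTTP rpc client
event_listener = ["async-tungstenite"]   # websocket event subscriptions

[dependencies]
hyper = { version = "0.13", optional = true }
async-tungstenite = { version = "0.5", optional = true }
```

A consumer that only needs the endpoint types would then depend on the crate with `default-features = false`.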

@xla (Contributor) commented Jun 18, 2020

Let's move this discussion to #337 and #339, @liamsi @yihuang

@liamsi (Member) commented Jun 18, 2020

Alternatively we introduce fine-grained feature flags in the rpc crate which on-demand enable the client, event_listener and such.

Sounds like the best path to move forward.
