Implement JSON-RPC batching #27
base: master
Conversation
Could have avoided the
Had a skim of it. I was thinking of maybe having a higher-order function with a match pattern for each RPC call: if batched, it returns enum::Batch(BatchContent); if single, it returns enum::SingleCall(Type), and in the enum::SingleCall case it calls send directly before returning from the higher-order function. So the higher-order function would have a second enum as its return type.
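The dispatching idea above could look something like the following sketch. All of the names here (`Client`, `Call`, `RpcRequest`, the mock `send`) are illustrative assumptions, not this crate's actual API; the point is only the single entry point that matches on whether a batch is active.

```rust
// Illustrative sketch of the "higher function returning an enum" idea.
// None of these names come from the crate; the transport is mocked.

#[derive(Debug, Clone)]
struct RpcRequest {
    method: String,
}

#[derive(Debug, PartialEq)]
enum Call {
    /// The request was sent right away; holds the (mock) response.
    Sent(String),
    /// The request was queued into the current batch.
    Batched,
}

struct Client {
    batch: Option<Vec<RpcRequest>>, // Some(_) while a batch is being built
}

impl Client {
    fn new() -> Self {
        Client { batch: None }
    }

    fn start_batch(&mut self) {
        self.batch = Some(Vec::new());
    }

    /// The single entry point: every RPC call goes through this match.
    fn call(&mut self, method: &str) -> Call {
        let req = RpcRequest { method: method.to_owned() };
        match self.batch {
            Some(ref mut queue) => {
                queue.push(req);
                Call::Batched
            }
            // Not batching: send directly before returning.
            None => Call::Sent(send(&req)),
        }
    }

    /// Flush the accumulated batch as one request (mocked here).
    fn send_batch(&mut self) -> Vec<String> {
        let queue = self.batch.take().unwrap_or_default();
        queue.iter().map(send).collect()
    }
}

// Stand-in for the actual HTTP/JSON-RPC transport.
fn send(req: &RpcRequest) -> String {
    format!("response to {}", req.method)
}

fn main() {
    let mut client = Client::new();
    assert_eq!(
        client.call("getblockcount"),
        Call::Sent("response to getblockcount".to_string())
    );

    client.start_batch();
    assert_eq!(client.call("getblockhash"), Call::Batched);
    assert_eq!(client.call("getblock"), Call::Batched);
    assert_eq!(client.send_batch().len(), 2);
    println!("ok");
}
```

The caller still has to match on the returned enum to get at the response, which is the ergonomic cost of avoiding the duplicated API.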
Ok, I didn't think about it as much as you, but the duplicated API seems hard to maintain, and IMO you're not going to be able to use a hygienic macro there.
I don't really understand what you mean there. There is one way I can see to keep a single API, and it might be similar to what you're saying. Personally I don't think the double API is a very big deal. We just need to remind ourselves that whenever a change is made to one side, we need the same change on the other side.
Yes, returning an intermediate object was kind of what I meant! (Sorry, I didn't get your answer notification email.)
I might give the object approach without the double API a shot. It will break the existing API, though.
The batching API would be much easier to express in terms of async/await. Both the non-batching and batching versions would just return futures, which would either resolve immediately, or when the batch is sent.

I also don't mind the duplication of the API. And the unification with

Do we really need a batching API anyway? :D Is anyone using it who can explain when it is that much of an improvement over just multi-threading? With async/await and the underlying HTTP doing things like pipelining (hmmm... does Bitcoin Core support it?), the latency would get taken care of even on a single connection.
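The async unification described above can be sketched without any runtime: both call styles return a future, where the plain one resolves on its first poll and the batched one resolves once the batch has been flushed. Everything here is a hypothetical illustration (`Client`, `BatchedCall`, `flush`, the tiny `block_on` executor), not the crate's API.

```rust
// Sketch: both plain and batched calls return futures. Illustrative only.

use std::cell::RefCell;
use std::future::Future;
use std::pin::Pin;
use std::rc::Rc;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// A future that resolves once the batch it belongs to has been flushed.
struct BatchedCall {
    slot: Rc<RefCell<Option<u64>>>,
}

impl Future for BatchedCall {
    type Output = u64;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u64> {
        match self.slot.borrow_mut().take() {
            Some(v) => Poll::Ready(v), // batch was sent; value is available
            None => Poll::Pending,     // still waiting for the flush
        }
    }
}

struct Client {
    pending: Vec<Rc<RefCell<Option<u64>>>>,
}

impl Client {
    fn new() -> Self {
        Client { pending: Vec::new() }
    }

    /// Non-batched call: the returned future resolves immediately.
    async fn get_block_count(&self) -> u64 {
        700_000 // stand-in for a real RPC round trip
    }

    /// Batched call: returns a future tied to the pending batch.
    fn get_block_count_batched(&mut self) -> BatchedCall {
        let slot = Rc::new(RefCell::new(None));
        self.pending.push(slot.clone());
        BatchedCall { slot }
    }

    /// "Sending" the batch fulfils every pending future (mocked here).
    fn flush(&mut self) {
        for slot in self.pending.drain(..) {
            *slot.borrow_mut() = Some(700_000);
        }
    }
}

// Minimal busy-polling executor so the demo needs no async runtime.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = fut;
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut pinned = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = pinned.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    let mut client = Client::new();
    assert_eq!(block_on(client.get_block_count()), 700_000);

    let fut = client.get_block_count_batched();
    client.flush(); // batch sent: the future can now resolve
    assert_eq!(block_on(fut), 700_000);
    println!("ok");
}
```

The caller-facing code is identical in shape for both styles (`.await` a future); only when the future resolves differs.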
I'm not sure if I should do something. Please re-request review if I should.
No, I paused this for a bit since I was changing the regular interface a lot and I was unsure about how to do this. I'm still thinking of unifying the interfaces somehow.
What's the status here?
It seems that this crate is going to need a bit of an overhaul when we want to support async requests. So I'm thinking that we should come up with a design in which each "call" can be done either sync, batched, or async. Either in some way where a request method returns some object that you can then call

This PR basically takes the code-duplication approach, but doing that a third time would be quite tedious, even though it'd be possible if it gives us the most ergonomic API.
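The "request method returns an object" design mentioned above could be sketched as follows. The names (`Request`, `send`, `batch`) and the mocked transport are assumptions for illustration only; the async mode is omitted, but the same `Request` value could equally be turned into a future.

```rust
// Hypothetical "call object" design: every RPC method only constructs a
// Request value, and the caller decides how to execute it. Names are
// illustrative, not the crate's API.

struct Request {
    method: &'static str,
}

impl Request {
    /// Execute immediately and return the (mock) response.
    fn send(self) -> String {
        format!("response to {}", self.method)
    }

    /// Queue into a batch instead of sending right away.
    fn batch(self, batch: &mut Vec<Request>) {
        batch.push(self);
    }
}

struct Client;

impl Client {
    // Each RPC method builds the request once; no per-mode duplication.
    fn get_block_count(&self) -> Request {
        Request { method: "getblockcount" }
    }

    fn get_best_block_hash(&self) -> Request {
        Request { method: "getbestblockhash" }
    }
}

fn main() {
    let client = Client;

    // Sync: call .send() right away.
    assert_eq!(client.get_block_count().send(), "response to getblockcount");

    // Batched: queue several requests, then execute them together.
    let mut batch = Vec::new();
    client.get_block_count().batch(&mut batch);
    client.get_best_block_hash().batch(&mut batch);
    let responses: Vec<String> = batch.into_iter().map(Request::send).collect();
    assert_eq!(responses.len(), 2);
    println!("ok");
}
```

Under this design the method list exists only once, and sync, batched, and (eventually) async execution become different consumers of the same `Request` type.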
I'm still curious to hear from anyone using batching, and what benefits, and of what magnitude, it yields.
Speaking as the developer of Sparrow Wallet (which now uses this project via bwt), I have some experience with JSON-RPC batching over the Electrum server protocol. ElectrumX implements JSON-RPC batching, but can run without it, so I did some tests on a small wallet this morning: With batching enabled, I get an average of 400ms (+/- 10%) over a number of full wallet refreshes.
So I think the advantage is not only much better performance, but also much more consistent performance.
@craigraw this is a comparison of batching vs. no batching, and not batching vs. a thread pool, right? Do you have any links to the code, to look at exactly what was being exercised?
Correct: batching vs. no batching. I'm not sure if it's helpful, but here are the two implementations of the Electrum RPC client code for Sparrow. However, it's easiest to see in action with debug logging showing the flurry of very similar requests without batching, versus the single request with batching.
I'm gonna do some tests with this on an actual node tomorrow.