
Implement JSON-RPC batching #27

Open · wants to merge 3 commits into master

Conversation

@stevenroose (Collaborator)

I'm gonna do some tests with this on an actual node tomorrow.

stevenroose requested a review from dpc on March 21, 2019 at 22:35
@stevenroose (Collaborator, Author)

Could have avoided the Arc<jsonrpc::client::Client> in Batch by having Batch::execute also take a client. Might be nicer.
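
For illustration, a minimal sketch of that alternative, with placeholder types standing in for the real jsonrpc crate:

```rust
// Sketch only: placeholder types, not the real jsonrpc API.
struct Request(String);
struct Response(String);
struct Client;

impl Client {
    // Stand-in for one HTTP round trip carrying all queued requests.
    fn send_batch(&self, reqs: Vec<Request>) -> Vec<Response> {
        reqs.into_iter().map(|Request(m)| Response(m)).collect()
    }
}

/// A batch that only queues requests and borrows the client at execute
/// time, so it never needs to hold an Arc<Client> itself.
struct Batch {
    requests: Vec<Request>,
}

impl Batch {
    fn queue(&mut self, method: &str) {
        self.requests.push(Request(method.to_owned()));
    }

    fn execute(self, client: &Client) -> Vec<Response> {
        client.send_batch(self.requests)
    }
}
```

The trade-off is that every execute call site needs a client in scope, but the batch itself stays a plain value with no shared ownership.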

@ariard left a comment

I had a skim over it. I was thinking of maybe having a higher-order function with a match pattern for each RPC call: if batched, return enum::Batch(BatchContent); if single, return enum::SingleCall(Type), and for the SingleCall variant, call send directly before returning from the higher function. So the higher function would have a second enum as its return type, roughly as sketched below.
Ok, I haven't thought about it as much as you have, but the duplicated API seems hard to maintain, and IMO you're not going to be able to use a hygienic macro there.
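
Roughly this shape (all names hypothetical, just to show the idea):

```rust
// Hypothetical sketch of the higher-order dispatch idea: one function on
// the RPC call path, whose return type records whether the call went out
// immediately or was queued into a batch.
enum CallOutcome<T> {
    /// Sent right away; the result is already here.
    Single(T),
    /// Appended to a pending batch; the result arrives when the batch is sent.
    Batched,
}

fn dispatch<T>(
    batch: Option<&mut Vec<String>>,
    method: &str,
    send_now: impl FnOnce(&str) -> T,
) -> CallOutcome<T> {
    match batch {
        // Batching mode: just record the request for a later batched send.
        Some(pending) => {
            pending.push(method.to_owned());
            CallOutcome::Batched
        }
        // Single mode: send directly before returning from the higher function.
        None => CallOutcome::Single(send_now(method)),
    }
}
```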

@stevenroose (Collaborator, Author)

I don't really understand what you mean there.

There is one way I can see to keep a single API, and it might be similar to what you're saying:
Every call returns an intermediate object that you can either call .get() on to get the result, or .batch(&batch) on to add it to a batch. For the normal use case that would require a .get() on every call, though, and it would also break the current API.
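
A minimal sketch of what that could look like (all types here are stand-ins, not the crate's current API):

```rust
// Sketch only: every RPC method returns a pending call, and the caller
// decides whether to resolve it now or queue it into a batch.
struct Client;
struct Batch {
    queued: Vec<String>,
}

impl Client {
    fn send_single(&self, method: &str) -> String {
        format!("result of {}", method) // placeholder transport
    }
}

struct PendingCall {
    method: String,
}

impl PendingCall {
    /// Resolve immediately: one round trip, result now.
    fn get(self, client: &Client) -> String {
        client.send_single(&self.method)
    }

    /// Defer: add this call to a batch that is sent as one request later.
    fn batch(self, batch: &mut Batch) {
        batch.queued.push(self.method);
    }
}
```

The downside mentioned above is visible here: even plain usage now needs a trailing .get(&client) on every call.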

Personally I don't think the double API is a very big deal. We just need to remind ourselves that whenever a change is made to one side, we need the same change on the other side.

@ariard commented Mar 29, 2019

Yes, returning an intermediate object is roughly what I meant!

(Sorry, I didn't get the notification mail for your answer.)

@stevenroose (Collaborator, Author)

I might give the object approach without the double API a shot. It will break the existing API, though.

@dpc (Collaborator) commented Apr 11, 2019

The batching API would be much easier to express in terms of async/await. Both the non-batching and batching versions would just return futures, which would either resolve immediately or when execute_batch was called. This way we wouldn't have to have a double API.
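
A rough sketch of that idea, assuming tokio's oneshot channels (all names hypothetical):

```rust
// Each queued call hands the caller a future (the oneshot receiver);
// execute_batch() does one round trip and fulfills all of them at once.
use tokio::sync::oneshot;

struct Batch {
    pending: Vec<(String, oneshot::Sender<String>)>,
}

impl Batch {
    fn call(&mut self, method: &str) -> oneshot::Receiver<String> {
        let (tx, rx) = oneshot::channel();
        self.pending.push((method.to_owned(), tx));
        rx // .await on this resolves only once execute_batch has run
    }

    fn execute_batch(self) {
        // Stand-in for a single HTTP request carrying the whole batch.
        for (method, tx) in self.pending {
            let _ = tx.send(format!("result of {}", method));
        }
    }
}
```

A non-batching call would return the same kind of future, just one that resolves immediately.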

I also don't mind the duplication of the API. And the unification with get() on normal calls would hurt usability for normal usage, while I personally don't find batching that useful a feature at all...

Do we really need a batching API anyway? :D Is anyone using it who can explain when it is that much of an improvement over just multi-threading?

With async/await and the underlying HTTP layer doing things like pipelining (hmmm... does Bitcoin Core support it?), the latency would be taken care of even on a single connection.

@dpc (Collaborator) left a comment

I'm not sure if I should do something here. Please re-request review if so.

@stevenroose (Collaborator, Author)

No, I paused this for a bit since I was changing the regular interface a lot and I was unsure about how to do this. I'm still thinking of unifying the interfaces somehow.

@justinmoon (Contributor) commented Aug 1, 2020

What's the status here?

@stevenroose (Collaborator, Author)

It seems that this crate is going to need a bit of an overhaul when we want to support async requests. So I'm thinking that we should think about a design in which each "call" can be done either sync, batched, or async. That could work with a request method that returns some object you can then call .sync(), .batch(my_batch), or .async() on (roughly as sketched below), or with some other way to have three interfaces with the same methods but different return values, without too much code duplication.
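
Purely as a sketch of the shape (placeholder types; `async` is a keyword in Rust, so the method needs a raw identifier or another name):

```rust
// Hypothetical unified interface: every RPC method builds a call object,
// and the caller picks one of three execution modes.
struct Client;
struct Batch {
    queued: Vec<String>,
}

struct Call {
    method: String,
}

impl Call {
    /// Blocking: send now and return the result.
    fn sync(self, _client: &Client) -> String {
        format!("result of {}", self.method) // placeholder transport
    }

    /// Deferred: queue into a batch that is sent as one request later.
    fn batch(self, batch: &mut Batch) {
        batch.queued.push(self.method);
    }

    /// Non-blocking: return a future with the same result type.
    async fn r#async(self, _client: &Client) -> String {
        format!("result of {}", self.method)
    }
}
```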

This PR basically takes the code-duplication approach, but doing that a third time would be quite tedious, even though it'd be possible if it gives us the most ergonomic API.

@dpc (Collaborator) commented Sep 10, 2020

I'm still curious to hear from anyone using batching, and what benefits (of what magnitude) does it yield.

@craigraw commented Feb 10, 2021

> I'm still curious to hear from anyone using batching, and what benefits (of what magnitude) does it yield.

Speaking as the developer of Sparrow Wallet (which now uses this project via bwt), I have some experience with JSON-RPC batching over the Electrum server protocol. ElectrumX implements JSON-RPC batching, but can run without it, so I did some tests on a small wallet this morning:

With batching enabled, I get an average of 400ms (+/- 10%) over a number of full wallet refreshes.
With batching disabled, I get much more variable results:

Took: 1022ms
Took: 990ms
Took: 3309ms
Took: 1269ms
Took: 1002ms
Took: 1006ms
Took: 2157ms
Took: 980ms
Took: 946ms

So I think the advantage is not only much better performance, but also much more consistent performance.

@dpc (Collaborator) commented Feb 12, 2021

@craigraw this is a comparison of batching vs no batching, not batching vs a thread pool, right? Do you have any links to the code, so we can see exactly what was being exercised?

@craigraw

Correct: batching vs no batching.

I'm not sure if it's helpful, but here are the two implementations of the Electrum RPC client code for Sparrow:
Non-batching: SimpleElectrumServerRpc.java
Batching: BatchedElectrumServerRpc.java

However, it's easiest to see it in action with debug logging, which shows the flurry of very similar requests without batching versus the single request with batching (illustrated below).
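
For anyone curious what that looks like on the wire, a rough illustration built with serde_json (the method name is from the Electrum protocol; the params are placeholders):

```rust
// Without batching, each of these objects is its own HTTP round trip.
// With batching, the whole array goes out as a single request body and
// the server returns an array of responses matched back by "id".
use serde_json::json;

fn main() {
    let batch = json!([
        { "jsonrpc": "2.0", "id": 1, "method": "blockchain.scripthash.get_history", "params": ["<scripthash 1>"] },
        { "jsonrpc": "2.0", "id": 2, "method": "blockchain.scripthash.get_history", "params": ["<scripthash 2>"] },
        { "jsonrpc": "2.0", "id": 3, "method": "blockchain.scripthash.get_history", "params": ["<scripthash 3>"] },
    ]);
    println!("{}", batch);
}
```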
