
Batch RPC calls to bitcoind #5

Open
shesek opened this issue May 18, 2020 · 3 comments
Labels
enhancement New feature or request

Comments

@shesek
Collaborator

shesek commented May 18, 2020

Not yet implemented in rust-bitcoincore-rpc: rust-bitcoin/rust-bitcoincore-rpc#27
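For reference, bitcoind's JSON-RPC server accepts batch requests as a plain JSON array of request objects, answered with an array of responses. A minimal std-only Rust sketch of what such a payload could look like (the `build_batch` helper and the choice of `getblockhash` are just illustrative, not part of rust-bitcoincore-rpc):

```rust
/// Build a JSON-RPC batch payload: one `getblockhash` request per height.
/// bitcoind accepts a JSON array of request objects and replies with an
/// array of responses matched by `id`. (Illustrative sketch, no serde.)
fn build_batch(heights: &[u64]) -> String {
    let reqs: Vec<String> = heights
        .iter()
        .enumerate()
        .map(|(id, h)| {
            format!(
                r#"{{"jsonrpc":"1.0","id":{},"method":"getblockhash","params":[{}]}}"#,
                id, h
            )
        })
        .collect();
    format!("[{}]", reqs.join(","))
}

fn main() {
    // The whole array is sent in a single HTTP POST to bitcoind's RPC port.
    let body = build_batch(&[100, 101, 102]);
    println!("{}", body);
}
```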

@shesek shesek added the enhancement New feature or request label May 18, 2020
@shesek shesek added this to To Do in Bitcoin Wallet Tracker Jun 21, 2020
@shesek shesek moved this from To Do to Next up in Bitcoin Wallet Tracker Sep 26, 2020
@dpc

dpc commented Feb 10, 2021

In my experience, batching Bitcoin requests at the JSON-RPC layer is totally not worth it. Just use a thread pool.

@shesek
Collaborator Author

shesek commented Feb 10, 2021

Thanks for the tip!

Did you try with high-latency connections, say over Tor?

@dpc

dpc commented Feb 10, 2021

Did you try with high-latency connections, say over Tor?

No. The use cases I've worked with were all reasonably low latencies.

Just running this in my head, I don't see how batching would be better there either.

The optimization problem here is basically saturating IO throughput, both on the network and on the disk of the full node (whichever becomes the bottleneck first). A naive implementation that blocks on each RPC call under-utilizes resources, wasting a whole round-trip on every call.

Batching gets rid of most round-trips, but still pays the round-trip cost — just once per N calls instead of on every call. A thread pool can keep both the link and the full node's IO saturated 100% of the time. The amount of work (like serialization & deserialization) is roughly the same; with a thread pool there's just more HTTP & JSON-RPC envelope parsing, but that's minuscule.

On a high-latency link like Tor, one would need more threads in the thread pool to saturate the IO... assuming high throughput. But since both the throughput and latency are (I assume) going to be low, in practice a thread pool of 2 is going to saturate the connection.
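To put rough numbers on the reasoning above (illustrative values, not measurements): with N independent calls over a link with round-trip time `rtt`, sequential calls cost about `N·rtt`, one batch costs about one `rtt`, and a pool of T threads costs about `ceil(N/T)·rtt`. A back-of-envelope sketch:

```rust
/// Back-of-envelope wall-clock estimates (ms) for N independent RPC calls
/// over a link with a given round-trip time. Ignores server processing
/// time and bandwidth limits; purely illustrative.
fn sequential_ms(n: u64, rtt_ms: u64) -> u64 {
    n * rtt_ms // one full round-trip per call
}
fn batched_ms(_n: u64, rtt_ms: u64) -> u64 {
    rtt_ms // one round-trip for the whole batch
}
fn pooled_ms(n: u64, threads: u64, rtt_ms: u64) -> u64 {
    ((n + threads - 1) / threads) * rtt_ms // ceil(n / threads) round-trips
}

fn main() {
    // e.g. 100 calls over a Tor-like 500 ms round-trip:
    println!("sequential: {} ms", sequential_ms(100, 500));
    println!("batched:    {} ms", batched_ms(100, 500));
    println!("pool of 8:  {} ms", pooled_ms(100, 8, 500));
}
```

In this toy model a big enough pool approaches the batch figure, which is the crux of the argument: on a higher-latency link you compensate with more threads rather than with batching.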
