
[RPC] Just an attempt to save some bandwidth #9893

Closed
wants to merge 2 commits

Conversation

CryptoManiac

Simple gzip compression can be very useful when you're dealing with a lot of getblocktemplate or getrawmempool requests. It decreases GBT latency and allows you to save a lot of bandwidth.

Just as an example, I have a few instances of bitcoind being used as template sources for eloipool. Each getblocktemplate response is ~2MB; gzipped, it shrinks down to ~800KB, which is a fairly reasonable size. :)
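The size reduction is easy to reproduce: a getblocktemplate response is mostly hex-encoded transaction data, which compresses very well. A minimal sketch, using a synthetic stand-in payload (the field values here are made up for illustration, not a real template):

```python
import gzip
import json

# Hypothetical stand-in for a getblocktemplate response: the real payload
# is a multi-megabyte JSON object full of hex-encoded transactions.
# Hex text is highly redundant, which is where the large gzip savings come from.
template = {
    "version": 536870912,
    "previousblockhash": "00" * 32,
    "transactions": [
        {"data": "deadbeefcafebabe" * 64, "txid": "%064x" % i}
        for i in range(2000)
    ],
}

raw = json.dumps(template).encode()
compressed = gzip.compress(raw)

print(f"raw: {len(raw)} bytes, gzipped: {len(compressed)} bytes "
      f"(ratio {len(raw) / len(compressed):.1f}x)")
```

Real templates compress somewhat less than this synthetic one (real transaction hex is less repetitive), but a 2-3x reduction as described above is plausible.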

@jameshilliard
Contributor

@CryptoManiac why not just put a reverse proxy like nginx in front of the RPC interface?
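For reference, the reverse-proxy approach might look something like this; the ports and settings below are assumptions for illustration (8332 is the default mainnet RPC port, the frontend port is arbitrary), not a vetted production config:

```nginx
# Hypothetical nginx frontend for bitcoind's JSON-RPC interface.
# nginx does the gzip work, so bitcoind itself stays unchanged.
server {
    listen 127.0.0.1:8331;

    location / {
        gzip on;
        gzip_types application/json;  # default gzip_types covers only text/html
        gzip_comp_level 1;            # cheap compression keeps added latency low
        proxy_pass http://127.0.0.1:8332;
    }
}
```

Note that nginx only compresses responses for clients that send `Accept-Encoding: gzip`, so the RPC client (e.g. the pool software) has to advertise support for it.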

@luke-jr
Member

luke-jr commented Mar 1, 2017

You really shouldn't use RPC over a network...

@CryptoManiac
Author

@jameshilliard Well, that may also be a viable solution. Unfortunately, some people are limited to the Windows operating system, and the Windows build of nginx currently uses only select() to handle connections, so it's very slow. :(

@CryptoManiac
Author

CryptoManiac commented Mar 1, 2017

@luke-jr Why not? If you have a walletless daemon listening on a private interface, then I'd say nothing bad can happen.

However, I understand the main idea. Closing for now.

@laanwj
Member

laanwj commented Mar 3, 2017

> Just as an example, I have a few instances of bitcoind being used as template sources for eloipool. Each getblocktemplate response is ~2MB; gzipped, it shrinks down to ~800KB, which is a fairly reasonable size. :)

This does increase the latency a bit, though, so on faster networks you wouldn't want to use this.

In any case, I agree that this belongs externally. RPC is first and foremost for local communication; for anything like compression or encryption, put it behind a tunnel or an nginx instance.

@CryptoManiac
Author

CryptoManiac commented Mar 3, 2017

@laanwj It depends on your TCP/IP stack settings. On FreeBSD, for example, it's sometimes recommended to set the net.inet.tcp.recvspace parameter to 8k. With such a small receive buffer you can get a significant improvement even for a local instance of bitcoind; the speedup looks roughly like a linear function of the compression ratio.
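For context, the tuning mentioned above would live in the FreeBSD sysctl configuration; the 8 KiB value is the one quoted in the comment, and it is workload-specific tuning rather than a general recommendation:

```
# /etc/sysctl.conf (FreeBSD) -- hypothetical example.
# Shrinks the default TCP receive buffer to 8 KiB, the value mentioned
# above; with such a small buffer, compressed responses need far fewer
# round trips, which is where the claimed speedup comes from.
net.inet.tcp.recvspace=8192
```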

@jonasschnelli
Contributor

If you want to compress data, why not reverse-proxy over Apache and enable mod_deflate?

@CryptoManiac
Author

CryptoManiac commented Mar 3, 2017

@jonasschnelli Because sometimes it doesn't help. nginx still needs to get the data from the upstream, so it's affected by the same TCP/IP settings. Unfortunately, bitcoin doesn't support RPC over unix domain sockets for now.

@laanwj
Member

laanwj commented Mar 3, 2017

> Unfortunately, bitcoin doesn't support RPC over unix domain sockets for now.

Now that would be a welcome addition. But apparently it needs changes to evhttp. There's a similar ticket open for Transmission, which also uses evhttp.

@laanwj
Member

laanwj commented Mar 3, 2017

I just realized that it can be implemented with the current evhttp just fine: do the listening manually, set the socket to non-blocking, then pass the accept fd to evhttp_accept_socket_with_handle, similarly to what I did here for cloudabi: https://github.com/laanwj/bitcoin/blob/2017_03_cabi_fs/src/httpserver.cpp#L360.
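The "listen manually, then hand over the fd" part can be sketched in a few lines. This is Python for illustration only (the actual bitcoind code would be C++, and the final handoff would be libevent's evhttp_accept_socket_with_handle, as described above); the socket path here is a made-up example:

```python
import os
import socket
import tempfile

# Sketch: create a unix domain socket, do the bind/listen ourselves, and
# mark the fd non-blocking. That fd is what an evhttp-style server would
# then adopt via evhttp_accept_socket_with_handle() instead of opening a
# TCP listening socket itself.
path = os.path.join(tempfile.mkdtemp(), "bitcoind-rpc.sock")  # hypothetical path

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.bind(path)
sock.listen(16)
sock.setblocking(False)  # evhttp expects a non-blocking accept fd

print(f"listening on {path}, fd={sock.fileno()}")
```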

@laanwj
Member

laanwj commented Mar 4, 2017

@CryptoManiac See #9919

@CryptoManiac
Author

@laanwj Nice one, thanks.

@bitcoin bitcoin locked as resolved and limited conversation to collaborators Sep 8, 2021

6 participants