Make better use of multi-core servers #1079
I think a better approach would be to extract the crypto calculations from the database thread out into the network layers, i.e. handle the signature->pubkey conversion in the P2P and API layers. Keep in mind that a witness will apply a given transaction three times when signing a block:
The abovementioned patch doesn't help with steps 1 + 2.
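The split being proposed, with the expensive signature->pubkey recovery done on network-layer worker threads before the transaction ever reaches the single database thread, can be sketched roughly like this. This is a Python stand-in, not the actual bitshares-core code: `Transaction`, `recover_pubkeys` and `apply_to_database` are hypothetical names, and a hash is used as a placeholder for real ECDSA key recovery.

```python
# Sketch: do the expensive signature -> pubkey step on network-layer
# worker threads, so the database thread only consumes cached results.
import hashlib
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field

@dataclass
class Transaction:
    payload: bytes
    signatures: list
    recovered_keys: list = field(default_factory=list)  # filled in by workers

def recover_pubkeys(tx: Transaction) -> Transaction:
    # Placeholder for ECDSA signature -> pubkey recovery; in the real
    # node this would run on a P2P/API thread, not the database thread.
    tx.recovered_keys = [hashlib.sha256(tx.payload + sig).hexdigest()
                         for sig in tx.signatures]
    return tx

def apply_to_database(tx: Transaction) -> bool:
    # The single-threaded database step now only consults the cached
    # keys instead of redoing the recovery itself.
    return len(tx.recovered_keys) == len(tx.signatures)

incoming = [Transaction(b"tx%d" % i, [b"sig"]) for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:   # network-layer workers
    verified = list(pool.map(recover_pubkeys, incoming))
results = [apply_to_database(tx) for tx in verified]
```

The key point is that the recovery result travels with the transaction, so steps that re-apply the same transaction don't have to repeat the crypto.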
CORE TEAM TASK LIST
Someone has invaded my head. I spent a good chunk of last night reading and pondering some of the same ideas. The big question that kept coming up as I thought about implementations was "would this be faster, or slower?"
Web servers have SSL offloading, which can be dedicated hardware that handles the SSL encryption/decryption, so the real server can serve pages. But we need authentication on both sides ("client side authentication" in web-speak).
An intermediate step is giving peers the option to talk over an encrypted, verified channel. Part of the handshake verifies public keys and negotiates stream ciphers. While negotiation is painful up front, subsequent communication is less so.
We could centralize (and perhaps optimize) signature verification by moving the incoming data through the verification step on the way in, and marking the internal representation of that data as "signed and verified by public key X". Perhaps there would be no more signature verification needed.
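The "signed and verified by public key X" marking could be expressed as a wrapper type created only at the verification boundary, so downstream code can rely on the mark instead of re-verifying. A minimal sketch, with hypothetical names and a placeholder check standing in for real ECDSA verification:

```python
# Sketch: mark data as verified exactly once, at the network boundary.
from dataclasses import dataclass

@dataclass(frozen=True)
class Verified:
    payload: bytes
    pubkey: str          # "signed and verified by public key X"

def verify_at_boundary(payload: bytes, signature: bytes, pubkey: str) -> Verified:
    # Placeholder check; a real implementation would do ECDSA
    # verification here before handing out a Verified instance.
    if not signature:
        raise ValueError("bad signature")
    return Verified(payload, pubkey)

def apply_block(msg: Verified) -> str:
    # Anything holding a Verified knows the signature check already
    # happened, so no further verification is needed here.
    return f"applied block from {msg.pubkey}"
```

If `Verified` objects are only ever constructed by the boundary function, the rest of the code base gets the "no more signature verification needed" property by construction.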
Warning... Going off topic here...
What if we want to connect only to a whitelist of servers? We can do that if we know (with a good amount of confidence) who is on the other side.
Another optimization is specialized protocols. Incoming blocks can quickly be routed somewhere different than incoming heartbeats, or connection requests, or ....
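The message-routing idea amounts to a dispatch table keyed on message type, so blocks, heartbeats, and connection requests can take different (possibly differently-threaded) paths. A toy sketch with made-up handler names:

```python
# Sketch: route incoming messages by type so each kind can take its
# own fast path instead of funneling through one handler.
handlers = {}

def route(msg_type):
    # Decorator that registers a handler for one message type.
    def register(fn):
        handlers[msg_type] = fn
        return fn
    return register

@route("block")
def handle_block(data):
    return f"block -> verification queue: {data}"

@route("heartbeat")
def handle_heartbeat(data):
    return "heartbeat -> liveness tracker"

def dispatch(msg_type, data):
    return handlers[msg_type](data)
```

In the real node each handler could hand off to a different thread or queue, which is where the multi-core benefit would come from.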
Back on topic (somewhat)...
There are a lot of possible optimizations here. But I keep bubbling back up to my original question: "Would this be faster or slower?" The current p2p code has some metrics. I think we're going to have to give those a hard look, gather stats (especially on what kind of processing is currently slowing our throughput), and spend some time in the laboratory.
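Gathering per-stage stats of the kind described could start with something as small as a timing decorator that accumulates wall-clock time per pipeline stage. A sketch only; the stage names and bodies are made up:

```python
# Sketch: accumulate wall-clock time per pipeline stage to see where
# throughput is actually going.
import time
from collections import defaultdict

timings = defaultdict(float)   # stage name -> total seconds

def timed(stage):
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                timings[stage] += time.perf_counter() - start
        return inner
    return wrap

@timed("verify")
def verify(tx):
    return True    # stand-in for signature verification

@timed("apply")
def apply_tx(tx):
    return True    # stand-in for the database apply step

for tx in range(100):
    verify(tx) and apply_tx(tx)
```

Comparing the accumulated totals across stages (and across machines) is the kind of data that would answer the "faster or slower" question before committing to a redesign.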
@jmjatlanta there's a difference between client and/or P2P connections and transaction signature verification. Similar to sending PGP-encrypted email through a TLS-protected SMTP connection (that's routed through a VPN tunnel if you want). :-) They are different communication layers where encryption / authentication plays different roles.
P2P connections are already encrypted, but there is no authentication happening yet. Interesting topic, but out of scope here.
This! ("Centralize" in the sense of software code; it could be done "decentralized" in the sense of multi-core, and perhaps, as a future step, in the sense of multi-server.)
@bangzi1001 It would certainly help.
I would like to set up some kind of lab. I've got a fairly slow machine in NY, and a few around here. I'm hoping to develop some kind of testing framework that closely mimics the block producing process, and use it and a variety of machines to gather some metrics. We may already have such a framework, or at least the beginnings of one.