Block size can eclipse p2p max message size. #1655
The max block size is set to 375 MiB.
The max message size is 2 MiB.
It is possible for a block that is accepted via consensus to be rejected via the p2p code.
The max block size is never actually checked against an upper bound when updating witness properties. Furthermore, the block size is only checked against the p2p max message size in the p2p layer, not in consensus.
Is 375 MiB a reasonable max block size?
What needs to change ASAP.
In a hard fork we need to enforce the max block size check in consensus.
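A minimal sketch of what such a consensus-side check could look like. The constant names and the `P2P_ENVELOPE_OVERHEAD` allowance are illustrative assumptions, not Steem's actual identifiers:

```python
# Hypothetical consensus-side guard: reject witness property updates that
# would allow blocks the p2p layer cannot transmit.
MAX_P2P_MESSAGE_SIZE = 2 * 1024 * 1024   # 2 MiB, the current p2p limit
P2P_ENVELOPE_OVERHEAD = 1024             # assumed headroom for message framing


def validate_witness_block_size(proposed_max_block_size: int) -> None:
    """Raise if the proposed max block size cannot fit in a p2p message."""
    upper_bound = MAX_P2P_MESSAGE_SIZE - P2P_ENVELOPE_OVERHEAD
    if proposed_max_block_size > upper_bound:
        raise ValueError(
            f"max block size {proposed_max_block_size} exceeds "
            f"p2p-transmittable bound {upper_bound}"
        )
```

With this guard in place, a witness voting for a 375 MiB max block size would be rejected at the property-update stage rather than producing blocks that cannot propagate.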
This is a problem and needs to be fixed.
I don't know that we have to rush an emergency hardfork out for this, see my reasoning below.
Given that the current p2p protocol transmits an entire block in a single message, the p2p message size should be greater than or equal to the max block size, plus whatever wrapping goes around the block to form the p2p message.
Instead of increasing the p2p max message size to the max block size, let's decrease the max block size to the max p2p message size, because the p2p message size is a reasonably sized value, and the max block size is unreasonably large.
375 MB is a totally unreasonable size for a block due to latency concerns. The time to transmit a block should be much smaller than the blockchain's minimum responsiveness. Given some rough estimates -- the blockchain is expected to be responsive on a timescale of 3 seconds, 100 Mbps is available to a single p2p connection (a very generous bandwidth estimate), and "much smaller" means 10% -- we see that the p2p max message size should be on the order of 0.3 x 100 / 8 = 3.75 MB. A setting of 2 MB is in the right ballpark.
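The estimate above can be reproduced directly; the inputs (3 s responsiveness, 100 Mbps, 10% budget) are the same rough assumptions stated in the text:

```python
# Back-of-the-envelope: how big can a p2p message be if transmitting it
# must take much less time than the blockchain's responsiveness window?
block_interval_s = 3.0    # responsiveness timescale
bandwidth_mbps = 100.0    # generous per-connection bandwidth estimate
budget_fraction = 0.10    # "much smaller" interpreted as 10%

# megabits available in the time budget, divided by 8 to get megabytes
max_message_mb = budget_fraction * block_interval_s * bandwidth_mbps / 8
print(max_message_mb)  # 3.75
```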
From the standpoint of rate of blockchain growth, a block size of 2 MB with a 3 second block time corresponds to a blockchain growth rate of approximately 21 TB / year. I think this is a reasonable upper bound on the blockchain's growth rate given the current state of computer storage and processing speed. If anything, this upper bound should be lowered. Under no circumstances should it be raised by a factor of 200.
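Checking the growth-rate figure, using decimal units (1 TB = 10^6 MB):

```python
# Worst-case chain growth if every block is full.
block_size_mb = 2
block_interval_s = 3
seconds_per_year = 365 * 24 * 3600            # 31,536,000
blocks_per_year = seconds_per_year / block_interval_s  # 10,512,000

growth_tb_per_year = blocks_per_year * block_size_mb / 1_000_000
print(growth_tb_per_year)  # ~21 TB/year
```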
If you are running a blockchain with wildly different operating conditions from today's Steem network -- for example, with 200x the network bandwidth / processing speed / storage capacity, or with a block interval that is 200 times longer -- then a 375 MB block size might be reasonable.
This issue does not justify an emergency hardfork. The size mismatch is only an issue if witnesses are completely irresponsible and a majority of them choose to vote for an unreasonably huge block size that will break the blockchain.
A rogue witness could produce a block that is accepted by consensus but too large to propagate over the p2p network. But this is a self-limiting problem: since the block cannot be transmitted over the p2p network, and there is no mechanism to insert blocks directly while bypassing the size cap enforced by p2p message limits, the block will be stuck on the rogue witness's machine because it literally has no way to travel to other nodes.
I am good with reducing the accepted max block size in a soft fork and enforcing checks properly.
At a later time I want to bump up the p2p max message size with a decrease of the max block size allowed via consensus.
Consider that the max block size is the maximum allowed burst block size. If the blockchain regularly hit the max block size, it would devolve into a full reserve system, and the effective block size would certainly decrease. We currently target 25% of the max block size as the desired growth rate. Perhaps the max block size needs to increase significantly in the future while the target decreases? But that is a different issue entirely.
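The 25% utilization target can be combined with the earlier growth-rate arithmetic to get a steady-state figure; the inputs here are the 2 MB max block size and 3 s block interval already discussed:

```python
# Steady-state chain growth at the 25% utilization target,
# as opposed to the worst case where every block is full.
max_block_size_mb = 2
target_fraction = 0.25
block_interval_s = 3

avg_block_mb = max_block_size_mb * target_fraction     # 0.5 MB per block
blocks_per_year = 365 * 24 * 3600 / block_interval_s   # 10,512,000
steady_growth_tb = avg_block_mb * blocks_per_year / 1_000_000
print(steady_growth_tb)  # ~5.3 TB/year
```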