
Block size can eclipse p2p max message size. #1655

Closed
mvandeberg opened this Issue Oct 13, 2017 · 6 comments


mvandeberg commented Oct 13, 2017

In the protocol code https://github.com/steemit/steem/blob/master/libraries/protocol/include/steemit/protocol/config.hpp#L228

The max block size is set to 375 MiB.

In the p2p code https://github.com/steemit/steem/blob/master/libraries/net/include/graphene/net/config.hpp#L39

The max message size is 2 MiB.

It is possible for a block that is accepted via consensus to be rejected via the p2p code.

The max block size is never actually checked against an upper bound when updating witness properties:

https://github.com/steemit/steem/blob/master/libraries/protocol/include/steemit/protocol/steem_operations.hpp#L394-L400
https://github.com/steemit/steem/blob/master/libraries/chain/steem_evaluator.cpp#L64-L108

Furthermore, the block size is only checked against STEEM_MAX_BLOCK_SIZE if you are producing.
https://github.com/steemit/steem/blob/master/libraries/chain/database.cpp#L828

Is 375 MiB a reasonable max block size?
Should we increase the p2p max message size to the max block size?

What needs to change ASAP:

  • Increase the p2p max message size to 375 MiB.
  • Soft fork to ensure the witness-voted max block size is less than 32 MiB.
  • When generating a block, check the block size against the witness-voted max block size.

In a hard fork we need to enforce the max block size check in consensus.


theoreticalbts commented Oct 13, 2017

the block size is only checked against STEEM_MAX_BLOCK_SIZE if you are producing.

This is a problem and needs to be fixed.

needs to change ASAP

I don't know that we have to rush an emergency hardfork out for this, see my reasoning below.

Should we increase the p2p max message size to the max block size?

Given that the current p2p protocol transmits an entire block in a single message, the p2p message size should be greater than or equal to the max block size, plus whatever wrapping goes around the block to form the p2p message.

Instead of increasing the p2p max message size to the max block size, let's decrease the max block size to the max p2p message size, because the p2p message size is a reasonably sized value, and the max block size is unreasonably large.

Is 375 MiB a reasonable max block size?

Certainly not.

375 MB is a totally unreasonable size for a block due to latency concerns. The time to transmit a block should be much smaller than the blockchain's minimum responsiveness. Given some rough estimates -- the blockchain is expected to be responsive on a timescale of 3 seconds, we assume 100 Mbps is available to a single p2p connection (a very generous bandwidth estimate), and "much smaller" means 10% -- we see that the p2p max message size should be on the order of 0.3 x 100 / 8 = 3.75 MB. A setting of 2 MB is in the right ballpark.

From the standpoint of rate of blockchain growth, a block size of 2 MB with a 3 second block time corresponds to a blockchain growth rate of approximately 21 TB / year. I think this is a reasonable upper bound on the blockchain's growth rate given the current state of computer storage and processing speed. If anything, this upper bound should be lowered. Under no circumstances should it be raised by a factor of 200.

If you are running a blockchain with wildly different operating conditions from today's Steem network -- for example, with 200x the network bandwidth / processing speed / storage capacity, or with a block interval that is 200 times longer -- then a 375 MB block size might be reasonable.

needs to change ASAP.

This issue does not justify an emergency hardfork. The size mismatch is only an issue if witnesses are completely irresponsible and a majority of them choose to vote for an unreasonably huge block size that will break the blockchain.

A rogue witness could produce a block that is accepted by consensus but too large to propagate over the p2p network. But this is a self-limiting problem -- since the block cannot be transmitted over the p2p network, and there is no mechanism for inserting blocks directly while bypassing the size cap enforced by the p2p message limits, the block will be stuck on the rogue witness's machine; it literally has no way to travel to other nodes.


theoreticalbts commented Oct 13, 2017

increase the p2p max message size

I would be very worried about a p2p network split if we did this without very careful planning of the rollout.


mvandeberg commented Oct 13, 2017

I intended to get an emergency release out with a soft fork and do the hardfork changes in HF 20. I apologize if that was not clear.


mvandeberg commented Oct 13, 2017

I am good with reducing the accepted max block size in a soft fork and enforcing checks properly.

At a later time I want to increase the p2p max message size alongside a decrease of the max block size allowed via consensus.

Consider that the max block size is the max allowed burst block size. If the blockchain regularly hit the max block size, it would devolve into a full-reserve system, and the voted block size would certainly decrease. We currently target 25% of the max block size as the desired growth rate. Perhaps the max block size needs to increase significantly in the future while the target decreases? But that is a different issue entirely.

mvandeberg added a commit that referenced this issue Oct 17, 2017

mvandeberg added a commit that referenced this issue Oct 17, 2017


mvandeberg commented Oct 17, 2017

Part of this issue is handled via soft fork. The rest will be handled in HF 20.

@mvandeberg mvandeberg added the HARDFORK label Oct 17, 2017

@mvandeberg mvandeberg added this to To Do in Steem 0.20.0 Oct 17, 2017

mvandeberg added a commit that referenced this issue Oct 17, 2017

mvandeberg added a commit that referenced this issue Oct 23, 2017


mvandeberg commented Oct 23, 2017

On master and develop

@mvandeberg mvandeberg closed this Oct 23, 2017

@youkaicountry youkaicountry moved this from To Do to Done in Steem 0.20.0 May 10, 2018

@mac128k mac128k added this to Backlog (not prioritized) in Hardfork 20 (HF20/Velocity) May 11, 2018

@mac128k mac128k moved this from Backlog (not prioritized) to Done in Hardfork 20 (HF20/Velocity) May 11, 2018
