This repository has been archived by the owner on Nov 2, 2018. It is now read-only.

Size Requirements and Rejecting Blocks #54

Closed
DavidVorick opened this issue Nov 15, 2014 · 4 comments
@DavidVorick
Member

We should set the size limit at 1 MB per block and 16 KB of arbitrary data per block.

Functions need to be written that can gauge each of these sizes and reject any blocks that are in violation.

@DavidVorick DavidVorick added this to the Open Beta milestone Dec 12, 2014
@DavidVorick
Member Author

Any chance of this being done by Friday?

@lukechampine
Member

This would be fairly trivial to implement. The current maxMsgLen is 16 MB, but could be lowered to 1 MB. I don't think there are any other messages that would exceed that size. Alternatively, we could modify AcceptBlock to reject large blocks -- but you'd still need to download the entire block (up to 16 MB currently) before rejecting it.

I would suggest lowering maxMsgLen and adding the arbitrary data check to AcceptBlock. The one gap I see here is that it doesn't prevent you from accepting blocks larger than 1 MB.

@DavidVorick
Member Author

Yeah, I should add something to the consensus package that's aware of maximum sizes, especially because eventually we'll probably be doing some form of compression on our messages. Even a 10% size reduction is worthwhile. But the consensus code needs to be in perfect agreement about what's acceptable in a block, which means no compression (or at least very strict compression, which is annoying).

@lukechampine
Member

yeah, that definitely adds another wrinkle. It'd be easy to make a "1MB" block message that's actually a massive zip bomb.

However, my anti-compression stance has softened a bit. It would actually be really easy to do in Go, thanks to the io.Reader and io.Writer interfaces:

_, err := encoding.WriteObject(conn, arg)
err = encoding.ReadObject(conn, resp, maxMsgLen)
// becomes:
zw := gzip.NewWriter(conn)
_, err := encoding.WriteObject(zw, arg)
err = zw.Close() // flush the compressed stream
zr, err := gzip.NewReader(conn) // NewReader returns (io.Reader, error)
err = encoding.ReadObject(zr, resp, maxMsgLen)

NewWriter wraps conn, such that calling Write will write compressed data to conn. Likewise, calling Read on NewReader(conn) will read decompressed data.

This probably isn't as easy in other languages though, especially C.
