feat(consensus): additional sanity checks for the size of proposed blocks #1408
Conversation
```go
	return ErrPartTooBig
}
// All parts except the last one should have the same constant size.
if int64(part.Index) < part.Proof.Total-1 && len(part.Bytes) != int(BlockPartSizeBytes) {
```
We're still using `part.Proof.Total` here; are we 100% sure this total will always be in sync with the total number of parts?
Yes, this is a good point. But we don't have the `Total` field in each `Part`; we only have it on the `PartSet`.
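To illustrate the shapes under discussion, here is a deliberately simplified sketch (assumed for illustration only; these are not the actual CometBFT type definitions): a node receiving a single `Part` sees only the total claimed by the sender via the attached proof, while the authoritative total lives on the `PartSet` held by whoever assembles the full block.

```go
package main

import "fmt"

// Simplified, assumed shapes for illustration only (not the real
// CometBFT types): a Part carries no Total field of its own.
type Proof struct {
	Total int64 // claimed by the sender; untrusted until verified
	Index int64 // leaf index of this part in the Merkle tree
}

type Part struct {
	Index uint32
	Bytes []byte
	Proof Proof // the only per-part notion of "total" lives here
}

type PartSet struct {
	total uint32 // authoritative count, known to the assembling node
	parts []*Part
}

func (ps *PartSet) Total() uint32 { return ps.total }

func main() {
	p := Part{Index: 0, Bytes: []byte("chunk"), Proof: Proof{Total: 4, Index: 0}}
	// A node receiving this single Part sees only the sender-claimed total.
	fmt.Println(p.Proof.Total) // prints 4
}
```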
The point here is that wrong values for `Proof.Total` and `Proof.Index` should cause the Merkle-tree verification to fail. But I am not sure whether we test this at this point. Notice that the `Part` tests have nothing on the arrays used to fill the `Part` and the `Part.Proof` bytes.
Is there a possibility that `Part` is used without populating `part.Proof`? (I guess no, but double-checking.)
It should not be possible, as the next check (line 40) would fail. This check, however, is also valuable on its own: it ensures integer fields are >= 0 and hashes have the right length.
So, status regarding this:
- we can trust `Proof.Index`, as changing it causes the proof verification to fail;
- we can't trust `Proof.Total`, as it is irrelevant for the proof verification, provided that `Proof.Index < Proof.Total`.
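To make the point above concrete, here is a deliberately simplified Merkle proof verifier over a perfect binary tree (this is not CometBFT's actual implementation, whose tree shape and leaf hashing differ): the root recomputation is driven entirely by the leaf index and the sibling ("aunt") hashes, so a bogus claimed total rides along completely unchecked.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

func leafHash(data []byte) []byte {
	h := sha256.Sum256(data)
	return h[:]
}

func hashPair(l, r []byte) []byte {
	h := sha256.Sum256(append(append([]byte{}, l...), r...))
	return h[:]
}

// verify recomputes the root from the leaf, its index, and the aunt
// hashes. Note that claimedTotal is accepted but never consulted:
// only the index parity drives the recomputation.
func verify(root, leaf []byte, index, claimedTotal int64, aunts [][]byte) bool {
	h := leafHash(leaf)
	for _, aunt := range aunts {
		if index%2 == 0 {
			h = hashPair(h, aunt) // we are the left child
		} else {
			h = hashPair(aunt, h) // we are the right child
		}
		index /= 2
	}
	return bytes.Equal(h, root)
}

func main() {
	// Build a 4-leaf tree for parts p0..p3.
	l := make([][]byte, 4)
	for i, d := range [][]byte{[]byte("p0"), []byte("p1"), []byte("p2"), []byte("p3")} {
		l[i] = leafHash(d)
	}
	n01 := hashPair(l[0], l[1])
	n23 := hashPair(l[2], l[3])
	root := hashPair(n01, n23)

	aunts0 := [][]byte{l[1], n23} // proof for leaf 0

	fmt.Println(verify(root, []byte("p0"), 0, 4, aunts0))   // true: correct total
	fmt.Println(verify(root, []byte("p0"), 0, 999, aunts0)) // still true: total is irrelevant here
}
```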
Ok, now `Proof.Total` should be correct, otherwise the `Part` is not added. Let's check if something breaks due to this additional check.
Excellent!
consensus/state_test.go (outdated)
```go
maxBlockParts := maxBytes / int64(types.BlockPartSizeBytes)
if maxBytes > maxBlockParts*int64(types.BlockPartSizeBytes) {
	maxBlockParts += 1
}
numBlockParts := int64(propBlockParts.Total())
```
Suggested change:

```go
maxBlockParts := (maxBytes-1)/int64(types.BlockPartSizeBytes) + 1
numBlockParts := int64(propBlockParts.Total())
```
How about this instead?
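For the record, the two formulas compute the same ceiling division (the one-liner assumes `maxBytes >= 1`). A quick sketch checking them against each other, using 65536 for `types.BlockPartSizeBytes` (its value in CometBFT):

```go
package main

import "fmt"

// Two ways to compute ceil(maxBytes / partSize), as discussed in the
// review; partSize stands in for int64(types.BlockPartSizeBytes).

func ceilDivLoopStyle(maxBytes, partSize int64) int64 {
	n := maxBytes / partSize
	if maxBytes > n*partSize {
		n++ // round up when there is a remainder
	}
	return n
}

func ceilDivOneLiner(maxBytes, partSize int64) int64 {
	return (maxBytes-1)/partSize + 1 // valid for maxBytes >= 1
}

func main() {
	partSize := int64(65536)
	for _, mb := range []int64{1, 65535, 65536, 65537, 1048576} {
		fmt.Println(ceilDivLoopStyle(mb, partSize) == ceilDivOneLiner(mb, partSize))
	}
}
```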
When testing, I prefer using a different method, even if it is less efficient/smart.
Good point
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I was convinced I had already approved this (sorry for delay)
Is this going to be backported? If so, to which version(s)?

I would say all releases. Why?

There are currently no backport labels on the PR. Let's get this over the finish line, please.

Sorry, my bad. @sergio-mena, can you confirm we are backporting this to all releases?

As this comes from our work on a security advisory, I'd say we should backport it to all releases.
Hardens tests regarding the size of proposed blocks, namely:

- The size of each `Part` should be constant (`== types.BlockPartSizeBytes`), except for the last part of a `PartSet` (`<= types.BlockPartSizeBytes`).
- A `Proposal` should not enclose a `PartSet` enabling the building of a `ProposalBlock` with size larger than the configured `ConsensusParams.Block.MaxBytes`. Notice that building a `ProposalBlock` larger than allowed would fail in any case, but the proposed changes also invalidate the associated `Proposal`.

PR checklist
- Changelog entry added in `.changelog` (we use unclog to manage our changelog)
- Updated relevant documentation (`docs/` or `spec/`) and code comments
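The first rule in the summary can be restated as a standalone predicate. The sketch below is a hypothetical restatement for illustration (the function name and signature are assumptions, not CometBFT's actual code; 65536 is `types.BlockPartSizeBytes` in CometBFT): every part except the last must be exactly full-size, and no part may exceed the limit.

```go
package main

import "fmt"

const blockPartSizeBytes = 65536 // value of types.BlockPartSizeBytes in CometBFT

// validatePartSize is a hypothetical restatement of the rule in the PR
// summary. It assumes total >= 1 (a PartSet always has at least one part).
func validatePartSize(index, total uint32, size int) error {
	if size > blockPartSizeBytes {
		return fmt.Errorf("part %d too big: %d > %d", index, size, blockPartSizeBytes)
	}
	// All parts except the last one must have the full constant size.
	if index < total-1 && size != blockPartSizeBytes {
		return fmt.Errorf("non-final part %d has size %d, want %d", index, size, blockPartSizeBytes)
	}
	return nil
}

func main() {
	fmt.Println(validatePartSize(0, 3, blockPartSizeBytes) == nil) // true: full-size middle part
	fmt.Println(validatePartSize(2, 3, 100) == nil)                // true: last part may be smaller
	fmt.Println(validatePartSize(1, 3, 100) == nil)                // false: middle part must be full-size
}
```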