
consensus: Add (*MidState).forEachRevertedElementLeaf #187

Merged
merged 3 commits into master from revert-revision on Aug 7, 2024

Conversation

lukechampine
Member

When iterating over the elements in a MidState, we need to be careful with regard to contract revisions. When applying a block, we want to report the latest revision; but when reverting, we want to report the earliest. Unfortunately, this asymmetry was overlooked, leading to a rather pernicious bug: when a block containing a revision is reverted, the Merkle tree still contains the revision, not the original contract.
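
Roughly, the asymmetry looks like this -- a minimal sketch with illustrative stand-in types and field names, not the actual core implementation:

package consensus

// Sketch only: these stubs stand in for the real types in go.sia.tech/core.
type (
	FileContractID      [32]byte
	fileContractElement struct{ ID FileContractID /* ...contract fields... */ }
	elementLeaf         struct {
		element fileContractElement
		spent   bool
	}
)

func fileContractLeaf(fce *fileContractElement, spent bool) elementLeaf {
	return elementLeaf{element: *fce, spent: spent}
}

// MidState, as sketched here, tracks each contract as it existed before the
// block (fces) alongside the latest revision seen within the block (revs).
type MidState struct {
	fces []fileContractElement
	revs map[FileContractID]*fileContractElement
}

func (ms *MidState) isSpent(id FileContractID) bool { return false } // stub

// When applying, each leaf should reflect the latest revision, since that is
// what belongs in the accumulator after the block.
func (ms *MidState) forEachElementLeaf(fn func(elementLeaf)) {
	for _, fce := range ms.fces {
		if rev, ok := ms.revs[fce.ID]; ok {
			fce = *rev
		}
		fn(fileContractLeaf(&fce, ms.isSpent(fce.ID)))
	}
}

// When reverting, each leaf should reflect the earliest state -- the contract
// as it existed before the block -- so the original leaf is restored and the
// revision does not linger in the tree.
func (ms *MidState) forEachRevertedElementLeaf(fn func(elementLeaf)) {
	for _, fce := range ms.fces {
		fn(fileContractLeaf(&fce, ms.isSpent(fce.ID))) // revs deliberately ignored
	}
}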

It's pernicious because A) reorgs are fairly uncommon; B) the reorg has to contain a revision; and C) redundancy in the DB code means that the invalid hashes may be overwritten by valid ones, masking the problem. That's likely why we only discovered this bug after adding a sanity check that validates every block supplement. Sure, our consensus tests are a bit lacking when it comes to file contracts -- I don't think there are any involving both a revision and a reorg! -- but the nature of this bug makes me wonder if it would have eluded detection anyway.

If you've been running a core node long enough to observe a reorg, there's a decent chance your Merkle tree is (very slightly) wrong. This can probably be fixed by updating and then manually reverting and re-applying a lot of blocks... but resyncing from genesis is the best way to be sure your state is ok. 🙃

@n8maninger
Member

n8maninger commented Aug 7, 2024

Should add a test with create -> apply -> revert -> apply for all the different elements and validate the accumulator state after each
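
A sketch of that shape -- newTestState, elementBlock, and checkAccumulator are hypothetical helpers, and the ApplyBlock/RevertBlock signatures are assumed:

package consensus_test

import (
	"testing"
	"time"

	"go.sia.tech/core/consensus"
)

func TestApplyRevertApply(t *testing.T) {
	s := newTestState(t)        // State derived from a test genesis (hypothetical)
	b, bs := elementBlock(t, s) // block that creates and revises each element type (hypothetical)
	s2, _ := consensus.ApplyBlock(s, b, bs, time.Time{})
	checkAccumulator(t, s2) // verify every tracked element proof against s2.Elements (hypothetical)
	consensus.RevertBlock(s, b, bs)
	checkAccumulator(t, s) // reverting should restore the pre-block accumulator exactly
	s2, _ = consensus.ApplyBlock(s, b, bs, time.Time{})
	checkAccumulator(t, s2) // and re-applying should reproduce the post-block accumulator
}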

@ChrisSchinnerl
Member

I wonder if there is a nice way to validate the accumulator after every update in test builds. That would give us this implicit sanity check both here and as part of the renterd and hostd integration tests.

@lukechampine
Member Author

> Should add a test with create -> apply -> revert -> apply for all the different elements and validate the accumulator state after each

Sounds like @chris124567 will be writing helpers for this shortly.

> I wonder if there is a nice way to validate the accumulator after every update in test builds.

You could do something like this, at the bottom of consensus.ApplyBlock and consensus.RevertBlock:

// here `s` is the post-apply State and `ms` the block's MidState
for _, sce := range ms.sces {
	if !s.Elements.containsLeaf(siacoinLeaf(&sce, ms.isSpent(sce.ID))) {
		panic("consensus: siacoin element not found in accumulator after apply")
	}
}
for _, sfe := range ms.sfes {
	// ... etc ...
}

As you know, I am deathly allergic to build tags, but maybe we could stick a flag in consensus.Network? 🤔
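
Sketching that flag idea -- the field name is hypothetical, and I'm assuming State retains a reference to its Network:

// hypothetical field on consensus.Network; not part of the current API
type Network struct {
	// ...existing fields...

	// DebugCheckAccumulator, if true, re-verifies every element leaf
	// against the accumulator at the end of ApplyBlock and RevertBlock.
	DebugCheckAccumulator bool `json:"-"`
}

// at the bottom of ApplyBlock (and analogously RevertBlock):
if s.Network.DebugCheckAccumulator {
	for _, sce := range ms.sces {
		if !s.Elements.containsLeaf(siacoinLeaf(&sce, ms.isSpent(sce.ID))) {
			panic("consensus: siacoin element not found in accumulator after apply")
		}
	}
	// ...same for sfes, fces, etc...
}

Integration tests in renterd and hostd could then flip that flag on their test networks without any build tags.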

In any case, I'm in favor of merging this fix now, and addressing the rest in a follow-up.

n8maninger merged commit 41ed190 into master on Aug 7, 2024
8 checks passed
n8maninger deleted the revert-revision branch on Aug 7, 2024 at 20:58