
Atomically persist blocks, headers, & tries #1279

Merged
merged 2 commits into from Sep 21, 2018

Conversation

@carver (Contributor) commented Sep 12, 2018

What was wrong?

We can end up with inconsistent state when shutting down Trinity (say, a header persisted but the canonical head not updated, or missing transaction or uncle data, etc.). This is because every database write is persisted immediately and independently.

How was it fixed?

Persist batches of writes atomically.

This started as one massive pull request, parts of which were peeled off into #1295, #1296, and #1304.

Now this PR just does the final step: it uses the new db.atomic_batch() API to make all writes succeed (or fail) atomically in three places:

  • persisting a block
  • persisting a trie data dict
  • persisting a chain of headers
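
The all-or-nothing behavior described above can be sketched with a toy in-memory store (a minimal sketch: `MemoryDB` and the buffering strategy here are illustrative, not Trinity's actual implementation of `atomic_batch()`):

```python
from contextlib import contextmanager

class MemoryDB:
    """Toy key-value store with all-or-nothing batch writes."""
    def __init__(self):
        self.kv = {}

    @contextmanager
    def atomic_batch(self):
        batch = {}
        yield batch            # callers write into the batch, not the live store
        self.kv.update(batch)  # only reached if no exception was raised: commit all

db = MemoryDB()

# A crash mid-persist leaves the store untouched:
try:
    with db.atomic_batch() as batch:
        batch[b'header'] = b'...'
        raise RuntimeError("simulated crash mid-persist")
except RuntimeError:
    pass
assert b'header' not in db.kv  # nothing leaked: the batch was never committed

# A clean exit commits header and canonical head together:
with db.atomic_batch() as batch:
    batch[b'header'] = b'...'
    batch[b'canonical-head'] = b'header-hash'
assert db.kv[b'canonical-head'] == b'header-hash'
```

This is exactly the inconsistency from the problem statement: with immediate writes, a shutdown between persisting the header and updating the canonical head leaves the two out of sync; with a batch, either both land or neither does.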

There are probably other places that would benefit from an atomic batch (for both performance and correctness), but these were the lowest-hanging fruit.

This introduces several shadow methods in the chain and header DBs that accept a BaseDB as an argument, so we can pass in the atomic batch for writing. They were made classmethods so that no one accidentally adds a reference to self.db going forward.
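
The shadow-classmethod pattern might look roughly like this (a sketch under assumptions: `_persist_header`, `header_key`, and the dict-backed batch are hypothetical names for illustration, not the actual py-evm signatures):

```python
from contextlib import contextmanager

class MemoryDB:
    """Minimal stand-in for a BaseDB that supports atomic batches."""
    def __init__(self):
        self.kv = {}

    @contextmanager
    def atomic_batch(self):
        batch = {}
        yield batch
        self.kv.update(batch)

class HeaderDB:
    def __init__(self, db):
        self.db = db  # the live database handle

    def persist_header(self, header_key, header_blob):
        # Public API: wrap the shadow classmethod in an atomic batch.
        with self.db.atomic_batch() as batch:
            self._persist_header(batch, header_key, header_blob)

    @classmethod
    def _persist_header(cls, db, header_key, header_blob):
        # Classmethod: there is no `self`, so this method *cannot*
        # accidentally touch self.db. It writes only to the handle it
        # was given, which here is the atomic batch.
        db[header_key] = header_blob

headerdb = HeaderDB(MemoryDB())
headerdb.persist_header(b'header:1', b'rlp-bytes')
assert headerdb.db.kv[b'header:1'] == b'rlp-bytes'
```

Making the inner writers classmethods turns "don't write to self.db here" from a convention into something the type system enforces: any future edit that reaches for self.db inside a shadow method simply won't compile against a classmethod.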


@carver carver changed the title batch block persist to avoid inconsistent state [WIP] batch block persist to avoid inconsistent state Sep 12, 2018
@carver (Contributor, Author) commented Sep 12, 2018

I decided it's better to just do this the right way and get real LevelDB atomic transaction commits. So this is still a WIP.

@carver carver changed the title [WIP] batch block persist to avoid inconsistent state Atomic leveldb database writes Sep 13, 2018
@carver (Contributor, Author) commented Sep 13, 2018

Wrapping up for the day. The last thing I want to do is detect and blow up if the batch_write() context is entered twice (unless LevelDB supports nesting, in which case I might try to add support to BatchableDB).
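
A guard against re-entering the batch context could be as simple as the following (a sketch, assuming nesting is not supported: `BatchableDB` is named in the comment above, but this body is hypothetical, not the PR's code):

```python
from contextlib import contextmanager

class BatchableDB:
    """Toy store that rejects nested batch_write() contexts."""
    def __init__(self):
        self.kv = {}
        self._batch = None  # non-None while a batch is open

    @contextmanager
    def batch_write(self):
        if self._batch is not None:
            # Nested batches are ambiguous (does the inner exit commit,
            # or does the outer rollback win?), so blow up loudly
            # instead of silently mis-committing.
            raise ValueError("batch_write() context entered twice")
        self._batch = {}
        try:
            yield self._batch
            self.kv.update(self._batch)
        finally:
            self._batch = None

db = BatchableDB()
nested_rejected = False
with db.batch_write() as batch:
    batch[b'k'] = b'v'
    try:
        with db.batch_write():
            pass
    except ValueError:
        nested_rejected = True
assert nested_rejected           # the inner entry was detected and refused
assert db.kv[b'k'] == b'v'       # the outer batch still committed cleanly
```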

@carver (Contributor, Author) commented Sep 13, 2018

All of the prep work and type changing really just supports 13211f0 and 0f42713, which are relatively small but a huge win IMO.

Edit: Argh, just saw the CI failures. Re-marking WIP :(

@carver carver changed the title Atomic leveldb database writes [WIP] Atomic leveldb database writes Sep 13, 2018
@carver carver mentioned this pull request Sep 13, 2018
@carver (Contributor, Author) commented Sep 13, 2018

This is a little big. I'll peel it off into parts, starting with #1295

@carver (Contributor, Author) commented Sep 14, 2018

... and onto #1296

@carver (Contributor, Author) commented Sep 18, 2018

...and #1304, which fixes some conceptual issues that will slightly change the approach in this PR: it will require that the header and chain DBs can write to an explicitly provided database handle, instead of their internal self.db, when desired.

@carver carver changed the title [WIP] Atomic leveldb database writes Atomic leveldb database writes Sep 19, 2018
@carver carver changed the title Atomic leveldb database writes Atomically persist blocks, headers, & tries Sep 19, 2018
@cburgdorf (Contributor) commented

Looks good to me 👍 It's a shame that our benchmark suite still relies entirely on MemoryDB, which pretty much eliminates any chance of seeing the perf impact of this.

@carver carver merged commit 4be8cbb into ethereum:master Sep 21, 2018
@carver carver deleted the batch-block-persist branch September 21, 2018 20:26