NRD rules and "recent" kernel pos index #3302

Merged: 53 commits merged into mimblewimble:master from antiochp:kernel_pos on Jun 10, 2020

Conversation

antiochp (Member) commented Apr 18, 2020

The new NRD kernel variant was introduced in #3303.

This PR adds the kernel_pos index to track "recent" NRD kernel history for efficient validation of the NRD relative lock height rule.


LMDB index impl

  • add linked list implementation (LMDB-based)
  • add kernel_pos index based on linked list implementation (see the sketch after this list)
    • push latest kernel pos to head of list
    • peek latest kernel pos
    • pop latest kernel pos to support rewind
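As a rough illustration of the list operations above, here is a minimal Rust sketch of a per-excess kernel_pos index API. The type and method names (CommitPos, KernelPosIndex, push_pos/peek_pos/pop_pos) are hypothetical stand-ins, not necessarily the PR's actual LMDB-backed types.

// Hypothetical sketch of a per-kernel "linked list" index API over LMDB.

/// Position of a kernel in the kernel MMR plus the height of its block.
#[derive(Copy, Clone, Debug, PartialEq)]
pub struct CommitPos {
    pub pos: u64,
    pub height: u64,
}

/// One list per kernel excess (33-byte commitment). Each list tracks the
/// "recent" positions at which that NRD kernel excess appeared, newest first.
pub trait KernelPosIndex {
    type Error;

    /// Push the latest pos for this excess onto the head of its list.
    fn push_pos(&mut self, excess: &[u8; 33], pos: CommitPos) -> Result<(), Self::Error>;

    /// Peek the most recent pos for this excess (head of the list), if any.
    fn peek_pos(&self, excess: &[u8; 33]) -> Result<Option<CommitPos>, Self::Error>;

    /// Pop the most recent pos, used when rewinding a block.
    fn pop_pos(&mut self, excess: &[u8; 33]) -> Result<Option<CommitPos>, Self::Error>;
}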

NRD rule validation

  • apply_block (during block processing pipeline):
    • push NRD kernels to index
    • validate NRD relative lock height rule (see the sketch after this list)
    • test coverage for NRD rule validation
    • ensure rewind of kernel_pos index when rewinding blocks
  • rewind kernel_pos index when rewinding "single block" during block processing
    • test coverage of rewind scenario
  • validate NRD relative lock heights when processing transactions in stempool/txpool
    • test coverage for NRD rule validation
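A hedged sketch of how the relative lock height check could be applied per NRD kernel during block processing, reusing the hypothetical CommitPos/KernelPosIndex types from the sketch above (this is not the PR's actual pipe code):

// Apply a single NRD kernel at a given block height: check the rule against
// the most recent occurrence of the same excess, then record this occurrence.
fn apply_nrd_kernel<I: KernelPosIndex>(
    index: &mut I,
    excess: &[u8; 33],
    relative_height: u64,
    block_height: u64,
    kernel_mmr_pos: u64,
) -> Result<(), String>
where
    I::Error: std::fmt::Debug,
{
    // Enforce the relative lock height against the most recent occurrence, if any.
    if let Some(prev) = index.peek_pos(excess).map_err(|e| format!("{:?}", e))? {
        if block_height.saturating_sub(prev.height) < relative_height {
            return Err("NRD relative lock height not met".to_string());
        }
    }
    // Record this occurrence so future duplicates are checked against it.
    index
        .push_pos(excess, CommitPos { pos: kernel_mmr_pos, height: block_height })
        .map_err(|e| format!("{:?}", e))?;
    Ok(())
}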

Txpool

  • pool tests currently "mock" out the chain impl which complicates testing of NRD rules (see #3342)
  • pool currently identifies txs based on kernel (and kernel hash) and this does not play nicely with NRD kernel reuse (ok for now but should be revisited)

Rebuild Index

  • rebuild kernel_pos index on fast sync (see the rebuild sketch after this list)
  • rebuild kernel_pos index on node init
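A minimal sketch of what such a rebuild could look like, again in terms of the hypothetical KernelPosIndex/CommitPos types above; the kernel iteration item type is also illustrative:

// Hedged sketch: rebuild the kernel_pos index from scratch by replaying the
// kernel set in MMR order. A real rebuild would only need kernels "recent"
// enough to fall within the NRD relative height window.
fn rebuild_kernel_pos_index<I: KernelPosIndex>(
    index: &mut I,
    // (kernel excess, is this an NRD kernel?, kernel MMR pos, block height)
    kernels: impl Iterator<Item = ([u8; 33], bool, u64, u64)>,
) -> Result<(), I::Error> {
    for (excess, is_nrd, pos, height) in kernels {
        if is_nrd {
            index.push_pos(&excess, CommitPos { pos, height })?;
        }
    }
    Ok(())
}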

Fast Sync

  • validate NRD kernels across full historic kernel set (see the sketch below)
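For the full historic check, a self-contained sketch assuming kernels can be iterated in MMR order together with their block heights (all names illustrative, not grin's actual API):

use std::collections::HashMap;

// Verify the NRD rule across the entire kernel history by tracking the last
// height at which each NRD kernel excess was seen.
fn verify_nrd_history(
    // (kernel excess, NRD relative height if this is an NRD kernel, block height),
    // iterated in kernel MMR (i.e. chain) order
    kernels: impl Iterator<Item = ([u8; 33], Option<u64>, u64)>,
) -> Result<(), String> {
    let mut last_seen: HashMap<[u8; 33], u64> = HashMap::new();
    for (excess, nrd_relative_height, height) in kernels {
        if let Some(relative_height) = nrd_relative_height {
            // NRD rule: any duplicate of this excess must be at least
            // relative_height blocks older than this one.
            if let Some(&prev_height) = last_seen.get(&excess) {
                if height - prev_height < relative_height {
                    return Err(format!("NRD rule violated at height {}", height));
                }
            }
            last_seen.insert(excess, height);
        }
    }
    Ok(())
}
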
antiochp force-pushed the antiochp:kernel_pos branch from f957521 to f41cb4b on Apr 20, 2020
lehnberg mentioned this pull request on Apr 20, 2020 (6 of 13 tasks complete)
antiochp force-pushed the antiochp:kernel_pos branch from f41cb4b to 6e6e498 on May 14, 2020
antiochp added this to the 4.0.0 milestone on May 31, 2020
antiochp marked this pull request as ready for review on May 31, 2020
antiochp requested a review from ignopeverell on May 31, 2020
antiochp requested reviews from quentinlesceller and yeastplume on Jun 9, 2020
// Process the header first.
// If invalid then fail early.
// If valid then continue with block processing with header_head committed to db etc.
self.process_block_header(&b.header, opts)?;


tromp (Contributor) commented Jun 9, 2020

in what cases would we call process_block_single without having validated/processed the header already?
(I would naively expect never)
Does header validation not commit it to db?

antiochp (Author, Member) replied Jun 10, 2020

In most cases we will have processed "header first".
The first broadcast hop from a miner after mining a new block is done as a "full block" and skips the "header first" propagation as we know nobody has seen this block yet.
In this case the receiving node is here in the code, processing the new block without having seen the header yet.

We were validating the header in the later call to pipe::process_block().
We want to validate/process the header here, in its own db transaction along with the write to the header MMR backend files, before we begin processing/validating the full block.

The issue was that if the full block failed validation for any reason then the entire db transaction rolled back, including the header processing, while still leaving the header MMR files updated with the latest header.
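To illustrate the transaction boundary described here, a minimal sketch with stand-in types (not grin's actual Chain/pipe API): the header is validated and committed in its own db transaction before the full-block transaction begins, so a failing block no longer rolls back the header while the header MMR files stay updated.

// All types and functions below are hypothetical stand-ins for illustration.
struct Header;
struct Block { header: Header }
struct Batch; // stands in for one db transaction / write batch

impl Batch {
    fn commit(self) -> Result<(), String> { Ok(()) }
}

fn process_header(_h: &Header, _batch: &mut Batch) -> Result<(), String> { Ok(()) }
fn process_full_block(_b: &Block, _batch: &mut Batch) -> Result<(), String> { Ok(()) }

fn process_block_single(b: &Block) -> Result<(), String> {
    // Process the header first, in its own db transaction, alongside the write
    // to the header MMR backend files. Committed before the block is touched.
    let mut header_batch = Batch;
    process_header(&b.header, &mut header_batch)?;
    header_batch.commit()?;

    // Process the full block in a separate transaction. If it fails, only this
    // transaction rolls back; the header committed above stays consistent with
    // the header MMR files.
    let mut block_batch = Batch;
    process_full_block(b, &mut block_batch)?;
    block_batch.commit()
}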

antiochp (Author, Member) commented Jun 10, 2020

Merging this to master.
We plan to continue testing this on master across mainnet, floonet and usertesting to keep things as simple as possible.

antiochp merged commit 20e5c19 into mimblewimble:master on Jun 10, 2020
10 checks passed (azure-pipelines): mimblewimble.grin Build #20200609.2, linux api/util/store, linux chain/core/keychain, linux pool/p2p/src, linux release, linux servers, macos release, macos test, windows release, windows test — all succeeded.
// Start at 2 here to differentiate from ListWrapperVariant above.
Head = 2,
Tail = 3,
Middle = 4,

tromp (Contributor) commented Jun 11, 2020

Do we really need a linked list with 5 different variants?
Doesn't the database readily support maps from 64-bit ints to 64-bit ints?
We could map the MMR leaf position of an NRD kernel to the next lower leaf position of an NRD kernel with the same top k bits, if any. Besides that, we have a 2^k entry map giving the largest MMR leaf position of any NRD kernel with those top k bits.
I guess we could start with k=8 and it would take a long time before NRD kernel usage increases to the point where we'd want to migrate to k=16 in order to keep the lists relatively short.
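A rough in-memory sketch of this suggestion, with plain HashMaps standing in for the u64 -> u64 database tables and the bucket taken as the top 8 bits (k=8) of the kernel excess; all names here are hypothetical:

use std::collections::HashMap;

/// Bucket an NRD kernel excess by its top 8 bits (k = 8): just the first byte.
fn bucket_of(excess: &[u8; 33]) -> u64 {
    excess[0] as u64
}

#[derive(Default)]
struct BucketedKernelPos {
    /// Most recent NRD kernel MMR leaf pos per bucket (at most 2^8 entries).
    head: HashMap<u64, u64>,
    /// NRD kernel leaf pos -> next lower NRD kernel leaf pos in the same bucket.
    prev: HashMap<u64, u64>,
}

impl BucketedKernelPos {
    /// Record a new NRD kernel at `pos`, linking it to the previous head of its bucket.
    fn push(&mut self, excess: &[u8; 33], pos: u64) {
        let b = bucket_of(excess);
        if let Some(&prev_head) = self.head.get(&b) {
            self.prev.insert(pos, prev_head);
        }
        self.head.insert(b, pos);
    }

    /// Walk candidate positions in this bucket, newest first. Each candidate still
    /// needs its actual kernel excess checked, since buckets only match on top bits.
    fn walk<'a>(&'a self, excess: &[u8; 33]) -> impl Iterator<Item = u64> + 'a {
        let mut cur = self.head.get(&bucket_of(excess)).copied();
        std::iter::from_fn(move || {
            let pos = cur?;
            cur = self.prev.get(&pos).copied();
            Some(pos)
        })
    }
}
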
