
feat!: improve block add where many orphan chain tips existed #5763

Merged

Conversation

@hansieodendaal (Contributor) commented on Sep 11, 2023

Description

Improved block add performance by fixing the previously inefficient retrieval of the strongest orphan chain header.

The previous algorithm retrieved all orphan blocks for all orphan chain tips without evaluating the total accumulated difficulty. This PR adds the total accumulated difficulty to the orphan chain tips db so that only the strongest orphan chain headers need to be retrieved.
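
For context only, a minimal Rust sketch of the idea behind the change; the OrphanTipIndex type, the fetch_strongest helper and the u128 difficulty field are illustrative assumptions, not the actual chain_storage API:

```rust
use std::collections::HashMap;

type BlockHash = [u8; 32];

#[derive(Clone, Debug)]
struct OrphanChainTip {
    hash: BlockHash,
    total_accumulated_difficulty: u128, // stand-in for the real difficulty type
}

#[derive(Default)]
struct OrphanTipIndex {
    tips: HashMap<BlockHash, OrphanChainTip>,
}

impl OrphanTipIndex {
    // Record a tip together with its total accumulated difficulty.
    fn insert(&mut self, tip: OrphanChainTip) {
        self.tips.insert(tip.hash, tip);
    }

    // Return only the tips sharing the highest total accumulated difficulty;
    // in practice this is one tip, very seldom two.
    fn fetch_strongest(&self) -> Vec<OrphanChainTip> {
        let max = match self.tips.values().map(|t| t.total_accumulated_difficulty).max() {
            Some(m) => m,
            None => return Vec::new(),
        };
        self.tips
            .values()
            .filter(|t| t.total_accumulated_difficulty == max)
            .cloned()
            .collect()
    }
}
```

With the difficulty stored alongside each tip, selecting the strongest tips becomes a scan over the tip index rather than a reconstruction of every orphan chain.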

Block add times improved drastically, by multiple orders of magnitude, in cases where many orphan chain tips existed: only one, or very seldom two, orphan chain tips would share the same total accumulated difficulty, which minimizes the number of orphan blocks that must be retrieved to construct the orphan chain headers.

These symptoms were aggravated when candidate blocks were solved faster than the base node could process them, resulting in a negative spiral: even more orphan chain tips were added while the rate at which blocks were added to the blockchain slowed down correspondingly.

Motivation and Context

See above

How Has This Been Tested?

  • New unit test added
  • System-level testing on Igor

What process can a PR reviewer use to test or verify this change?

Code walk-through and system-level testing

Breaking Changes

  • [ ] None
  • [x] Requires data directory on base node to be deleted
  • [ ] Requires hard fork
  • [ ] Other - Please specify

BREAKING CHANGE: The blockchain database should be deleted and then re-synced from the network.

@github-actions

Test Results (CI)

1 201 tests   1 201 ✔️   10m 19s ⏱️
   38 suites       0 💤
    1 files        0 ❌

Results for commit f145b1b.

@ghpbot-tari-project added the P-acks_required (Process - Requires more ACKs or utACKs) and P-reviews_required (Process - Requires a review from a lead maintainer to be merged) labels on Sep 11, 2023
@github-actions

Test Results (Integration tests)

 2 files  + 2   11 suites  +11   22m 39s ⏱️ + 22m 39s
27 tests  +27   26 ✔️ +26   0 💤 ±0   1 ❌ +1
28 runs   +28   27 ✔️ +27   0 💤 ±0   1 ❌ +1

For more details on these failures, see this check.

Results for commit f145b1b. ± Comparison against base commit 45c20a3.

@SWvheerden (Collaborator) left a comment

Having read-write-read-write locks is going to be worse for performance than just having a single write lock.

While you can have multiple read locks, as soon as you ask for a single write lock, all new lock requests are held back and you continue on with that single write lock once all reads are done. The read lock instructions are so fast that it's not a real problem to just use the write logic.
The code is now much more complex for what I would argue is slower performance; since we specifically care about single block add performance, this will be slower as you pause processing of the block to await locks.
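
For illustration, a minimal sketch of the two locking patterns being discussed, using std::sync::RwLock over a dummy store rather than the actual BlockchainDatabase locking:

```rust
use std::sync::RwLock;

// Single write lock for the whole operation: readers are blocked once and no
// other writer can interleave between the check and the insert.
fn add_block_single_write_lock(db: &RwLock<Vec<u64>>, block: u64) {
    let mut guard = db.write().unwrap();
    if !guard.contains(&block) {
        guard.push(block);
    }
}

// Read lock first, then drop it and re-acquire a write lock: the state checked
// under the read lock can change before the write lock is granted, and the
// block add pauses while it waits for the write lock.
fn add_block_read_then_write(db: &RwLock<Vec<u64>>, block: u64) {
    let already_present = db.read().unwrap().contains(&block);
    if !already_present {
        db.write().unwrap().push(block);
    }
}
```

The first variant holds the write lock for the whole operation; the second releases the read lock before re-acquiring a write lock, so the block add waits between the two acquisitions and the state it checked may have changed in the meantime.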

base_layer/core/src/chain_storage/blockchain_database.rs: 3 outdated review threads (resolved)
@@ -1891,8 +1931,21 @@ fn handle_possible_reorg<T: BlockchainBackend>(
chain_strength_comparer: &dyn ChainStrengthComparer,
candidate_block: Arc<Block>,
) -> Result<BlockAddResult, ChainStorageError> {
Collaborator:

Maybe not related to this PR, but BlockAddResult doesn't make sense to return here. For example, we return OrphanBlock when we are on the strongest chain.

-    let all_orphan_tips = db.fetch_all_orphan_chain_tips()?;
-    if all_orphan_tips.is_empty() {
+    let strongest_orphan_tips = db.fetch_strongest_orphan_chain_tips()?;
+    if strongest_orphan_tips.is_empty() {
         // we have no orphan chain tips, we have trimmed remaining headers, we are on the best tip we have, so lets
         // return ok
         return Ok(BlockAddResult::OrphanBlock);
Collaborator:

See previous comment, but this is a strange thing to return IMO

Collaborator:

I think it's the correct thing to return. If all the orphan tips are empty, it means the block you added is an orphan, hence the result here.
New blocks are first added as orphans, and from there an orphan chain tip is created if it can be linked to the main chain; then we swap to the highest chain. So any new block will always be an orphan chain tip first.
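
To make that flow concrete, a very rough sketch with illustrative types and names only (not the actual handle_possible_reorg code):

```rust
// Simplified model of the block-add flow described above.
#[derive(Debug)]
enum BlockAddResult {
    OrphanBlock,
    ChainReorg,
}

struct CandidateBlock {
    links_to_main_chain: bool,
    beats_main_chain: bool,
}

fn handle_possible_reorg_sketch(candidate: &CandidateBlock) -> BlockAddResult {
    // 1. Every new block is first stored as an orphan.
    // 2. If it can be linked back to the main chain, it becomes an orphan
    //    chain tip with a known total accumulated difficulty.
    if !candidate.links_to_main_chain {
        return BlockAddResult::OrphanBlock;
    }
    // 3. Only if one of the orphan chain tips is now stronger than the main
    //    chain do we swap to it (reorg); otherwise the block stays an orphan.
    if candidate.beats_main_chain {
        BlockAddResult::ChainReorg
    } else {
        BlockAddResult::OrphanBlock
    }
}
```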

Contributor Author:

I agree with @SWvheerden on this

Collaborator:

I don't think you understand my point. The method name "swap_to_highest_chain" suggests it can be called at any time, but the value returned is tightly coupled to the caller context, i.e. it is returning a critical result that the caller "handle_possible_reorg" requires. This is a private method, so it's probably fine, but it's a code smell that could potentially cause a bug in future.

Contributor Author:

Issue created: #5765

base_layer/core/src/chain_storage/blockchain_database.rs: 3 outdated review threads (resolved)
@@ -552,6 +552,7 @@ mod test {
     };
     let hash = block_header.merge_mining_hash();
     append_merge_mining_tag(&mut block, hash).unwrap();
+    #[allow(clippy::redundant_clone)]
     let mut block_header2 = block_header.clone();
Collaborator:

Suggested change:
-    let mut block_header2 = block_header.clone();
+    let mut block_header2 = block_header;

Contributor Author:

The clone and the #[allow] are there to satisfy cargo clippy --all-targets --all-features -- -D warnings

@@ -552,6 +552,7 @@ mod test {
     };
     let hash = block_header.merge_mining_hash();
     append_merge_mining_tag(&mut block, hash).unwrap();
+    #[allow(clippy::redundant_clone)]
Collaborator:

Suggested change:
-    #[allow(clippy::redundant_clone)]

Contributor Author:

^^

@SWvheerden SWvheerden merged commit 19b3f21 into tari-project:development Sep 13, 2023
12 checks passed
@hansieodendaal hansieodendaal deleted the ho_improve_block_add branch September 14, 2023 08:35