
validation: sync chainstate to disk after syncing to tip #15218


Closed
andrewtoth wants to merge 2 commits from the flush-after-ibd branch

Conversation

andrewtoth
Contributor

@andrewtoth andrewtoth commented Jan 20, 2019

After syncing the chainstate to tip, the chainstate is not persisted to disk until 24 hours after startup. This can cause an issue where the unpersisted chainstate must be resynced if bitcoind is not cleanly shut down. With a large enough dbcache, it's possible the entire chainstate from genesis would have to be resynced.

This fixes the issue by persisting the chainstate to disk right after syncing to tip, without clearing the utxo cache (using the Sync method introduced in #17487). It does this by scheduling a call to the new function SyncCoinsTipAfterChainSync every 30 seconds. The function checks that the node is out of IBD, then checks that no new block has been added since the last call, and finally checks that no blocks are currently being downloaded from peers. If all these conditions are met, the chainstate is persisted and the function is no longer scheduled.

Mitigates #11600.
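
For illustration, here is a minimal sketch of the scheduling logic described above, pieced together from the code excerpts quoted later in this thread (SyncCoinsTipAfterChainSync, SYNC_CHECK_INTERVAL, and the NodeContext members come from those excerpts). The PeersHaveBlocksInFlight helper is a hypothetical placeholder for the "no blocks currently being downloaded" condition, and the final non-clearing write is assumed to be the Sync call from #17487, so treat this as a sketch rather than the exact patch:

#include <chrono>
#include <node/context.h>
#include <scheduler.h>
#include <sync.h>
#include <validation.h>

static constexpr auto SYNC_CHECK_INTERVAL{std::chrono::seconds{30}};

// Hypothetical placeholder for "are peers still delivering blocks to us?".
static bool PeersHaveBlocksInFlight(const node::NodeContext&) { return false; }

static void SyncCoinsTipAfterChainSync(const node::NodeContext& node)
{
    static int last_chain_height{-1};
    const auto reschedule{[&node] {
        node.scheduler->scheduleFromNow([&node] { SyncCoinsTipAfterChainSync(node); }, SYNC_CHECK_INTERVAL);
    }};

    // Still in initial block download: check again in 30 seconds.
    if (node.chainman->IsInitialBlockDownload()) {
        reschedule();
        return;
    }

    // A block was connected since the last check: check again in 30 seconds.
    const int current_height{WITH_LOCK(node.chainman->GetMutex(), return node.chainman->ActiveHeight())};
    if (last_chain_height != current_height) {
        last_chain_height = current_height;
        reschedule();
        return;
    }

    // Peers are still sending us blocks: check again in 30 seconds.
    if (PeersHaveBlocksInFlight(node)) {
        reschedule();
        return;
    }

    // All conditions met: write the chainstate to disk without emptying the
    // in-memory UTXO cache, and do not reschedule, so this runs only once.
    WITH_LOCK(node.chainman->GetMutex(), node.chainman->ActiveChainstate().CoinsTip().Sync());
}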

@laanwj
Member

laanwj commented Jan 21, 2019

Concept ACK, but I think IsInitialBlockDownload is the wrong place to implement this: it's a query function, and having it suddenly spawn a thread that flushes is unexpected.

Would be better to implement it closer to the validation logic and database update logic itself.

@andrewtoth
Contributor Author

@laanwj Good point. I refactored to move this behaviour to ActivateBestChain in an area where periodic flushes are already expected.

@laanwj
Member

laanwj commented Jan 22, 2019

@laanwj Good point. I refactored to move this behaviour to ActivateBestChain in an area where periodic flushes are already expected.

Thanks, much better!

@sdaftuar
Member

I'm not really a fan of this change -- the problem described in #11600 is from an unclean shutdown (ie system crash), where our recovery code could take a long time (but typically would be much faster than doing a -reindex to recover, which is how our code used to work).

This change doesn't really solve that problem, it just changes the window in which an unclean shutdown could occur (reducing it at most by 24 hours). But extra flushes, particularly during initial sync, aren't obviously a good idea, since they harm performance. (Note that we leave IBD before we've synced all the way to the tip, I think once we're within a day or two?)

Because we flush every day anyway, it's hard for me to say that this is really that much worse, performance-wise (after all we don't currently support a node configuration where the utxo is kept entirely cached). But I'm not sure this solves anything either, and a change like this would have to be reverted if, for instance, we wanted to make the cache actually more useful on startup (something I've thought we should do for a while). So I think I'm a -0 on this change.

@andrewtoth
Contributor Author

andrewtoth commented Jan 23, 2019

@sdaftuar This change also greatly improves the common workflow of spinning up a high performance instance to sync, then immediately shutting it down and using a cheaper one. Currently, you have to enter it and do a clean shutdown instead of just terminating. Similarly, when syncing to an external drive, you can now just unplug the drive or turn off the machine when finished.

I would argue that moving the window to 0 hours directly after initial sync is an objective improvement. There is a lot of unpersisted data at risk directly after the sync, so why risk another 24 hours? After that, the most users would lose is 24 hours' worth of blocks to roll back, instead of 10 years' worth. Also, this change does not do any extra flushes during initial sync, only one after it.

I can't speak to your last point about changing the way we use the cache, since I don't know what your ideas are.

@sdaftuar
Member

Currently, you have to enter it and do a clean shutdown instead of just terminating.

@andrewtoth We already support this (better, I think) with the -stopatheight argument, no?

I don't really view data that is in memory as "at risk"; I view it as a massive performance optimization that will allow a node to process new blocks at the fastest possible speed while the data hasn't yet been flushed. I also don't feel very strongly about this for the reasons I gave above, so if others want this behavior then so be it.

@sipa
Member

sipa commented Jan 23, 2019

@sdaftuar Maybe this is a bit of a different discussion, but there is another option: namely, supporting flushing the dirty state to disk, but without wiping it from the cache. Based on our earlier benchmarking, we wouldn't want to do this purely for maximizing IBD performance, but it could be done at specific times to minimize losses in case of crashes (the once per day flush for example, and also this IBD-is-finished one).
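
For illustration (not part of this patch): the difference is between a flush that writes the dirty cache entries and empties the cache, and a non-clearing variant that writes them but keeps the cache warm, which is what the Sync method later added in #17487 does at the CCoinsViewCache level (per the PR description). A minimal sketch, with the wrapper function being an assumption rather than existing code:

#include <coins.h>

// Sketch: write dirty coins to the backing view, optionally keeping the cache.
bool WriteCoinsToDisk(CCoinsViewCache& cache, bool keep_cache_warm)
{
    // Sync() writes dirty entries to the parent view but keeps them cached,
    // so block validation right after the write stays as fast as before it.
    // Flush() writes them and drops the whole cache, so the next blocks are
    // validated against a cold cache (today's periodic-flush behaviour).
    return keep_cache_warm ? cache.Sync() : cache.Flush();
}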

@sdaftuar
Member

@sipa Agreed, I think that would make a lot more sense as a first-pass optimization for the periodic flushes, and would work better for this purpose as well.

@gmaxwell
Contributor

Currently, you have to enter it and do a clean shutdown instead of just terminating.

Well with this, if you "just terminate" you're going to end up with a replay of several days' blocks at start, which is still ugly, even if less bad via this change.

As an aside, if you actually shut off the computer at any time during IBD, you'll likely completely corrupt the state and need to reindex, because we don't use fsync during IBD for performance reasons.

We really need to get background writing going, so that our writes are never more than (say) a week of blocktime behind... but that is a much bigger change, so I don't suggest "just do that instead", though it would make the change here completely unnecessary.

Might it be better to trigger the flush the first time it goes 30 seconds without connecting a block and there are no queued transfers, from the scheduler thread?

@andrewtoth
Contributor Author

andrewtoth commented Jan 25, 2019

@andrewtoth We already support this (better, I think) with the -stopatheight argument, no?

@sdaftuar Ahh, I never considered using that for this purpose. Thanks!

@gmaxwell It might still be ugly to have a replay of a few days, but much better than making everything unusable for hours.

There are comments from several people in this PR about adding background writing and writing dirty state to disk without wiping the cache. This change wouldn't affect either of those improvements, and is an improvement by itself in the interim.

As for moving this to the scheduler thread, I think this is better since it happens in a place where periodic flushes are already expected. Also, checking every 30 seconds for a new block wouldn't work if, for instance, the network cuts out for a few minutes.

@sipa
Member

sipa commented Jan 25, 2019

@andrewtoth The problem is that right now, causing a flush when exiting IBD will (temporarily) kill your performance right before finishing the sync (because it leaves you with an empty cache). If instead it was a non-clearing flush, there would be no such downside.

@sdaftuar
Member

My experiment in #15265 has changed my view on this a bit -- now I think that we might as well make a change like this for now, but should change the approach slightly to do something like @gmaxwell's proposal so that we don't trigger the flush before we are done syncing:

Might it be better to trigger the flush the first time it goes 30 seconds without connecting a block and there are no queued transfers, from the scheduler thread?

@andrewtoth andrewtoth force-pushed the flush-after-ibd branch 2 times, most recently from f1be35e to 442db9d on February 10, 2019 20:29
@andrewtoth
Contributor Author

andrewtoth commented Feb 10, 2019

@sdaftuar @gmaxwell I've updated this to check every 30 seconds on the scheduler thread whether there has been an update to the active chain height. This only actually checks after IsInitialBlockDownload returns false, which happens once the latest block is within a day of the current time.

I'm not sure how to check if there are queued transfers. If this is not sufficient, some guidance on how to do that would be appreciated.

@andrewtoth andrewtoth force-pushed the flush-after-ibd branch 2 times, most recently from 79a9ed2 to 3abbfb0 on February 11, 2019 01:43
Contributor

@mzumsande mzumsande left a comment

Concept ACK

While this one-time sync after IBD should help in some situations, I'm not sure that it completely resolves #11600 (I encountered this PR while looking into possible improvements to ReplayBlocks()).
After all, there are several other situations in which a crash / unclean shutdown could lead to extensive replays (e.g. during IBD) that this PR doesn't address.

@DrahtBot
Contributor

DrahtBot commented Jun 3, 2024

🚧 At least one of the CI tasks failed. Make sure to run all tests locally, according to the
documentation.

Possibly this is due to a silent merge conflict (the changes in this pull request being
incompatible with the current code in the target branch). If so, make sure to rebase on the latest
commit of the target branch.

Leave a comment here, if you need help tracking down a confusing failure.

Debug: https://github.com/bitcoin/bitcoin/runs/25710459287

@andrewtoth
Contributor Author

@mzumsande @chrisguida thank you for your reviews and suggestions. I've addressed them and rebased.

Comment on lines +1120 to +1127
LOCK(node.chainman->GetMutex());
if (node.chainman->IsInitialBlockDownload()) {
    LogDebug(BCLog::COINDB, "Node is still in IBD, rescheduling post-IBD chainstate disk sync...\n");
    node.scheduler->scheduleFromNow([&node] {
        SyncCoinsTipAfterChainSync(node);
    }, SYNC_CHECK_INTERVAL);
    return;
}
Member

@furszy furszy Jun 12, 2024

No need to lock the chainman mutex for IsInitialBlockDownload(). The function already locks it internally.

Still, I think we shouldn't use that. The more we lock cs_main, the more unresponsive the software becomes. We could use a combination of peerman.ApproximateBestBlockDepth() with a constant, like we do inside the desirable service flags logic (GetDesirableServiceFlags), or the peerman m_initial_sync_finished field.

Contributor Author

Point taken for moving the explicit lock after this check, since the lock is taken in IsInitialBlockDownload().

However, this check only runs once every 30 seconds. I don't see how it could possibly affect the responsiveness of the software. It is a very fast check, on the order of microseconds I would assume, once every 30 seconds.

Comment on lines +1131 to +1137
if (last_chain_height != current_height) {
    LogDebug(BCLog::COINDB, "Chain height updated since last check, rescheduling post-IBD chainstate disk sync...\n");
    last_chain_height = current_height;
    node.scheduler->scheduleFromNow([&node] {
        SyncCoinsTipAfterChainSync(node);
    }, SYNC_CHECK_INTERVAL);
    return;
Member

@furszy furszy Jun 12, 2024

Isn't this going to always reschedule the task on the first run?

Also, the active height refers to the latest connected block. It doesn't tell us we are up to date with the network; to know whether we are synced, we should use the best known header or call the ApproximateBestBlockDepth() function.

And thinking more about this: what about adjusting the check interval based on the distance between the active chain height and the best header height?

I know this could vary a lot, but something simple like "if the node is more than 400k blocks away, wait 5 or 10 minutes; if it is 100k blocks away, wait 3 or 5 minutes; and if it is less than that, wait 1 minute" would save a good number of unneeded checks on slow machines.
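
For illustration, a rough sketch of that adaptive interval; the thresholds pick one value from each suggested range, the helper name is hypothetical, and blocks_behind would come from the gap between the best known header and the active chain height:

#include <chrono>

// Hypothetical helper: pick the next check delay from the distance to the tip.
std::chrono::minutes NextSyncCheckDelay(int blocks_behind)
{
    if (blocks_behind > 400'000) return std::chrono::minutes{10};
    if (blocks_behind > 100'000) return std::chrono::minutes{5};
    return std::chrono::minutes{1};
}

The returned delay could then be passed to scheduleFromNow in place of the fixed SYNC_CHECK_INTERVAL.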

Contributor Author

Isn't this going to always reschedule the task on the first run?

Yes, but do you think this is a problem? It just makes sure the node has not connected any blocks for at least 30 seconds.

Also, the active height refers to the latest connected block. It doesn't tell us we are up-to-date with the network; To know if we are sync, should use the best known header or call to the ApproximateBestBlockDepth() function.

Doesn't the fact that IsInitialBlockDownload() returns false make this point moot? It checks that our latest block is at most 24 hours old.

And let's say this call is triggered before we are completely up to date with the network. All that happens is the chainstate is synced to disk, but the utxo cache is not cleared. So at most 24 hours of blocks (~144 blocks) will be downloaded and processed (still quickly, thanks to the cache), but not persisted to disk until the next periodic flush (24 hours later). I think this patch still achieves its goal, and there is no downside now with Sync.

would save a good number of unneeded checks in slow machines.

I think this is premature optimization. I don't think this check will be noticeable to the system or the user.

Contributor Author

@andrewtoth andrewtoth Jul 2, 2024

would save a good number of unneeded checks in slow machines.

@furszy Let's say on a slow machine IBD takes 48 hours to sync, and, being generous, each check takes 10ms (in reality I think it would be more than 2 orders of magnitude faster). The total number of checks is then 48 hours * 60 minutes * 2 (twice a minute) = 5,760 checks, and 5,760 * 10ms = 57.6 seconds. So on a 48-hour sync, even an excessively slow check adds less than a minute of extra time.

Member

@furszy furszy Jul 4, 2024

Ok, np.

Still, I know it is overkill if we only introduce it for what I'm suggesting in this PR, but I thought about adding a signal for the IBD completion state: furszy@85a050a. It might be useful if we ever add any other scenario apart from this one.

luke-jr pushed a commit to bitcoinknots/bitcoin that referenced this pull request Jun 13, 2024
Member

@furszy furszy left a comment

Code ACK 8887d28

@DrahtBot
Contributor

🚧 At least one of the CI tasks failed.
Debug: https://github.com/bitcoin/bitcoin/runs/26084905920

Hints

Make sure to run all tests locally, according to the documentation.

The failure may happen due to a number of reasons, for example:

  • Possibly due to a silent merge conflict (the changes in this pull request being
    incompatible with the current code in the target branch). If so, make sure to rebase on the latest
    commit of the target branch.

  • A sanitizer issue, which can only be found by compiling with the sanitizer and running the
    affected test.

  • An intermittent issue.

Leave a comment here, if you need help tracking down a confusing failure.

@andrewtoth
Contributor Author

Closing in favor of #30611.

@andrewtoth andrewtoth closed this Aug 8, 2024
@andrewtoth andrewtoth deleted the flush-after-ibd branch August 15, 2024 02:18
luke-jr pushed a commit to bitcoinknots/bitcoin that referenced this pull request Feb 22, 2025