
feat: use shutdown height as sync stop height #11373

Merged (5 commits), May 22, 2024

Conversation

Longarithm (Member):

If one wants to take an old backup and run some blocks on top of its head, the node will first run header sync, whose default behaviour is to download **all** headers from the chain. If the backup is a couple of days old, this takes a while.

The solution is to use the `expected_shutdown` config, which stops header sync once the shutdown height is reached and lets the node download and process blocks. I tested this on my node and it worked nicely, allowing me to change the shutdown height on the fly.
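The idea can be sketched roughly as follows. Note this is an illustrative sketch, not the PR's actual code: `compute_stop_height`, its parameters, and the `Option<u64>` shape of the shutdown height are invented stand-ins for the real nearcore fields.

```rust
/// Illustrative sketch of the PR's idea: cap the header-sync target at the
/// configured shutdown height so the node stops requesting headers past it
/// and can move on to downloading and processing blocks.
///
/// `peer_highest_height` stands in for the best height reported by peers;
/// `shutdown_height` stands in for the (optional) `expected_shutdown` value.
fn compute_stop_height(peer_highest_height: u64, shutdown_height: Option<u64>) -> u64 {
    match shutdown_height {
        // With expected_shutdown set, header sync stops at that height.
        Some(h) => peer_highest_height.min(h),
        // Without it, sync all the way to the peers' head (default behaviour).
        None => peer_highest_height,
    }
}

fn main() {
    // Default: download all headers up to the peers' head.
    assert_eq!(compute_stop_height(1_000_000, None), 1_000_000);
    // expected_shutdown set below the head: stop early.
    assert_eq!(compute_stop_height(1_000_000, Some(150_000)), 150_000);
    // expected_shutdown above the head: it has no effect yet.
    assert_eq!(compute_stop_height(1_000_000, Some(2_000_000)), 1_000_000);
    println!("ok");
}
```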

@Longarithm Longarithm requested a review from wacban May 21, 2024 18:05
@Longarithm Longarithm requested a review from a team as a code owner May 21, 2024 18:05
wacban (Contributor) left a comment:

I like the idea but expected_shutdown is taken already and has a different meaning. Can you make this a separate mutable config?

```diff
@@ -147,8 +154,10 @@ impl HeaderSync {
         self.syncing_peer = None;
         // Pick a new random peer to request the next batch of headers.
         if let Some(peer) = highest_height_peers.choose(&mut thread_rng()).cloned() {
             // TODO: This condition should always be true, otherwise we can already complete header sync.
             if peer.highest_block_height > header_head.height {
                 let shutdown_height = self.shutdown_height.get();
```
Inline review comment (Contributor):

nit: I would put the unwrap_or here.
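For illustration, the nit could look something like the following: fold the `Option` into a concrete height right where it is read, via `unwrap_or`. This is a hypothetical sketch, not the PR's actual code; `stop_height` and its parameters are invented names.

```rust
/// Hypothetical sketch of the reviewer's nit: apply `unwrap_or` at the read
/// site so the rest of the sync logic deals with a plain height, not an Option.
fn stop_height(peer_highest_height: u64, shutdown_height: Option<u64>) -> u64 {
    // No configured shutdown height behaves like "no cap" (u64::MAX),
    // so the peer's head is the effective stop height.
    shutdown_height.unwrap_or(u64::MAX).min(peer_highest_height)
}

fn main() {
    assert_eq!(stop_height(500, None), 500);      // no cap: sync to the head
    assert_eq!(stop_height(500, Some(300)), 300); // capped by expected_shutdown
    println!("ok");
}
```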

Longarithm (Member, Author):

> I like the idea but expected_shutdown is taken already and has a different meaning. Can you make this a separate mutable config?

I don't introduce a new parameter; I actually reuse the existing `expected_shutdown` from `config.json`.
I think it is a perfect use case, because we don't want to sync past that height anyway.
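For context, a minimal sketch of how the knob might appear in `config.json`. The exact field shape and placement are assumptions based on this discussion, and `150000` is an arbitrary example height; check the nearcore configuration docs for the authoritative format.

```json
{
  "expected_shutdown": 150000
}
```

Because the value is read through a mutable config handle (`self.shutdown_height.get()` in the diff above), it can be updated while the node is running, which is what allows changing the shutdown height "on the fly" as described.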


codecov bot commented May 22, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 71.28%. Comparing base (899cb30) to head (4e6cbf5).
Report is 10 commits behind head on master.

Additional details and impacted files
```
@@            Coverage Diff             @@
##           master   #11373      +/-   ##
==========================================
+ Coverage   71.09%   71.28%   +0.18%     
==========================================
  Files         783      784       +1     
  Lines      156826   157682     +856     
  Branches   156826   157682     +856     
==========================================
+ Hits       111492   112400     +908     
+ Misses      40515    40437      -78     
- Partials     4819     4845      +26     
```
| Flag | Coverage | Δ |
| --- | --- | --- |
| backward-compatibility | 0.24% <0.00%> | -0.01% ⬇️ |
| db-migration | 0.24% <0.00%> | -0.01% ⬇️ |
| genesis-check | 1.38% <0.00%> | -0.01% ⬇️ |
| integration-tests | 37.18% <73.33%> | +0.04% ⬆️ |
| linux | 68.76% <100.00%> | -0.05% ⬇️ |
| linux-nightly | 70.73% <100.00%> | +0.20% ⬆️ |
| macos | 52.35% <100.00%> | +0.13% ⬆️ |
| pytests | 1.60% <0.00%> | -0.01% ⬇️ |
| sanity-checks | 1.39% <0.00%> | -0.01% ⬇️ |
| unittests | 65.67% <100.00%> | +0.16% ⬆️ |
| upgradability | 0.29% <0.00%> | -0.01% ⬇️ |

Flags with carried forward coverage won't be shown.


wacban (Contributor) left a comment:

LGTM

@Longarithm Longarithm added this pull request to the merge queue May 22, 2024

wacban commented May 22, 2024

> I like the idea but expected_shutdown is taken already and has a different meaning. Can you make this a separate mutable config?
>
> I don't introduce new parameter, I actually reuse existing expected_shutdown from config.json. I think it is perfect usecase, because we don't want to sync after it anyway.

Gotcha, my idea was to have one config for one meaning; then you have more control over it, since you can set each independently. Anyway, since it's only for debugging, I'm totally fine with this as is.

Merged via the queue into near:master with commit 26a32a9 May 22, 2024
28 of 29 checks passed
@Longarithm Longarithm deleted the shutdown branch May 22, 2024 09:32
marcelo-gonzalez pushed a commit to marcelo-gonzalez/nearcore that referenced this pull request May 23, 2024