Catch up filter headers and block headers in parallel #98
Conversation
Force-pushed from 90bea83 to fa55fcf
blockmanager.go (Outdated)
  b.newHeadersSignal.L.Lock()
- for !b.IsCurrent() {
+ for !(b.IsCurrent() || b.filterHeaderTip+wire.CFCheckptInterval <= b.headerTip) {
Already discussed IRL, but it might be worth reversing the order (again) to avoid DB hits as much as possible.
reversed.
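The reordering matters because Go's `||` short-circuits left to right: putting the cheap in-memory tip comparison first means the DB-backed `IsCurrent` call is skipped entirely while catching up. A minimal sketch of the idea, using stand-in types and a made-up checkpoint interval rather than neutrino's real ones:

```go
package main

import "fmt"

// CFCheckptInterval mirrors wire.CFCheckptInterval; the value here is hypothetical.
const CFCheckptInterval = 1000

// blockManager is a stripped-down stand-in for the real type.
type blockManager struct {
	filterHeaderTip uint32
	headerTip       uint32
	dbHits          int
}

// IsCurrent stands in for the real check, which touches the database.
func (b *blockManager) IsCurrent() bool {
	b.dbHits++
	return false
}

// keepWaiting reproduces the reversed loop condition: the cheap in-memory
// comparison runs first, so the DB-backed IsCurrent call is short-circuited
// whenever the filter header tip is a full checkpoint interval behind.
func keepWaiting(b *blockManager) bool {
	return !(b.filterHeaderTip+CFCheckptInterval <= b.headerTip || b.IsCurrent())
}

func main() {
	b := &blockManager{filterHeaderTip: 0, headerTip: 5000}
	fmt.Println("keep waiting:", keepWaiting(b), "db hits:", b.dbHits)
	// → keep waiting: false db hits: 0
}
```

With the original order, every pass through the wait loop would have paid for an `IsCurrent` lookup even when the tip comparison alone could decide.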
Force-pushed from fa55fcf to c08b16e
Now waits 100 CF intervals before starting catch-up.
Force-pushed from 6aa7b26 to 973863b
Reverted back to waiting only one interval. Instead, the latest commit will fetch the filter checkpoints up to the latest block checkpoint, avoiding doing this fetch every time the block headers advance. This means that we will reuse these checkpoints until
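The idea of fetching up to the latest hardcoded chain checkpoint, rather than only to the current block header tip, can be sketched as follows. The checkpoint heights and the `fetchTarget` helper are illustrative assumptions, not neutrino's actual data or API:

```go
package main

import "fmt"

// chainCheckpoints stands in for the hardcoded per-network block
// checkpoints; the heights here are made up for illustration.
var chainCheckpoints = []uint32{11111, 33333, 74000, 105000}

// fetchTarget returns the height to fetch filter header checkpoints up to:
// the latest hardcoded chain checkpoint when it is still ahead of the
// current block header tip, otherwise the tip itself. Fetching up to the
// checkpoint keeps the fetched result reusable while the block headers
// continue to advance, instead of refetching at each small tip height.
func fetchTarget(headerTip uint32) uint32 {
	last := chainCheckpoints[len(chainCheckpoints)-1]
	if headerTip < last {
		return last
	}
	return headerTip
}

func main() {
	fmt.Println(fetchTarget(50000))  // headers still behind → 105000
	fmt.Println(fetchTarget(200000)) // headers past the checkpoints → 200000
}
```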
@@ -225,7 +226,14 @@ func newBlockManager(s *ChainService) (*blockManager, error) {
 	if err != nil {
 		return nil, err
 	}
+	bm.filterHeaderTipHash = header.BlockHash()
+
+	// We must also ensure the filter header tip hash is set to the
👍
Tested this locally with multiple syncs from both a remote and a local node; didn't run into any major issues. Ensured that it was able to proceed after forced restarts, and it seemed to handle the restarts no problem. However, I think there are a few places where a quit signal isn't being properly threaded through, as I noticed after some attempts I was unable to kill the process.
When do you experience being unable to kill the process? I haven't been able to reproduce it myself, but a hunch I had was that it could be one of the loops where we are waiting for the … This PR shouldn't change the …
LGTM 🥑
Ready to go in after a rebase!
Force-pushed from 3bd50c0 to 1c339ed
Since we are only handling one filter type, we can simplify the code by making it clearly synchronous.
…val behind

This commit makes the cfhandler start fetching checkpointed filter headers immediately when they are lagging at least a CF checkpoint interval behind the block headers.
This commit makes the fetching of filter checkpoints go up to the hash of the latest chain checkpoint in case the block headers haven't caught up that far. This helps avoid doing a lot of filter checkpoint fetches up to small heights while the block headers are catching up.
Force-pushed from 1c339ed to 3fb5213
Rebased.
@halseth perhaps the source of the inability to quit while syncing is a result of using …
I think it should successfully quit after the sleep timeout is over, but I agree that's cleaner. Will add :)
LGTM 🕺
Catch up filter headers and block headers in parallel.
Also includes a fix for rollback of filter headers, and a proper fix to what was attempted in #95.