
Full archive peer shard mapper #5337

Conversation

sstanculeanu
Contributor

Reasoning behind the pull request

  • new peer shard mapper needed for the new messenger

Proposed changes

  • created and integrated a new peer shard mapper

Testing procedure

  • will be tested with the full feat branch

Pre-requisites

Based on the Contributing Guidelines the PR author and the reviewers must check the following requirements are met:

  • was the PR targeted to the correct branch?
  • if this is a larger feature that probably needs more than one PR, is there a feat branch created?
  • if this is a feat branch merging, do all satellite projects have a proper tag inside go.mod?

small refactor on antiflood components creation
created a new full history peer shard mapper
created new interceptors for heartbeat to feed the new peer shard mapper
@sstanculeanu sstanculeanu self-assigned this Jun 12, 2023
Base automatically changed from full_archive_heartbeat_sender to feat/multiple_p2p_messengers June 14, 2023 09:54
@sstanculeanu sstanculeanu marked this pull request as ready for review June 14, 2023 09:56
@iulianpascalau iulianpascalau self-requested a review June 14, 2023 10:04
@ssd04 ssd04 self-requested a review June 14, 2023 13:48
Comment on lines 66 to 67
peerDenialEvaluator, err := preparePeerDenialEvaluators(networkComponents, processComponents)
if err != nil {
Contributor

so we don't need to return a full archive peer denial evaluator and set it when creating NewNode?

Contributor Author

nope.. the peer denial evaluator at node level is used only to check if a peer is denied, which relies on 2 components shared by both the regular denial evaluator and the full archive one.. so the response would be the same

return bicf.container.Add(identifierHeartbeat, interceptor)
}

func (bicf *baseInterceptorsContainerFactory) createOneHeartbeatV2Interceptor(
Contributor

Suggested change
func (bicf *baseInterceptorsContainerFactory) createOneHeartbeatV2Interceptor(
func (bicf *baseInterceptorsContainerFactory) createHeartbeatV2Interceptor(

?

return bicf.container.Add(identifier, interceptor)
}

func (bicf *baseInterceptorsContainerFactory) createOnePeerShardInterceptor(
Contributor

Suggested change
func (bicf *baseInterceptorsContainerFactory) createOnePeerShardInterceptor(
func (bicf *baseInterceptorsContainerFactory) createPeerShardInterceptor(

?

common/disabled/cache.go (outdated, resolved)
epochStart/bootstrap/fromLocalStorage.go (outdated, resolved)
epochStart/bootstrap/process.go (outdated, resolved)
epochStart/bootstrap/storageProcess.go (outdated, resolved)
factory/processing/processComponents.go (outdated, resolved)
@@ -53,7 +53,8 @@ func createTestProcessorNodeAndTrieStorage(
TrieStore: mainStorer,
GasScheduleMap: createTestGasMap(),
})
_ = node.Messenger.CreateTopic(common.ConsensusTopic+node.ShardCoordinator.CommunicationIdentifier(node.ShardCoordinator.SelfId()), true)
_ = node.MainMessenger.CreateTopic(common.ConsensusTopic+node.ShardCoordinator.CommunicationIdentifier(node.ShardCoordinator.SelfId()), true)
_ = node.FullArchiveMessenger.CreateTopic(common.ConsensusTopic+node.ShardCoordinator.CommunicationIdentifier(node.ShardCoordinator.SelfId()), true)
Contributor

not required

If this doesn't panic, do we need to close all messengers in the defer functions? (valid for all occurrences)
Maybe create a function on the test processor node that closes all messengers?

Contributor Author

the test processor node already has a Close() method that closes all messengers.. updated tests to call it

common/constants.go (outdated, resolved)
node/nodeHelper.go (outdated, resolved)
node/nodeHelper.go (outdated, resolved)
@@ -254,7 +254,7 @@ func testNodeRequestInterceptTrieNodesWithMessengerNotSyncingShouldErr(t *testin
go func() {
// sudden close of the resolver node after just 2 seconds
time.Sleep(time.Second * 2)
_ = nResolver.Messenger.Close()
_ = nResolver.MainMessenger.Close()
Contributor

Suggested change
_ = nResolver.MainMessenger.Close()
nResolver.Close()

?

Contributor Author

updated all tests

@codecov

codecov bot commented Jun 15, 2023

Codecov Report

Patch coverage: 83.41% and project coverage change: +0.07% 🎉

Comparison is base (c28cf02) 80.26% compared to head (cbeb97d) 80.34%.

❗ Current head cbeb97d differs from pull request most recent head 11cdfc9. Consider uploading reports for the commit 11cdfc9 to get more accurate results

Additional details and impacted files
@@                       Coverage Diff                        @@
##           feat/multiple_p2p_messengers    #5337      +/-   ##
================================================================
+ Coverage                         80.26%   80.34%   +0.07%     
================================================================
  Files                               695      695              
  Lines                             90244    90370     +126     
================================================================
+ Hits                              72432    72604     +172     
+ Misses                            12631    12599      -32     
+ Partials                           5181     5167      -14     
Impacted Files Coverage Δ
epochStart/bootstrap/fromLocalStorage.go 49.15% <0.00%> (-1.14%) ⬇️
factory/network/networkComponentsHandler.go 71.65% <0.00%> (ø)
...tiflood/factory/p2pAntifloodAndBlacklistFactory.go 73.98% <0.00%> (-3.12%) ⬇️
node/nodeHelper.go 71.11% <59.37%> (-6.84%) ⬇️
factory/processing/processComponentsHandler.go 81.00% <63.63%> (-0.30%) ⬇️
epochStart/bootstrap/process.go 84.31% <74.28%> (-0.52%) ⬇️
factory/network/networkComponents.go 87.95% <78.12%> (+1.98%) ⬆️
...ptorscontainer/baseInterceptorsContainerFactory.go 77.51% <87.83%> (+2.77%) ⬆️
factory/processing/processComponents.go 82.38% <94.91%> (-0.12%) ⬇️
epochStart/bootstrap/common.go 100.00% <100.00%> (ø)
... and 5 more

... and 3 files with indirect coverage changes


}

return container, nil
if args.NodeOperationMode == p2p.FullArchiveMode {
err = interceptorsContainerFactory.AddShardTrieNodeInterceptors(fullArchiveContainer)
Contributor

can be left as is, but we do not support trie sync in an arbitrary epoch. Debatable whether we will support it at some point.

@sstanculeanu sstanculeanu merged commit 1e22b2d into feat/multiple_p2p_messengers Jun 16, 2023
4 checks passed
@sstanculeanu sstanculeanu deleted the process_components_for_fullarchive_messenger branch June 16, 2023 09:21
3 participants