Revamp the Sharding Client Entry Points / Architecture #126
Yes. I agree this is the next logical issue/PR to tackle. Once this is done, we can begin the service implementations for the `Notary` and `Proposer`.
Yeah, agreed. This is a good first step toward tackling the issues raised in #122. What are the advantages of using a flag like `--nodetype`?
I agree with refactoring to support a modular node framework. Our existing code relies on a running geth node to connect to. These changes will make the sharding actors run as independent and self-contained nodes.
Ok, so here's how I've been approaching this. In the client's entry point we'd have something like:

```go
func shardingNode(ctx *cli.Context) error {
	// configures a sharding-enabled node using the cli's context.
	shardingNode := sharding.NewNode(ctx)
	return shardingNode.Start()
}
```

**Service Registration**

Then, within this `shardingNode`, we register the services we want to run. We define the Notary and Proposer services as protocols and, in their initialization, they set up a few different goroutines within their `Start()` functions:

```go
// Within the Notary/Proposer protocols' .Start() functions...
go protocolManager.StartGethRPC()
go protocolManager.StartP2P()
go protocolManager.StartMainLoop()
```

**What This Achieves**

This architecture is similar to what geth currently does to set up full/light nodes. It achieves separation of concerns between the sharding node and its underlying services. That is, everything related to notaries is contained within the notary package, and the notary is responsible for handling the lifecycle of all the goroutines specific to it. The same goes for proposers. Additionally, this allows us to define a nice, common abstraction for each actor's services.

My concern with this architecture is perhaps repeating myself a bit with the code between protocols, which is why I suggested having a single `ProtocolManager` interface that specifies common methods for both, with logic that can be overwritten. However, both will need access to SMC bindings, which the sharding node can hand to each service at registration time:

```go
shardingNode.Register(func() {
	return notary.NewNotary(shardingNode.ctx, shardingNode.SMCHandler)
})
```

This could keep things nice and abstract as we currently have them, but I'm still trying to understand the best way to do this. Thoughts on this? @prestonvanloon @terenc3t @nisdas @Magicking @enriquefynn
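For concreteness, here is a rough sketch of what that single interface could look like. The method names simply mirror the goroutines above, and `ProtocolManager`/`ServiceConstructor` are made-up names for the shape of the registration closure, not a settled API:

```go
package sharding

// ProtocolManager sketches the common methods both the Notary and Proposer
// protocols could expose, so the sharding node can drive either one without
// knowing its internals.
type ProtocolManager interface {
	StartGethRPC()  // dial the running geth node over RPC
	StartP2P()      // join the sharding p2p network
	StartMainLoop() // run the actor's main event loop
	Stop() error    // shut down every goroutine the protocol started
}

// ServiceConstructor mirrors the Register closure above: the node injects the
// pieces each protocol needs (its context, the SMC bindings) at build time.
type ServiceConstructor func() (ProtocolManager, error)
```

With that shape, `shardingNode.Register` only has to store constructors and invoke them when the node starts, which keeps the SMC handler wiring in one place.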
I'm a fan of attaching services as protocols to each actor (proposer/notary). What should a client do if he just wants to be an observer? In that case he wouldn't register either service.
Yeah, this model allows us to do this easily via cli flags. If the client type is not set, then the node just becomes an observer in this case.
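A minimal sketch of how that default could fall out of the cli flag, assuming the `--nodetype` flag from the requirements below; the `actorFromFlags` helper is hypothetical, not merged code:

```go
package sharding

import (
	cli "gopkg.in/urfave/cli.v1"
)

// actorFromFlags picks which sharding actor to run based on the --nodetype
// cli flag. The flag name and the "observer" fallback follow this thread;
// the helper itself is illustrative only.
func actorFromFlags(ctx *cli.Context) string {
	switch nodeType := ctx.GlobalString("nodetype"); nodeType {
	case "notary", "proposer":
		return nodeType
	default:
		// No client type set: the node simply runs as an observer.
		return "observer"
	}
}
```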
Hi all,
This is an issue that expands upon #122 to restructure our sharding client effectively. We need to leverage the first-class concurrency Go offers and allow for more modularity in the services attached to our running clients.
This is a very big topic requiring extensive discussion and design, so I propose a simple PR to get things started.
**Requirements**

- `--nodetype="proposer"` or `--nodetype="notary"` as a cli flag
- A `startShardingClient` option that registers the appropriate services, similar to how `RegisterEthService` does so in `go-ethereum/cmd/utils/flags.go`, depending on the command line flag: in this case, proposer or notary
- Services that implement a `Service` interface satisfying certain methods such as `.Start()` and `.Stop()`.

I can take hold of this PR and I'll keep it simple.
As discussed in #122, this approach would allow the sharding client instance to manage the lifecycle of its services without needing to be aware of how they function under the hood.
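For illustration, here is a minimal sketch of that lifecycle ownership, assuming the `Service` interface with `.Start()`/`.Stop()` from the requirements above; `ShardingClient` and its fields are hypothetical names, not the actual implementation:

```go
package sharding

import "fmt"

// Service is the common interface from the requirements: every sharding
// actor (notary, proposer, observer) exposes Start/Stop so the client can
// manage lifecycles without knowing what runs under the hood.
type Service interface {
	Start() error
	Stop() error
}

// ShardingClient owns the registered services; illustrative only.
type ShardingClient struct {
	services []Service
}

// Register attaches a service, similar in spirit to how RegisterEthService
// attaches the eth service in go-ethereum/cmd/utils/flags.go.
func (c *ShardingClient) Register(s Service) {
	c.services = append(c.services, s)
}

// Start launches every registered service, unwinding the ones already
// running if a later one fails, so lifecycle handling lives in one place.
func (c *ShardingClient) Start() error {
	var started []Service
	for _, s := range c.services {
		if err := s.Start(); err != nil {
			for _, running := range started {
				running.Stop()
			}
			return fmt.Errorf("could not start service: %v", err)
		}
		started = append(started, s)
	}
	return nil
}
```

With that shape, the `--nodetype` flag only decides which constructors get registered; everything after that point is uniform.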
Once these requirements are done, we can wrap up this issue. Then, we can begin exploring the `Notary` and `Proposer` service implementations in greater detail in separate issues and PRs, analyzing the event loops they will require as well as their p2p requirements.

Let me know your thoughts.
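As a starting point for that discussion, here is a speculative sketch of the general shape such an event loop might take; the event channel, ticker period, and names are placeholders rather than a proposed design:

```go
package notary

import (
	"context"
	"time"
)

// mainLoop sketches the shape of an actor's event loop: react to incoming
// shard events, do periodic work on a ticker, and exit cleanly when the
// parent sharding node cancels the context to stop the service.
func mainLoop(ctx context.Context, events <-chan interface{}) {
	ticker := time.NewTicker(10 * time.Second) // placeholder period
	defer ticker.Stop()

	for {
		select {
		case ev := <-events:
			_ = ev // handle a p2p or SMC event for this actor
		case <-ticker.C:
			// periodic work, e.g. polling the SMC via the geth RPC connection
		case <-ctx.Done():
			return // the sharding node asked this service to stop
		}
	}
}
```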