
Revamp the Sharding Client Entry Points / Architecture #126

Closed
rauljordan opened this issue May 18, 2018 · 6 comments

@rauljordan
Contributor

Hi all,

This is an issue that expands upon #122 to restructure our sharding client effectively. We need to leverage the first-class concurrency Go offers and allow for more modularity in the services attached to our running clients.

This is a very big topic requiring extensive discussion and design, so I propose a simple PR to get things started.

Requirements

  • Refactor the entry point of the sharding command to instead take --nodetype="proposer" or --nodetype="notary" as a cli flag
  • Main entry point will launch a startShardingClient option that does the following:
    • Sets up all the basic config options for a sharding client in a simple, concise manner
    • Registers all services required by the sharding client, similar to how RegisterEthService does so in go-ethereum/cmd/utils/flags.go, depending on the command line flag: in this case, proposer or notary
  • Set up Notary and Proposer as implementations of a Service interface that defines methods such as .Start() and .Stop().
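A rough sketch of that last requirement, with stub Notary and Proposer types (go-ethereum's node.Service is the obvious model here; none of these names are final and all are assumptions):

```go
package main

import "fmt"

// Service is a lifecycle-managed component attached to the sharding node,
// modeled loosely on go-ethereum's node.Service interface.
type Service interface {
	Start() error
	Stop() error
}

// Notary is a stub Service implementation for notary clients.
type Notary struct{ running bool }

func (n *Notary) Start() error { n.running = true; return nil }
func (n *Notary) Stop() error  { n.running = false; return nil }

// Proposer is a stub Service implementation for proposer clients.
type Proposer struct{ running bool }

func (p *Proposer) Start() error { p.running = true; return nil }
func (p *Proposer) Stop() error  { p.running = false; return nil }

func main() {
	// The node only ever sees the Service interface, never the concrete type.
	services := []Service{&Notary{}, &Proposer{}}
	for _, s := range services {
		if err := s.Start(); err != nil {
			panic(err)
		}
	}
	fmt.Println("started", len(services), "services")
}
```

Because the node only depends on the interface, swapping or adding actor types later doesn't touch the node's lifecycle code.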

I can take ownership of this PR and will keep it simple.

As discussed in #122, this approach would allow the sharding client instance to manage the lifecycle of its services without needing to be aware of how they function under the hood.

Once these requirements are done, we can wrap up this issue. Then, we can begin exploring the Notary and Proposer service implementations in greater detail in separate issues and PRs, analyzing the event loops they will require as well as their p2p requirements.

Let me know your thoughts.

@rauljordan rauljordan self-assigned this May 18, 2018
@rauljordan rauljordan added this to To do in Validator Client via automation May 18, 2018
@rauljordan rauljordan added this to the Ruby milestone May 18, 2018
@terencechain
Member

Yes. I agree this is the next logical issue/PR to tackle. Once this is done, we can begin service implementations for notary and proposer. It's gonna be fun! Let me know if you need help with the PR

@nisdas
Member

nisdas commented May 19, 2018

Yeah, agreed, this is a good first step in order to tackle the issues raised in #122. What are the advantages of using a flag like --nodetype="proposer" instead of sharding-proposer as the entry point for sharding?

@prestonvanloon
Member

I agree with refactoring to support a modular node framework. Our existing code relies on a running geth node to connect to. These changes will make the sharding actors run as independent and self contained nodes.

@rauljordan
Contributor Author

Ok so here's how I've been approaching this:

In /cmd/geth/shardingcmd.go

func shardingNode(ctx *cli.Context) error {
	// configures a sharding-enabled node using the cli's context.
	shardingNode := sharding.NewNode(ctx)
	return shardingNode.Start()
}

Then, sharding.NewNode has the responsibility of registering all the config options of the cli, registering different sharding services, and managing their lifecycle.

Service Registration

Within this NewNode func, we register a few services depending on the cli flags: if utils.flags.ClientType is set to notary, we register a NewNotary service; otherwise, we register a NewProposer service. All of the services attached to this sharding node are managed and initialized within the node's .Start() function.
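A minimal sketch of that registration flow, with the flag value passed in directly and stub types standing in for the real NewNotary/NewProposer constructors (all names here are hypothetical, not the actual implementation):

```go
package main

import "fmt"

// Service is the lifecycle interface each registered sharding service satisfies.
type Service interface {
	Start() error
	Stop() error
}

// Stubs standing in for the real notary/proposer services.
type notaryService struct{}

func (notaryService) Start() error { return nil }
func (notaryService) Stop() error  { return nil }

type proposerService struct{}

func (proposerService) Start() error { return nil }
func (proposerService) Stop() error  { return nil }

// Node owns its services and manages their lifecycle.
type Node struct {
	services []Service
}

// NewNode registers services based on the client type flag value.
func NewNode(clientType string) *Node {
	n := &Node{}
	switch clientType {
	case "notary":
		n.services = append(n.services, notaryService{})
	default:
		n.services = append(n.services, proposerService{})
	}
	return n
}

// Start launches every registered service; the node never needs to know
// what each service does internally.
func (n *Node) Start() error {
	for _, s := range n.services {
		if err := s.Start(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	node := NewNode("notary")
	if err := node.Start(); err != nil {
		panic(err)
	}
	fmt.Println("running with", len(node.services), "service(s)")
}
```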

We define the Notary and Proposer services as protocols and, in their initialization, they set up a few different goroutines within their .Start() functions. Each of these sets up a ProtocolManager struct that satisfies an interface specifying access to p2p networking specific to the client type's functionality, access to SMC bindings, and more. Then, three event loops are started:

// Within the Notary/Proposer protocols' .Start() functions...
go protocolManager.StartGethRPC()
go protocolManager.StartP2P()
go protocolManager.StartMainLoop()

StartGethRPC() sets up a connection to a Geth node via an IPC connection and handles the logic of setting up the SMC on the Geth node as well as setting up the bindings to allow each protocol to call functions on the SMC.

StartP2P() will handle all of the shardp2p peer discovery, requests/responses to and from other nodes speaking the same protocol (either notary or proposer).

StartMainLoop() will handle the logic of being a notary or proposer depending on the protocol at hand. For notaries, this involves checking if the notary was selected as eligible in a period and more. For proposers, this involves listening to incoming transactions and processing them into collations that will then be submitted to the SMC via SMC bindings created and attached to the proposer's ProtocolManager.
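Putting the three loops together, a toy version of this structure might look like the following; the ProtocolManager method names come from the snippet above, while the WaitGroup-based coordination and the log messages are purely illustrative assumptions:

```go
package main

import (
	"fmt"
	"sync"
)

// ProtocolManager abstracts what both the notary and proposer protocols need:
// a Geth RPC/SMC connection, shardp2p networking, and an actor-specific loop.
type ProtocolManager interface {
	StartGethRPC()
	StartP2P()
	StartMainLoop()
}

// notaryManager is a stub ProtocolManager for illustration only.
type notaryManager struct {
	wg  sync.WaitGroup
	mu  sync.Mutex
	log []string
}

func (m *notaryManager) record(s string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.log = append(m.log, s)
}

func (m *notaryManager) StartGethRPC() {
	defer m.wg.Done()
	m.record("rpc: connected to geth over IPC, SMC bindings ready")
}

func (m *notaryManager) StartP2P() {
	defer m.wg.Done()
	m.record("p2p: shardp2p discovery running")
}

func (m *notaryManager) StartMainLoop() {
	defer m.wg.Done()
	m.record("main: checking notary eligibility each period")
}

func main() {
	m := &notaryManager{}
	m.wg.Add(3)
	// Within the protocol's .Start(): launch the three event loops.
	go m.StartGethRPC()
	go m.StartP2P()
	go m.StartMainLoop()
	m.wg.Wait()
	fmt.Println(len(m.log), "event loops ran")
}
```

A proposerManager would satisfy the same interface with its own StartMainLoop, which is where the shared-interface question below comes in.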

What This Achieves

This architecture is similar to what geth currently does to set up full/light nodes. It achieves separation of concerns between the sharding node and its underlying services. That is, everything related to notaries is contained within the notary package, and the notary will be responsible for handling the lifecycle of all the goroutines specific to it. The same goes for proposers.

Additionally, this allows us to define a clean Service interface and a ProtocolManager interface that let us stay extensible to further changes in the research, or add something new to our sharding client down the line, without major refactoring.

My concern with this architecture is that I may repeat myself a bit with the code between protocols, which is why I suggested having a single ProtocolManager interface that specifies common methods for both, with logic that can be overridden. However, both will need access to SMC bindings, which go protocolManager.StartGethRPC() will handle. Could I instead abstract this one level higher, at the sharding node level? Then, when registering a Notary protocol as a service to the sharding client, we could pass in a handler for SMC-related logic as an argument like:

shardingNode.Register(func() sharding.Service {
  return notary.NewNotary(shardingNode.ctx, shardingNode.SMCHandler)
})

This could keep things nice and abstract as we currently have them, but still trying to understand the best way to do this.

Thoughts on this? @prestonvanloon @terenc3t @nisdas @Magicking @enriquefynn?

@terencechain
Member

I'm a fan of attaching services as protocols to each actor (proposer/notary). What should a client do if he just wants to be an observer? In that case he wouldn't register a NewNotary or NewProposer service, but still needs to run protocolManager.StartP2P(). The observer will also want to go to a specific shard; we can get the shard number from the cli.

@rauljordan
Contributor Author

Yeah, this model allows us to do this easily via cli flags. If the client type is not set, then the node just becomes an observer.
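A sketch of that fallback, assuming a hypothetical helper that maps the --nodetype flag value to an actor role (the real flag handling would live in the cli setup, not a function like this):

```go
package main

import "fmt"

// clientTypeToActor picks the node's role from the --nodetype flag value.
// An empty flag means the node runs as a plain observer: no notary or
// proposer service is registered, but shardp2p still runs, optionally
// scoped to a shard number also taken from the cli.
func clientTypeToActor(nodetype string) string {
	switch nodetype {
	case "notary":
		return "notary"
	case "proposer":
		return "proposer"
	default:
		return "observer"
	}
}

func main() {
	fmt.Println(clientTypeToActor(""))         // observer
	fmt.Println(clientTypeToActor("notary"))   // notary
	fmt.Println(clientTypeToActor("proposer")) // proposer
}
```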
